Hacker News

As a counterpoint, Microsoft has done very well financially by holding on to backwards compatibility.

In the OSS world we tend to look at rewrites as "this is for me, who gives a shit about the customer", but for most customer-facing (paying) software the expectations are: do not break the application, and do not break the customer's expectations.

This said, X was and is a mess.



My point isn't so much that ditching backwards compatibility is always right, even when it lets you shed lots of code, but that you can often massively reduce code size that way. Whether that's the right thing to do is often a fine balance, because it depends a great deal on how many people actually care about the old ways, and sometimes people get it very wrong in either direction.

Sometimes the 10x more code is actually worth it. But you should be aware when it is 10x more code, and make sure it is worth it.


Rewriting the code isn't the hard part...

The QA is the hard part.

Developer: "Rewriting this line of code shouldn't change anything"

Narrator: It did


Drastically reducing the codebase if anything tends to make QA a lot easier ;) (though depending on your needs you may or may not like the outcome)


You can hold on to compatibility by providing a simple 1-bit API from the '70s, an 8-bit linear-framebuffer API with a palette and scrolling, then a 3D API on 32-bit ARGB. Three simple APIs may be simpler to maintain than one unified API.



