Quoth TFA:
"So while we gain quality from the bugs fixed, we lose quality when adding all the new features. For some programs where the functionality is fixed, for example unix commands like cat, grep or ls, I imagine they have become very stable, because eventually all of the bugs have been found and fixed."
For me, this is the greatest insight that is glossed over. The Unix Philosophy of reducing a program into a system of small composable parts which have a specific and FIXED functionality does more to reduce bugs and stop program rot (via unending features) than reams of tests on a constantly shifting codebase.
The problem with monolithic codebases, even with a test framework, is that you still can't guarantee that one change in one part won't adversely affect another part. Process isolation at least limits bugs to "passing the wrong message".
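To make that concrete, here's a rough Python sketch of the same idea (the "ERROR" pattern and "app.log" file are made up): two fixed-function tools composed over a pipe, each in its own process, so the worst a bug in one can do is hand the other the wrong bytes.

    import subprocess

    # Each stage is a fixed-function tool in its own process; the only way
    # one can hurt the other is by passing the wrong message down the pipe.
    grep = subprocess.Popen(["grep", "ERROR", "app.log"], stdout=subprocess.PIPE)
    sort = subprocess.Popen(["sort"], stdin=grep.stdout, stdout=subprocess.PIPE)
    grep.stdout.close()   # let grep see a closed pipe if sort exits early
    out, _ = sort.communicate()
    print(out.decode())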
I think the paragraph on an antifragile career is probably the most important part. Most people can probably intuitively realize that used (and repaired) software is likely to be more correct. Frequently switching jobs has the possible stigma of "job-hopping", and it's hard _for the individual_.
Staying in the same technical stack is the fastest way to become irrelevant I can imagine. I've seen what it is like to switch everything up every few years (4 different tech stacks in 4 jobs), and the experience gained is invaluable.
The best part is seeing everything you knew from before change, except the parts that actually are beneficial. It's funny how much of a tech stack is either cargo culting or a response to a local pain point. Learning to distinguish those gives you a sharp eye, both for other possible fixes locally and for strategic direction changes.
It takes one critical ingredient though: you. You must have the humility and maturity to realize YOU ARE NOT YOUR CODE. If you want to go to a company, learn all they know, and leave, you must be comfortable being wrong, listening a lot, and keeping an open mind. Every time you make fun of a local company's choices (even if they are badly made), you'll isolate yourself more, and they will be less likely to bother explaining the next thing you are stuck on. Remember, don't complain, just build.
And whatever you do, STOP CARING ABOUT CODE FORMATTING STYLES. If you care, you'll hate every one but your "first", and you'll spend your entire social capital at each job trying to change theirs to match your favorite. Which is just about the stupidest thing you could do.
> I've seen what it is like to switch everything up every few years (4 different tech stacks in 4 jobs), and the experience gained is invaluable.
Is it even possible to become really proficient in 4 tech stacks in such a short period?
I'm a pretty fast learner and yet I feel that for actual mastery you'd need to be doing something for at least multiple years. I didn't feel I was any good at C until I had put almost a decade into the language.
Which is tough in the tech world because quite a few technologies won't even last long enough to get to 'master' level! Easy come, easy go. I'm having a hard time adapting to this Cambrian explosion of technology that for the most part seems to lack staying power. I guess that makes me a dinosaur... (or a Lizard!).
The period was 6 years, through Java, Perl, PHP, VB.NET, and C#. The databases were SQL Server, MySQL, Access, Informix, and Oracle. Servers were just Apache and IIS.
I'd not claim mastery in any of those stacks, but then again I've only been programming for six years! I'm not sure I'd claim mastery if I'd stayed in Java and Apache this whole time either. I do think I've seen enough that "coming up to speed" only takes me a fraction of what it used to.
Caring about code formatting is not binary. I've seen code that literally breaks syntax highlighting in vim, and I've accepted that sometimes in this job I just have to do syntax-correction commits in order to be able to work efficiently. I'm not a perfectionist about that, but there are limits.
> Every time you make fun of a local company's choices (even if they are badly made), you'll isolate yourself more
I'd say this doesn't only apply to local companies.
Just assume that people you work with are smarter than you.
There's a big difference between styling your code "when in Rome", like the code around it, and fighting with each team you visit to change their style to match your preferences.
I like to phrase this as "style doesn't matter, consistent style does".
It's important to have consistent style, because once a programmer internalizes the local style, some types of bugs become transparent: upon understanding what something is supposed to be doing, the style provides visual cues to the flow and a certain predictability in the code. When something is wrong, it looks wrong.
Yes, indeed. I think consistency within one file is a good rule of thumb.
As an aside, are there any editors that could automagically infer formatting rules (to use for indenting, folding, etc.) upon opening a file? I'm using vim but I think I'd be willing to switch for such a feature.
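I don't know of one offhand, though I believe vim has plugins in that direction. The inference itself isn't much code; here's a rough Python sketch (function name and the fallback width are mine) that guesses tabs-vs-spaces and indent width from an existing file:

    from collections import Counter

    def guess_indent(path):
        # Guess (uses_tabs, indent_width) from a file's leading whitespace.
        deltas = Counter()
        tab_lines = 0
        prev = 0
        for line in open(path):
            if not line.strip():          # ignore blank lines
                continue
            if line.startswith("\t"):
                tab_lines += 1
                continue
            cur = len(line) - len(line.lstrip(" "))
            if cur > prev:                # only count increases in indentation
                deltas[cur - prev] += 1
            prev = cur
        if tab_lines and not deltas:
            return True, 1
        return False, (deltas.most_common(1)[0][0] if deltas else 4)

The editor side would then just be mapping the result onto something like vim's 'expandtab' and 'shiftwidth'.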
No arguing with that. Just wanted to water down your statement a bit—one can care about formatting style and respect others' choices at the same time. :)
For the author/submitter, or anyone interested, here is a technical list which contains references to papers related to "Fourth Quadrant" (Taleb) ideas and has a subsection for computer science.
Two random titles:
'Simoncini, L. (2011). Socio-technical complex systems of systems: Can we justifiably trust their resilience? Dependable and Historic Computing, pages 486–497.'
'Reeves, J., Eveleigh, T., Holzer, T., and Sarkani, S. (2012). The impact of early design phase risk identification biases on space system project performance. In Systems Conference (SysCon), 2012 IEEE International, pages 1–8. IEEE.'
I'm less interested in antifragility applied to software development and more in software systems themselves. After all, if the software isn't antifragile, it doesn't matter what development process you're using. There are only a few well-known software systems that I believe exemplify the closest thing to antifragility we have in software: BitTorrent, Bitcoin, Tor - basically any software system that counts on failure and is designed for a global-scale network of nodes. These systems operate more like bacteria, and the protocols are designed for Byzantine failures and uncooperative nodes.
Most software we build is not designed for chaos, and is thus fragile. There are lots of network-distributed systems that might improve as you add nodes, but most of these rely on cooperative peers and have difficulty tolerating global-scale networks and Byzantine failures.
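As a toy illustration of what "counting on failure" looks like in code, here's a Python sketch (the peer list and fetch function are hypothetical) of the simplest version of the trick: ask many untrusted nodes and only believe an answer once enough of them independently agree.

    from collections import Counter

    def fetch_from_peer(peer, key):
        # hypothetical network call; any peer may be down, slow, or lying
        raise NotImplementedError

    def resilient_get(peers, key, quorum=3):
        votes = Counter()
        for peer in peers:
            try:
                votes[fetch_from_peer(peer, key)] += 1
            except Exception:
                continue                  # a dead or broken peer is routine
            answer, count = votes.most_common(1)[0]
            if count >= quorum:           # accept only once enough peers agree
                return answer
        raise RuntimeError("no quorum reached")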
There's something to be said about Erlang's "let it crash" way of doing things here, but I'm going to abdicate responsibility for writing it in favor of someone whose mind is a bit more awake at the moment.
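I'll take a swing at it. The gist is that the worker does no defensive error handling at all; it simply crashes, and a supervisor restarts it from a known-good state. A crude Python approximation of that Erlang idea (the worker here is a stand-in):

    import multiprocessing
    import time

    def worker():
        # hypothetical unit of work, written with no defensive error handling;
        # if an assumption is violated, it just crashes
        ...

    def supervise(max_restarts=5):
        restarts = 0
        while restarts <= max_restarts:
            p = multiprocessing.Process(target=worker)
            p.start()
            p.join()
            if p.exitcode == 0:           # finished cleanly
                return
            restarts += 1                 # crashed: restart from a clean state
            time.sleep(1)
        raise RuntimeError("worker keeps crashing, giving up")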
I also thought of software when reading Antifragile, but my take on it is slightly different.
For one, software per se is extremely fragile. Variability in inputs can easily break it: a new browser version, someone pressing buttons in a different order, Y2K, and everything comes crashing down.
However, the overall system that includes the software and the team maintaining it can be antifragile, but it's not automatic. The team is like the immune system that fights off bugs; if it doesn't follow good practices, it can lose its ability to do so. For example, a policy of writing automated tests against bugs to make sure they do not reappear is antifragile (like creating antibodies).
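In code terms, the antibody is just a regression test pinned to the bug. A minimal sketch (the mylib module and the bug itself are invented for illustration):

    import unittest
    from mylib import parse_date   # hypothetical module under test

    class LeapDayRegression(unittest.TestCase):
        # Invented bug: parse_date("2015-02-29") once silently returned
        # March 1st. The fix raises ValueError; this test is the antibody
        # that keeps the bug from quietly coming back.
        def test_invalid_leap_day_rejected(self):
            with self.assertRaises(ValueError):
                parse_date("2015-02-29")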
Another relevant point: nothing is antifragile without running an energy deficit. It costs money to keep software working, even without adding any features.
"The way I see software, it gets better with use. It is not self-healing, but given that somebody is fixing the bugs that are found (which is often the case), then software keeps getting better with use."
It's got to be more nuanced than that. For example, software won't benefit from bugs being fixed if the rate of churn is very high, if new features are added rapidly and the size of the state space explodes. Times like that create technical debt. Surely technical debt isn't antifragile. Since we still don't know how to avoid it, it seems a stretch to say all software is antifragile.
Not all software is antifragile (projects die), but I think the system-of-all-software likely is, in much the same way that modern server clusters are built of individually fallible servers. It would be very difficult to kill Debian, for example, no matter what issues are found in individual packages.
> "The counterbalancing force here is the constant flow of new features added. This is the normal state for most successful software - we want it to do more things. So while we gain quality from the bugs fixed, we lose quality when adding all the new features"
Right. So the headline is like saying, if not for gravity we'd all be flying. Someone's often fixing bugs, but that someone's also adding features. No generalization about all software may be made on this basis.
The fragile-making disease is feature creep. "cat -n considered harmful."
Simple one-off programs designed at the keyboard can be so useful they get re-used and enhanced with more one-off hacks until they reach the point of utter incomprehensibility.