Okay, let's back up and remember that you can't compete with a plane that crashes all the time. So the reason clearly isn't profit; in fact, they will lose a big chunk of money from this, and that wasn't hard to predict at all.
I think it's a mistake to imply the managers making this decision had our hindsight knowledge. Of course, if they had known this would happen, they would've taken a different course, for profit and other reasons.
To your larger point, I think as systems get more complex, it becomes much more difficult for management to make accurate risk/benefit decisions. Take the Shuttle Challenger disaster. In Feynman's appendix to the Rogers Commission report, management estimated roughly a 1-in-100,000 chance of catastrophic failure, while the working engineers put it closer to 1-in-100. But the specific numbers aside, the point is that as systems get highly complex, the growing number of interfaces and interactions leads to more failure modes, and understanding them all is really tough.
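A toy back-of-the-envelope sketch of why (my own illustration, not from the report): assume a series system where every one of n components must work and failures are independent. Overall reliability decays exponentially even when each part is very good, while the number of pairwise interfaces you'd have to reason about grows quadratically:

```python
def series_reliability(p: float, n: int) -> float:
    """Probability the whole system works if each of n independent parts works with probability p."""
    return p ** n

def pairwise_interfaces(n: int) -> int:
    """Number of distinct component-to-component interfaces: n choose 2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(f"n={n:>4}: reliability={series_reliability(0.999, n):.3f}, "
          f"interfaces={pairwise_interfaces(n)}")
# n=  10: reliability=0.990, interfaces=45
# n= 100: reliability=0.905, interfaces=4950
# n=1000: reliability=0.368, interfaces=499500
```

So even with 99.9%-reliable parts, a 1,000-component chain works only about a third of the time, and there are half a million pairwise interactions that could hide a failure mode. Real systems aren't purely series and failures aren't independent, but the direction of the effect is the same.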