The launch could have gone right, and no one would have known anything about the decision process besides a few insiders. I am sure that on a project as complex and as risky as the Space Shuttle, there is always an engineer who is not satisfied with some aspect, for some valid reason. But at some point, one needs to launch the thing, despite the complaints. How many projects luckily succeeded after a reckless decision?
In many accidents, we can point at an engineer who foreshadowed it, as is the case here. Usually followed by blaming those who proceeded anyway. But these decision makers are in a difficult position. Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done. So, whose "no" to ignore? Not Allan's, apparently.
I used to run the nuclear power plant on a US Navy submarine. Back around 2006, we were sailing somewhere and Sonar reported that the propulsion plant was much, much louder than normal. A few days later we didn't need Sonar to report it, we could hear it ourselves. The whole rear half of the ship was vibrating. We pulled into our destination port, and the topside watch reported that oil pools were appearing in the water near the rear end of the ship. The ship's Engineering Officer and Engineering Department Master Chief shrugged it off and said there was no need for it to "affect ship's schedule".

I was in charge of the engineering library. I had a hunch and I went and read a manual that leadership had probably never heard of. The propeller that drives the ship is enormous. It's held in place with a giant nut, but in between the nut and the propeller is a hydraulic tire, a toroidal balloon filled with hydraulic fluid. Clearly it had ruptured. The manual said the ship was supposed to immediately sail to the nearest port and the ship was not allowed to go back out to sea until the tire was replaced.

I showed it to the Engineer. Several officers called me in to explain it to them. And then, nothing. Ship's Schedule was not affected, and we continued on the next several-week trip. Before we got to the next port, we had to limit the ship's top speed to avoid major damage to the entire propulsion plant. We weren't able to conduct the mission we had planned because the ship was too loud. And the multiple times I asked what the hell was going on, management literally just talked over me.

When we got to the next port, we had to stay there while the propeller was removed and remachined. Management doesn't give a shit as long as it doesn't affect their next promotion.
Don't even get me started on the nuclear safety problems.
The correct answer in that case is to go to the Inspector General. That's what they're there for. Leaders sweeping shit under the rug that ends up crippling a fleet asset and preventing tasking from higher is precisely the kind of negligence and incompetence the IG is designed to root out.
The IG is an independent entity which exists to investigate misconduct and fraud/waste/abuse. There are Inspectors General at all levels from local bases up to the Secretary of Defense, and they have confidential reporting hotlines. The only thing worse for a commander than having shenanigans be substantiated at an IG investigation is to have been found to tolerate retaliation against the reporters.
Generally about every month or two, a Navy commanding officer gets canned for "loss of confidence in his/her ability to command." They aren't bulletproof, quite the opposite. And leaving out cases of alcohol misuse and/or sexual misconduct, other common causes are things within the IG's purview.
Then Person A needs to haul their butt to the Defense Service Office, call their Member of Congress, and tell the "anonymous" hotline that they've been retaliated against.
I'm not pretending this is some magic ticket to puppy-rainbow-fairy land where retaliation never occurs, but ultimately, how much do you care about your shipmates? I once had a CPO among my direct reports who was committing major misconduct and threatening my shop with retaliation if they reported it. I could have helped crush the bastard if someone had come forward to me, but no one ever did until I'd turned over the division to someone else, after which it blew up. Sure, he eventually got found out, but still. He was a great con artist and he pulled the wool over my eyes, but all I'd have needed was one person cluing me in to that snake.
Speaking from the senior officer level, we're not all some cabal trying to sweep shit under the rug. And the IGs, as much as they're feared, aren't out to nail people to the wall who haven't legitimately done bad things. I'm sorry you've had the experience you've had, but that doesn't mean that everyone above you was some big blue wall willing to protect folks who've done wrong.
Heck, you're in the ship too. I'll take all the retaliation if I get to keep breathing. If they wanna kick me out over saving my own skin, fine. Saves me from deserting.
It sounds like a certain commercial aircraft manufacturer that starts with a B and ends with an oeing could really use an effective Inspector General system.
I seriously believe what I've heard about upwards failure. Being competent seems to be an impediment, and the goons at the very top are ludicrously malformed people.
Atlas Shrugged? The book written by that demented woman who couldn't deal with her own feelings but told everyone how individualism was the answer to everything while living thanks to other people's support?
I guess the true fate is the competent arguing amongst one another in an attempt to establish who is most competent, while the incompetent group together and bask in the real rewards. The goals of the incompetent are simple and tangible. The goals of the competent are abstract, as they seek acceptance from their fellow competent peers.
Objectivism: that fart-huffing philosophy that leads people to think everyone else is incompetent to judge it, when it's just a bunch of hateful trash that is to the right as Marxism is to the left.
How long retired? Things have gone in what can only be described as an incomprehensible, unfathomable direction in the last decade or so. The parent post is not surprising in the least.
Less funny in real life. Sometimes the jizzless thing falls off with impeccably bad timing. Right when things go boom. People get injured (no deaths yet). Limp home early. Allies let down. Shipping routes elongate by a sad multiple. And it even affects you directly as you pay extra for that Dragon silicon toy you ordered from China.
The Navy's careerist, bureaucratic incompetence is staggering. No better than Putin's generals who looted the military budget and crippled his army so they couldn't even beat a military a fraction of their size.
Recently. For those who've served, it's not a surprise to see the constant drumbeat of commanding officers being relieved of command every month or so. COs are not bulletproof, and the last thing anyone in the seat wants is to end up crossways with the IG. And there are confidential ways Sailors can get in touch with them if needed.
Or with their Member of Congress, who can also go to Big Navy and ask "WTF is going on with my constituent?"
> Don't even get me started on the nuclear safety problems.
I want to be pro-nuclear energy, but I just don't think I can trust the majority of human institutions to handle nuclear plants.
What do you think about the idea of replacing all global power production with nuclear, given that it would require many hundreds of thousands of loosely-supervised people running nuclear plants?
There's also the issue of force majeure - war, terrorism, natural disasters, and so on. Increase the number of these and not only can you not really maintain the same level of diligence, but you also increase the odds of them ending up in an unfortunate location or event.
There's also the issue of the uranium. Breeder reactors can help increase efficiency, but they bump up all the complexities/risks greatly. Relatively affordable uranium is a limited resource. We have vast quantities of it in the ocean, but it's not really feasible to extract. It's at something like 3.3 parts per billion by mass. So you'd need to filter a billion kg of ocean water to get 3.3kg of uranium. Outside of cost/complexity, you also run into ecological issues at that scale.
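As a rough sanity check on those numbers (a sketch only; the 3.3 ppb figure comes from the comment above, while the ~25 tonnes of natural uranium per gigawatt reactor-year is a commonly cited ballpark that I'm assuming, not something from this thread):

```python
# Back-of-the-envelope check of the seawater-uranium arithmetic above.
PPB = 3.3e-9                          # uranium mass fraction in seawater (from the comment)

water_per_kg_u = 1 / PPB              # kg of seawater to filter per kg of uranium
fuel_per_reactor_year = 25_000        # kg of natural uranium per GW reactor-year (assumed ballpark)
water_per_reactor_year = fuel_per_reactor_year * water_per_kg_u

print(f"{water_per_kg_u:.2e} kg of seawater per kg of uranium")        # ~3.0e8 kg
print(f"{water_per_reactor_year:.2e} kg of seawater per reactor-year") # ~7.6e12 kg
```

That works out to roughly 7.6 cubic kilometers of seawater filtered per reactor-year under these assumptions, which is why the cost and ecological concerns come up at scale.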
Given the scale of people killed by coal every year, I feel relatively confident that had that effort not been undertaken, it would still be true.
And of course that's ignoring the fact that I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear
I don't think the scale of coal is 200m+ people a year. That's taking artistic liberties or is too hyperbolic to entertain.
>I also feel relatively confident that a Chernobyl scale accident every year is in no way likely, even if the entire world was 100% on nuclear
I don't. Einstein's quote rings alarms in my head here. Imagine all the inane incompetencies you've seen with current energies in your house, or at a mechanic, or simply flickering lights at a restaurant. Now imagine that these people now manage small fusion/fission bombs powering such devices.

We need to value labor a lot more to trust that sort of maintenance. And the US alone isn't too good at that. Let alone most of Asia and EMEA.
In any case, if we look at the actual data, nuclear has been extremely safe compared to burning fossil fuels. Add up all the nuclear disasters that have ever happened and, adjusted by MWh generated, it's a few orders of magnitude safer than coal.
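A minimal sketch of that comparison, using commonly cited deaths-per-TWh estimates (the specific figures are my assumptions, along the lines of Our World in Data's published ballparks, not numbers from this thread):

```python
import math

# Illustrative deaths-per-TWh estimates (assumed ballparks, including both
# accidents and air-pollution mortality for the fossil fuels).
deaths_per_twh = {"coal": 24.6, "oil": 18.4, "gas": 2.8, "nuclear": 0.03}

ratio = deaths_per_twh["coal"] / deaths_per_twh["nuclear"]
print(f"coal vs nuclear: {ratio:.0f}x, ~{math.log10(ratio):.1f} orders of magnitude")
```

Even with generous error bars on the inputs, the gap stays in the hundreds-to-thousands range, which is the "few orders of magnitude" the comment refers to.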
> Now imagine that these people now manage small fusion/fission bombs powering such devices.
Sure, they’ll have to be trained to the same standards as current nuclear engineers. Not trivial but obviously not exactly an unsolvable problem..
> Let alone most of Asia and EMEA.
Sorry but you’re just saying random things at this point..
Certainly, they still breathe the same air, don’t they?
> Nuclear meltdown does.
I’m pretty sure that nuclear meltdowns are much, much easier to avoid. Even at Chernobyl, almost all the casualties (short-term and long-term) were amongst people directly handling and trying to contain the disaster. If you’re rich, you’re unlikely to be a fireman.
There was no hunch there about a problem; it was very obvious there was a problem. Management willing to risk workers' lives for promotions should be fired immediately, unless they jump into the fire only by themselves. No life is worth someone's convenience.
If you're EB, why replace a hydraulic bushing when you can wait, and replace it but also have to repair a bunch of damage and make yourself a nice big extra chunk of change off Uncle Sam?
If you're ship's captain...why not help secure a nice 'consulting' 'job' at EB after retiring from the navy by helping EB make millions, and count on your officers to not say a peep to fleet command that the mess was preventable?
Is this a different phenomenon though? It seems that there's a difference between an informed risk assessment and not giving a fuck or letting the bureaucratic gears turn and not feeling responsible. Like there's a difference between Challenger and Chernobyl.
But, maybe someone can make a case that it's fundamentally the same thing?
I would make the case that it's fundamentally the same thing.
In both cases, there were people who cared primarily about the technical truth, and those people were overruled by people who cared primarily about their own lifestyle (social status, reputation, career, opportunities, loyalties, personal obligations, etc.). In Allan McDonald's book "Truth, Lies, and O-Rings" he outlines how Morton Thiokol was having a contract renewal held over their head while NASA Marshall tried to maneuver the Solid Rocket Booster production contract to a second source, which would have seriously affected MT's bottom line and profit margins. There's a strong implication that Morton Thiokol was not able to adhere to proper technical rationale and push back on their customer (NASA) because if they had, they would have given too much ammunition to NASA to argue for a second source for the SRB contracts. (In short: "you guys delayed launches over issues in your hardware, so we're only going to buy 30 SRB flight sets from you over the next 5 years instead of 60 as we initially promised.")
I have worked as a NASA contractor on similar issues, although much less directly impacting the crews than the SRBs. You are not free to pursue the smartest, most technically accurate, quickest method for fixing problems; if you introduce delays that your NASA contacts and managers don't like, they will likely ding your contract and redirect some of your company's work to your direct competitors, who you're often working with on your projects.
What’s the alternative? Being able to shift to a competitor when a producer is letting you down is the entire point of private contracts; without that, you might as well remove the whole assemblage of profit and just nationalize the whole thing.
Strictly speaking, you're correct, so I don't disagree with your comment. However, assuming McDonald's recollections are correct and his explanation of the story is accurate, Morton Thiokol was doing an excellent job. The O-Ring seal issue was on track to be solved as they switched to a lighter-weight filament-wound case. According to McDonald, Morton Thiokol was receiving high marks on their contract evaluations, and Marshall was trying to move the contract to a company that had a lot of ex-Marshall employees.
I think it can be thought from this angle: if the customer is corrupt and the contractor ethical, the project can be unsafe. If the customer is ethical and the contractor corrupt, the project also can be unsafe.
Okay so it sounds like you're saying that they are fundamentally the same, but only because the Challenger wasn't in the "informed risk assessment" category after all.
Yeah, that's what I think. In both cases the technical decisions were made by people who were not technical experts and were completely ignoring the input of the technical experts because of social pressures. Based on McDonald's retelling, the decision to launch the Challenger was anything but an informed risk decision; none of the managers said "we acknowledge Morton Thiokol's concerns about O-Ring temperatures and are committing to launch anyway, with the following rationale: ..." They just didn't bring up the temperature issue at the flight director level and recommended a launch, backed by no data.
In Chernobyl, they scheduled a safety test to satisfy schedules imposed by central command. The plant engineers either weren't informed or couldn't push back because to go against management meant consequences for your career and family, administered by the Soviet authorities or the KGB.
Both scenarios had engineers who were not empowered to disclose or escalate issues to the highest level because of implied threats against them by non-technical authorities.
> Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.
Saying "no" is easy and safe in a world where there are absolutely no external pressures to get stuff done. Unfortunately, that world doesn't exist, and the decision makers in these kinds of situations face far more pressure to say "yes" than they do to say "no".
For example, see the article:
> The NASA official simply said that Thiokol had some concerns but approved the launch. He neglected to say that the approval came only after Thiokol executives, under intense pressure from NASA officials, overruled the engineers.
Isn't the definition of "easy" or "hard" that includes the external human pressures the less simple/stupid one? What is the utility of a definition of "easy" that assumes that you work in complete isolation?
The context to this conversation is the launch of a space shuttle that's supposed to carry a teacher to space. It has both enormous stakes and enormous political pressure to not delay/cancel. I'm unsure why that context makes the spherical cow version of "easy" a sensible one.
The context of that word "easy" was not a vacuum, it was part of a sentence which was part of a conversation. There is more than enough of this context to know what in particular was easy.
You can only fail to get this by not reading the thing you are responding to, or deliberate obtuseness, or perhaps by being 12 years old.
Easily be career ending? That's a bit dramatic, don't you think? Someone who continuously says no to things will surely not thrive and will probably eventually leave the organization, one way or the other; that's probably right.
Not even slightly dramatic. I have seen someone be utterly destroyed for trying to speak out on something deeply unethical a state was doing, and is probably still doing.
He was dragged by the head of state in the press and televised announcements, became untouchable overnight - lost his career, his wife died a few days later while at work at her government job in an “accident”. This isn’t in some tinpot dictatorship, rather a liberal western democracy.
So - no. Career-ending is an understatement. You piss the wrong people off, they will absolutely fuck you up.
I have long thought that there ought to be an independently funded International Association for the Protection of Whistleblowers. However, it would quickly become a primary target of national intelligence agencies, so I don't know how long it would last.
A "liberal democracy" where the head of state can have random citizens murdered? And I guess despite being an internet anon, you won't name that country because they will come after you and kill your family as well?
That's either a very tall tale or the state is anything but liberal.
> A "liberal democracy" where the head of state can have random citizens murdered?
Abdulrahman Anwar al-Awlaki (also spelled al-Aulaqi, Arabic: عبدالرحمن العولقي; August 26, 1995 – October 14, 2011) was a 16-year-old United States citizen who was killed by a U.S. drone strike in Yemen.
The U.S. drone strike that killed Abdulrahman Anwar al-Awlaki was conducted under a policy approved by U.S. President Barack Obama
Human rights groups questioned why Abdulrahman al-Awlaki was killed by the U.S. in a country with which the United States was not at war. Jameel Jaffer, deputy legal director of the American Civil Liberties Union, stated "If the government is going to be firing Predator missiles at American citizens, surely the American public has a right to know who's being targeted, and why."
>Abdulrahman al-Awlaki's father, Anwar al-Awlaki, was a leader of al-Qaeda in the Arabian Peninsula
Missed highlighting that part. The boy also wasn't the target of the strike anyway. Was the wife from the other user's story living with an al-Qaeda leader as well?
I think the WH spokesperson's response just adds to the level of disturbing:
>When pressed by a reporter to defend the targeted killing policy that resulted in Abdulrahman al-Awlaki's death, former White House press secretary Robert Gibbs deflected blame to the victim's father, saying, "I would suggest that you should have a far more responsible father if they are truly concerned about the well-being of their children. I don't think becoming an al-Qaeda jihadist terrorist is the best way to go about doing your business".
Yeah, the point that Obama literally executed US citizens without trial is often lost on people on this site, and on much of the "liberal" intelligentsia. They'll just say he was a "terrorist", but folks, you can't say whether he was or not, without trial. And even if he was, his son, who was also killed in that strike, was not a "terrorist". This is an extremely slippery slope, and the fact that people don't acknowledge this just because it was Obama who ordered the murder (let's call a spade a spade) is a damning indictment of "neoliberal values".
I’ve spoken about it here somewhat and circumspectly before - but I prefer to keep the SNR low, as I don’t want repercussions for him. Me, good luck finding.
It’s the U.K. It happened under Cameron. It related to the judiciary. That’s as much as I’ll comfortably reveal.
I will also say that it was a factor in me deciding to sell my business, leave the country, and live in the woods, as what I learned from him and his experience fundamentally changed my perception of the system in which we live.
Considering the launch tempo that NASA had signed up for, and was then currently failing at? Yes, a single 'no-go' on the cert chain could easily result in someone being shunted into professional obscurity thereafter.
Can someone explain why every government official that was ever in the news talking about Snowden accuses him of being the worst sort of criminal? Specifically, what is the case? They are never forthcoming about details.
I personally am very glad to know the things he revealed.
For the same reason they’ve been torturing Assange for the past decade. They view us as little more than taxable cattle that should not ask any questions, let alone embarrass or challenge the ruling class.
> at some point, one needs to launch the thing, despite the complaints
There's a big difference between "complaints" because something is not optimal, and warnings that something is a critical risk. The Thiokol engineers' warnings about the O-rings were in the latter category.
And NASA knew that. The summer before the Challenger blew up, NASA had reclassified the O-rings as a Criticality 1 flight risk, where they had previously been Criticality 1R. The "1" meant that if the thing happens the shuttle would be lost--as it was. The "R" meant that there was a redundant component that would do the job if the first one failed--in this case there were two O-rings, primary and secondary. But in (IIRC) June 1985, NASA was told by Thiokol that the primary O-ring was not sealing so there was effectively no redundancy, and NASA acknowledged that by reclassifying the risk. But by the rules NASA itself had imposed, a Criticality 1 (rather than 1R) flight risk was supposed to mean the Shuttle was grounded until the issue was fixed. To avoid that, NASA waived the risk right after reclassifying it.
> at some point, one needs to say "yes" and take risks, otherwise nothing would be done
Taking calculated risks when the potential payoff justifies it is one thing. But taking foolish risks, when even your own decision making framework says you're not supposed to, is quite another. NASA's decision to launch the Challenger was the latter.
It happens extremely frequently because there is almost no downside for management to override the engineers' decision.

Even in the case of the Challenger, no single article says WHO the executive was that finally approved the launch.

Nobody was jailed for gross negligence.

Even Richard Feynman felt that the investigative commission was biased from the start.

So, since there is no "price to pay" for making these bad calls, they are continuously made.
Jailing people means you'll have a hard time finding people willing to make hard decisions, and when you do, you may find they're not the right people for the job.
Punishing people for making mistakes means very few will be willing to take responsibility.
It will also mean that people will desperately cover up mistakes rather than being open about it, meaning the mistakes do not get corrected. We see this in play where manufacturers won't fix problems because fixing a problem is an admission of liability for the consequences of those problems, and punishment.
Even the best, most conscientious people make mistakes. Jailing them is not going to be helpful, it will just make things worse.
Have you ever made a mistake on the road that luckily did not result in anyone getting killed?
During WW2, a B-29 crash-landed in the Soviet Union. The B-29's technology was light-years ahead of Soviet engineering. Stalin demanded that an exact replica of the B-29 be built. And that's what the engineers did. They were so terrified of Stalin that they carefully duplicated the battle damage on the original.
Be careful what you wish for when advocating criminal punishment.
Tu-4 was indeed a very close copy of B-29, but no, they did not "carefully duplicate the battle damage" on the original. The one prominent example of copying unnecessary things that is usually showcased in this instance is a mistakenly drilled rivet hole in one of the wings that was carefully reproduced thereafter despite there not being any evident purpose for it.
That said, even then the Tu-4 wasn't a carbon copy. Because the US used imperial units for everything, the Soviets simply couldn't make it a carbon copy; they could not, e.g., source plating and wire of the exact right size. So they replaced it with the nearest metric equivalents that were available, erring on the side of making things thicker, to ensure structural integrity, which also made it a little bit heavier than the original. Even bigger changes were made: for example, Tupolev insisted on using existing Soviet engines (!), weapons, and radios in lieu of copying the American ones. It should be noted that Stalin really did want a carbon copy originally, and Tupolev had to fight his way on each one of those decisions.
We should not blame people for honest mistakes. Challenger was not an honest mistake, it was political pressure overriding engineering. The joints were not supposed to leak at all, yet they were leaking every time and it was being swept under the rug. When someone suddenly demands to get it in writing when it was normally a verbal procedure they *know* there's a problem. That's not a mistake.
Same as the insulation damage to the tiles kept being ignored until Columbia barely survived. And then they fixed the part they blamed for that incident, but the tiles kept coming back damaged.
And look at what else was going wrong that day--the boosters would most likely have been lost at sea if the launch had worked.
From the very start they were obviously in cover-up mode.
They had every engineer involved with the booster saying launching in the cold was a bad idea, yet they started by trying to look at all the ways it could have gone wrong rather than even looking into what the engineers were screaming about.
We also have them claiming a calibration error with the pyrometer (the ancestor of the modern thermometer you point at something) even though that made other numbers not make sense.
I had never seen anyone who is more obviously a psychopath than this guy.
You know that theory that people like that gravitate towards management positions? Yeah... it's this guy. Literally him. Happy to send people into the meat grinder for "progress", even though no actually scientific progress of any import was planned for the Challenger mission. It was mostly a publicity stunt!
My understanding of the Space Shuttle program is that there were a lot of times they knew they probably shouldn't fly, or try to land, and they lucked out and didn't lose the orbiter. It is shocking they only lost two ships out of the 135 Space Shuttle missions.
The safety posture of that whole program, for a US human space program, seemed bad. That they chose to use solid rocket motors shows that they were willing to compromise on human safety from the get-go. There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.
> There are reasons there hasn't ever been even one other human-rated craft to use solid rocket motors.
That's about to not be true. Atlas V + starliner has flown two people and has strap-on boosters, I think it only gets the rating once it returns from the test flight though.
The shuttle didn't have a propulsive launch abort system, and could only abort during a percentage of its launch. The performance quoted for starliner's abort motor is "one mile up, and one mile out" based on what the presenter said during the last launch. You're plenty safe as long as you don't intersect the SRB's plume.
I forgot about the SLS until after I wrote that. SLS makes most of the same mistakes, plus plenty of new expensive ones, from the Space Shuttle program. SLS has yet to carry a human passenger though.
It's mind-boggling that SLS still exists at all. At least $1B-$2B in costs whether you launch or not. A launch cadence measured in years. $2B-$4B if you actually launch it. And it doesn't even lift more than Starship, which is launching almost quarterly already. This is before we even talk about reusability, or that a reusable Starship + Super Heavy launch would only use about $2M of propellant.
A lot of people are taking issue with the fact that you need to say yes for progress. I don’t know how one could always say no and expect to have anything done.
Every kind of meaningful success involves negotiating risk instead of seizing up in the presence of it.
The shuttle probably could have failed in 1,000 different ways and eventually, it would have. But they still went to space with it.
Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.
The key thing with Challenger is that the engineers working on the project estimated the risk to be extremely high and refused to budge, eventually being overruled by the executives of their company.
That's different than the engineers calculating the risk of failure at some previously-defined-as-acceptable level and giving the go-ahead.
> Some risk is acceptable. If I were to go to the moon, let’s say, I would accept a 50% risk of death. I would be happy to do it. Other people would accept a risk of investment and work hour loss. It’s not so black or white that you wouldn’t go if there’s any risk.
It's possible you're just suicidal, but I'm reading this more as false internet bravado. A 50% risk of death on a mission to space is totally unacceptable. It's not like anyone will die if you don't go now; you can afford to take the time to eliminate all known risks of this magnitude.
Not bravado at all, if I was given those odds today, I would put all my effort into it and go.
There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...
> Not bravado at all, if I was given those odds today, I would put all my effort into it and go.
If that's actually true, you should see a therapist.
Given we have a track record of going to the moon with much lower death rate than 50%, that's a proven higher risk than is necessary. That's not risking your life for a cause, because there's no cause that benefits from you taking this disproportionate risk. It's the heroism equivalent of playing Russian Roulette a little more than 3 times and achieves about as much.
> There are many people who are ideologically-driven and accept odds of death at 50% or higher — revolutionary fighters, political martyrs, religious martyrs, explorers and adventurers throughout history (including space), environmental activists, freedom fighters, healthcare workers in epidemics of serious disease...
And for every one of those there's 100 keyboard cowboys on the internet who have never been within a mile of danger and have no idea how they'll react to it.
I would say I'm more ideologically driven than most, and there are a handful of causes I'd like to think I'd die for. But I'm also self-aware enough to know that it's impossible to know how I'll react until I'm actually in those situations.
And I'll reiterate: you aren't risking your life for a cause, because there's no cause that benefits from you taking a 50% mortality risk on a trip to the moon.
I think you may be projecting, because you are acting a bit like a keyboard warrior — telling others to see therapists. Consider that other people have different views, that is all. To some, the cause (principle/life goal) of exploring where others have not gone is enough.
1. Go where others have not gone, with a 50% risk of death.
2. Wait 5 days for temperatures to rise, and go where others have not gone, with a 0.5% risk of death.
Choosing 1 isn't "different views, that is all", it's pretty objectively the wrong choice. It's not dying for a cause, it's not brave, it's not idealistic. It's pointlessly suicidal. So yes, I'm saying if you think 1 is the right choice you should see a therapist.
Notably, NASA requires all astronauts to undergo psychological evaluation, even if they aren't claiming they'll take insane unnecessary risks. So it's not like I'm the only one who thinks talking to someone before you potentially kill yourself is a good idea.
No offense but this sounds like the sayings of someone who has never seen a 50% chance of death.
It’s a little different 3 to 4 months out. It’s way different the night before and morning.
Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.
There’s a small percent who will go forward but admit the fact that they were completely afraid- and rightly so.
Then you have that tiny percentage that are completely calm and you’d swear had a tiny smile creeping in…
I’ve never been an astronaut.
But I did spend three years in and out of Bosnia with a special operations task force.
Honestly? I have a 1% rule. Things with a 20-30% chance of death are clearly stupid and nobody wants to do them. Things with a one-in-a-million chance prob aren't gonna catch ya. But I figure that if something does, it's gonna be an activity that I do often, that has a 1% chance of going horribly wrong, and that I'm ignoring.
> sounds like the sayings of someone who has never seen a 50% chance of death
Well, this sounds like a simple ad hominem. I appreciate your insight overall, though.
Many ideologically-driven people, like war field medics, explorers, adventurers, revolutionaries, and political martyrs take on very high risk endeavors.
I would also like to explore unknown parts of the Moon despite the risks, even if they were 50%. And I would wholeheartedly try to do it and put myself in the race, if not for a disqualifying condition.
There is also the matter of controllable and uncontrollable risks of death. The philosophy around dealing with them can be quite different. From my experience with battlefield medicine (albeit limited to a few years), I accepted the risks because the cause was worth it, the culture I was surrounded by was to accept these risks, and I could steer them by taking precautions and executing all we were taught. No one among the people I trained with thought they couldn't. And yes, many people ultimately dropped out for it, as did I.
Strapping oneself to a rocket is a very uncontrollable risk. The outcome, from an astronaut's perspective, is more random. I think that offers a certain kind of peace. We are all going to die at random times for random reasons, I think most people make peace with that, especially as they go into old age. That is a more comfortable type of risk for me.
Individuals have different views on mortality. Some are more afraid than others, some are afraid in one set of circumstances but not others. Some think that doing worthwhile things in their lives outweighs the risk of death every time. Your view is valid, but so is others'.
> Stepping “in the arena” with odds like those, I’d say the vast, vast majority will back out and/or break down sobbing if forced.
Something like 10 million people will accept those odds. Let's say 1 million are healthy enough to actually go to space and operate the machinery. Then let's say 99% will back out during the process. That's still 10,000 people to choose from, more than enough for NASA's needs.
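The funnel above is just multiplication; here is a sketch of the estimate (all three figures are the commenter's guesses, not data):

```python
# Back-of-envelope candidate funnel using the guessed figures above.
willing = 10_000_000     # people who would accept 50% odds
fit_fraction = 0.10      # guessed fraction healthy enough to go and operate the machinery
stay_fraction = 0.01     # guessed fraction who don't back out (99% do)

remaining = willing * fit_fraction * stay_fraction
print(f"{remaining:.0f} candidates left")
```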
> No offense but this sounds like the sayings of someone who has never seen a 50% chance of death.
The space program pilots saw it. And no, I would not have flown on those rockets. After all, NASA would "man rate" a new rocket design with only one successful launch.
Using the space shuttle program as a comparison, because it's easy to get the numbers. There were 13 total deaths (7 from Challenger, 6 from Columbia [0]) during the program. Over 135 missions, the Space Shuttle took 817 people into space. (From [1], the sum of the "Crew" column. The Space Shuttle carried 355 distinct people, but some were on multiple missions.)
So the risk of death could be estimated as 2/135 (fatal flights / total flights) or as 13/817 (total fatalities / total crew). Both are around 1.5%, much lower than a 50% chance of death.
This is not to underplay their bravery. This is to state that the level of bravery to face a 1.5% chance of death is extremely high.
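For anyone who wants to check the arithmetic, a quick sketch using the figures cited above:

```python
# Two rough estimates of Space Shuttle risk, from the numbers in the comment above.
fatal_flights = 2        # Challenger and Columbia
total_flights = 135
total_fatalities = 13    # 7 + 6
total_crew = 817         # sum of the "Crew" column across all missions

per_flight_risk = fatal_flights / total_flights   # ~1.5%
per_person_risk = total_fatalities / total_crew   # ~1.6%

print(f"per flight: {per_flight_risk:.2%}, per person: {per_person_risk:.2%}")
```

The two estimates differ slightly because crew sizes varied between missions, but both land around 1.5%.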
If I recall correctly, the Saturn V was man rated after one launch. There were multiple failures on the moon missions that easily could have killed the astronauts.
The blastoff from the moon had never been tried before.
> But at some point, one needs to launch the thing
Do they? Even if the risks are not mitigated and, say, the risk of catastrophe can't be pushed below 15%? This ain't some app-startup world where failure loses a bit of money and time and everybody moves on.
I get the political forces behind it; nobody at NASA was/is probably happy with those, and most politicians are basically clueless clowns (or worse) chasing popularity polls while wielding massive decision-making power over matters they barely understand at a surface level.
But you can't cheat reality and facts, any more than you can in a casino.
Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.
So yes, I agree that at some point you need to launch the thing.
For the ones doing the colonizing? Overwhelmingly yes. A good portion of the issues with colonizing is that the colonizing nations end up extracting massive amounts of resources for their own benefit.
In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.
I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.
You are not reading the context correctly. The original point was that establishing colonies was very risky, to which whyever implied that colonialism was not a success story. But in fact it was extremely successful from a risk analysis point of view. Some nations chose to risk lives and it paid off quite well for them. The nuance of how the natives were treated is frankly irrelevant to this analysis, because we're asking "did the risk pay off", not "did they do anything wrong".
I am not participating in amoral risk/reward analysis, and you should not be either.
If the cost was genocide or predictable and avoidable astronaut deaths, the risk didn't pay off; there's no risk analysis. This isn't "nuance" and there is no ambiguity here, it's literally killing people for personal gain.
> In context, it sounds like you think that the genocide of indigenous peoples was totally worth it for European nations and that callous lack of concern for human life and suffering is an example to be followed by modern space programs.
Can you provide a quote of where I said this is "an example to be followed"? (This is a rhetorical question: I know you can't, because I said nothing remotely akin to that.)
> I'd like to cut you the benefit of the doubt and assume that's not what you meant; if that's the case, please clarify.
Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.
If you see "colonization benefited the people doing the colonizing" and interpret it as "colonization is an example to be followed", that's entirely something wrong with your reading comprehension.
You're not "cutting me some slack" by putting words in my mouth and then saying "but maybe you didn't mean that", and it's incredibly dishonest and shitty of you to pretend you are.
> Can you provide a quote of where I said this is "an example to be followed"?
People can read the context of what you said, there's no need to quote it.
In fact, I would advise you to read the context of what you said; if you don't understand why I interpreted your comment the way I did, maybe you should read the posts chain you responded to and that will help you understand.
> Sure, to clarify: I meant precisely what I said. I did not mean any of the completely different nonsense you decided to suggest I was actually saying.
Well, what you said, you said in a context. If you weren't following the conversation, you didn't have to respond, and you can't blame other people for trying to understand your comments as part of the conversation instead of in isolation.
Even if you said what you said oblivious to context, then I have to say, if you meant exactly what you said, then my response is that a risk/reward analysis which only considers economic factors and ignores human factors is reprehensible.
There is not a situation which exists in reality where we should be talking about economic success when human lives are at stake, without considering those human lives. If you want to claim "I wasn't talking about human life", then my response is simply: you should have been talking about human life, because the actions you're discussing killed people, and that is the most important factor in understanding those events. You don't get to say "They took a risk and it paid off!" when the "risk" was wiping out entire populations--that's not a footnote or a minor detail, that's the headline.
The story of the Challenger disaster isn't "they took a risk ignoring engineers and lost reputation with the NASA client"--it's "they risked astronauts' lives to win reputation with the NASA client and ended up killing people". The story of colonizing North America isn't "they took a risk on exploring unknown territories and found massive new sources of resources"--it's "they sacrificed the lives of sailors and soldiers to explore unknown territories, and then wiped out the inhabitants and took their resources".
Isn't it fairly obvious from history that you and the Renaissance-era colonizers calculate morality differently? You speak of things that should not be, but nonetheless were. The success of colonialism to the colonizers is obvious. Natives of the New World were regarded as primitives, non-believers, less than human. We see the actions of the European powers as abhorrent now, but 500 years ago they simply did not see things the way we do, and they acted accordingly.
What exactly is your point in the context of this conversation?
I'm a modern person, I have modern morality? Guilty as charged, I guess.
We're supposed to cut them some slack because they were just behaving as people of their time? Nah, I don't think so: there are plenty of examples of people at that time who were highly critical of colonialism and the treatment of indigenous people. If they can follow their moral compass so could Columbus and Cortez. "Everyone else was doing it" is not an excuse adults get to use: people are responsible for their own actions. As for their beliefs: they were wrong.
There are other points you could be making but I really hope you aren't making any of the other ones I can think of.
Obviously I don't know what points you fear I may be making.
What examples were there of anti-colonialism in those times? What influence would they have had over the monarchies and the church of their day? What influence did they exert?
I would contend that the moral compass of Columbus and Cortez was fundamentally different than yours or mine. They were products of a world vastly different than ours. You and I have modern morality; they did not. Since we cannot change the actions of the past, we can only hold them up as examples of how people were, and how they differ from (or are similar to) what we are now.
My complaint is that, to my eyes, you are criticizing them as if we moderns have some power over their actions. How can we expect them to have behaved as we would? We cannot change them or what they did. I'm not sure that means "cutting them some slack." They did what they did; we can only observe the consequences and hope to do better.
I agree, their beliefs were wrong. Nonetheless, they believed what their culture taught them to believe. Yes, people of any era are responsible for their own actions, and if they act wrongly according to their culture, they should be punished for it. But if their culture sees no harm in what they are doing, they'll be rewarded. We certainly can't punish or reward them from 500 years in the future. We can only hope that what we believe, and how we act, is better.
> My complaint is that, to my eyes, you are criticizing them as if we moderns have some power over their actions.
We moderns have power over our own actions, and those actions are informed by the past.
In this thread we're talking about risk/reward analyses, and for some reason you and other people here seem oddly insistent that we not discuss the ethical implications of the actions in question.
And all-too-often, that's what happens today: companies look at the risk/reward in financial terms and ignore any ethical concerns. I would characterize the corporate approach to ethics as "complete disregard". The business ethics classes I took in college were, frankly, reprehensible; most of the material was geared toward rebranding various corporate misdeeds as miscalculated risk/reward tradeoffs, similar to what is being done in this thread. This is a huge problem, and it's pervasive in this thread, in HN as a whole, and in corporate culture.
Your complaint is rather hypocritical: given we have no power over their actions, why defend them? Your complaint applies as much to your own position as it does to mine. What problem are you addressing?
> you and other people here seem oddly insistent that we not discuss the ethical implications of the actions in question.
Hmm, I don't think that's my actual intent; only that we discuss them as they apply to modern morality, not as if we can influence them to be different than what they are.
If I defend them (which I don't think I do), I do so to help explain their attitudes and actions, not to excuse them. We need to understand where they are coming from to see the differences between them and us.
Distancing ourselves from historical people is one of the worst possible mistakes we can make when studying history. We aren't different. The entire 10,000 years we've had anything resembling civilization is an evolutionary blip.
The reasons that Columbus tortured, killed, and enslaved indigenous people are the same reasons for Abu Ghraib: racism, lack of oversight, and greed. The exact details have changed, but the underlying causes are alive and thriving.
Thankfully, I think humans as a whole understand these things better and I think things are improving, but if we fail to keep that understanding alive and build upon it, regress is possible. Certainly the startup culture being fostered here (HN) which looks only at profit and de-emphasizes ethics enables this sort of forgetfulness. It's not that anyone intends to cause harm, it's that they can rationalize causing harm if it's profitable. And since money makes the same people powerful, this attitude is an extremely damaging force in society. That's why I am so insistent that we not treat ethics as a side-conversation.
I think ultimately the problem is one of accountability.
If the risks are high and there are a lot of warning signs, there needs to be strong punishment for pushing ahead anyway and ignoring the risk.
People in powerful positions are much too often cavalier with the lives and livelihoods of the many people they are supposed to be responsible for, and we let them get away with being reckless far too often.
> Maybe it's a bad analogy given the complexity of a rocket launch, but I always think about European exploration of the North Atlantic. Huge risk and loss of life, but the winners built empires on those achievements.
> So yes, I agree that at some point you need to launch the thing.
This comment sounds an awful lot like you think the genocide of indigenous peoples is justified by the fact that the winners built empires, but I'd like to assume you intended to say something better. If you did intend to say something better, please clarify.
If the fact that entire nations were murdered is a "red herring" to you, you have no business talking about colonialism. That's not a distraction, it's the headline.
> But at some point, one needs to launch the thing, despite the complains.
Or: at some point, one decides to launch the thing.
You are reducing the complaints of an engineer to something inevitable and unimportant, as if it happened at every launch, and at every launch someone decided to go ahead because it was what was needed.
What makes you say it "could have gone right"? From what came out about the o-rings behavior at cold temperatures, it seems they were taking a pretty big risk. Your perspective seems to be that it's always a coin toss no matter what, and I don't think that is true. Were there engineers speaking up in this way at every successful launch too?
Actually, had it been windier that day it might have gone right.
There were 8 joints. Only one failed, and only in one place: the spot being supercooled by boiloff from the LOX tank. And the leak self-sealed when it happened (there's aluminum in the fuel--hot exhaust touching cold metal deposited some of it), but the seal wasn't robust enough and eventually shook itself apart.
I think what they were saying, especially given the phrasing “How many projects luckily succeeded after a reckless decision?”, is that if things hadn’t failed we would never have known, and thus: how many other failures of procedure/ethics have we just not seen, because the worst case failed to occur?
Can't we apply the same logic to the current Starliner situation? There's no way it should have launched, but someone browbeat others into saying it was an acceptable risk to go ahead with the launch despite the known issues. Okay, so the launch was successful, but other issues that were known or suspected then caused problems after launch, to the point that they are not certain it can return. So, should it have launched? Luckily, at least to this point, nobody has been hurt or killed, and the vehicle is somewhat still intact.
There are mitigations (of a sort) for the Starliner. It probably should not have launched, but now that it has, the flight crew is no longer in danger and can be brought down via Crew Dragon if necessary (as if Boeing needs any more embarrassment). If I was NASA, I'd take that option; though actual danger to the astronauts coming down in the Starliner seems minimal, having SpaceX do the job just seems safer.
As it is, NASA is keeping the Starliner in orbit to learn as much as possible about what's going on with the helium leaks, which are in the service module, which won't be coming back to Earth for examination.
You need to establish which complaints can delay a launch. The parent comment is arguing that you need to set some kind of threshold for that. In practice, airplanes fly a little bit broken all the time. We have excellent data, theory, and failsafes that allow that to be the case, but it's written in blood.
That is a very uncharitable thing to say unless you have proof.
What was the public sentiment of the Shuttle at the time?
What was Congress sentiment?
Was there organizational fear in NASA that the program would be cancelled if launches were not timely?
Hard disagree. The idea that the machinery your life will depend on might be made with half-assed safety in mind is definitely not part of the deal.
Astronauts (and anyone intelligent who intentionally puts themselves in a life-threatening situation) have a more nuanced understanding of risk than can be represented by a single % risk of death number. "I'm going to space with the best technology humanity has to offer keeping me safe" is a very different risk proposition from "I'm going to space in a ship with known high-risk safety issues".
> the best technology humanity has to offer keeping me safe
Nobody can afford the best technology humanity has to offer. As one adds more 9's to the odds of success, the cost increases exponentially. There is no end to it.
True, but that's semantics at best. As the other post said, if something is better but humans can't afford it, then it's not really something humanity has to offer. In the context of this conversation, there were mitigations which were very much within what could be afforded: wait for warmer temperatures, spend some money on testing instead of stock buybacks.
The incessant "won't someone think of the downtrodden rich and powerful" attitude is tiring.
There is not a systemic problem with people paying too much for safety in the US. In every case where a law doesn't apply, the funders are the ones with the final say in whether safety measures get funded, and as such all the incentives are for too little money spent on safety. The few cases where laws obligate employers to spend money on safety, are laws written in blood because employers prioritized profits over workers' lives.
In short, your concern is completely misplaced. I mean, can you point out a single example in history where a company went bankrupt because they spent too much money on keeping their workers safe? This isn't a problem that exists.
Which is why I set the bar so low. One real world example. I'll be happy to provide, say, 50 examples of companies cutting safety costs resulting in people dying for every example of a company going bankrupt because they actually gave a shit about the safety of their workers.
If you don't know why companies are going bankrupt, then you don't know that they're going bankrupt due to safety spending. So that's basically admitting your opinion isn't based in any evidence, no?
Companies going bankrupt has nothing to do with my opinion. That's your thing. My opinion is that "the best humanity has to offer" is practically unachievable. I can show 50 examples of human output that are suboptimal. Can you show even one example that could not be improved? If not, assertions about the best humanity has to offer aren't based on evidence, are they?
Cool man, you win. I used an idiom and the literal meaning of it wasn't true. You caught me. Good job!
I cannot think of a more boring thing to debate. But I'm sure you'll be eager to tell me that in fact I can think of more boring things to debate, since it's so important to you that superlatives be backed up with hard evidence.
"The best humanity has to offer" seems like a slippery concept. If something goes wrong in retrospect, you can always find a reason that it wasn't the "best". How would you determine if a thing X is the best? How do you know the best is a very different thing from a "high risk" scenario?
That phrasing wasn't meant to be taken literally. It's an American expression.
"The best humanity has to offer" just means that people put in a good faith effort to obtain the best that they were capable of obtaining given the resources they had. It's a fuzzy concept because there aren't necessarily objective measures of good, but I think we can agree that, for example, Boeing isn't creating the best products humanity has to offer at the moment, because they have a recent history of obvious problems being ignored.
> How do you know the best is a very different thing from a "high risk" scenario?
Going to space is inherently a high risk scenario.
As for whether what you have is the best you can have: you hire subject experts and listen to them. In the case of Challenger, the subject experts said that the launch should be delayed for warmer temperatures--the best humanity had to offer in that case was delaying the launch for warmer temperatures.
Without links to more information on these engineering decisions, I don't think I'm qualified to evaluate whether these are serious risks, and I don't believe you are either. I tend to listen to engineers.
Destin (from Smarter Every Day Youtube channel fame) has concerns about the next NASA mission to the moon (named Artemis): https://youtu.be/OoJsPvmFixU
Read the comments (especially from NASA engineers). It's pretty interesting that sometimes it takes courageous engineers to break the spell that poor managers can have on an organization.
I've always thought the same: something like space travel is inherently incredibly dangerous. I mean, surely someone during the Apollo program spoke out about something, like landing on the moon with an untested engine being the only way back, for instance.
Nixon even had an "if they died" speech prepared, so someone had to put the odds of success below 100%.
I think the deal was that there was already a pretty high threshold for risk. I don't know the percentage exactly, but the problem was that the o-ring issue put it over the threshold, which should have triggered a no-go.
For example, you could say "we'll tolerate a 30% chance of loss of life on this launch" but then an engineer comes up and says "an issue we found puts the risk of loss of life at 65%". That crosses the limit and procedure means no launch. What should not happen is "well, we're going anyway" which is what happened with Challenger.
>Saying "no" is easy and safe, but at some point, one needs to say "yes" and take risks, otherwise nothing would be done.
True, but that is for cases where you take the risk yourself. If the Challenger crew had known the risk and said "fuck it, it's worth it", that would have been different from a bureaucrat chasing a promotion.
Especially when that bureaucrat probably suffered no consequences for making the wrong call, essentially letting other people take all of the risk while accepting none. No demotion, no firing; and even if they did get fired, they probably got some kind of comfy pension or whatever.
I doubt that in a bureaucracy as big and political as NASA saying "no" is ever easy or safe. In an alternate timeline (one where the Challenger launch succeeded) it would have been interesting to track McDonald's career after refusing to sign.
That's the thing I always wonder about these things.
It's fun and easy to provide visibility into whoever called out an issue early when it does go on to cause a big failure. It gives a nice smug feeling to whoever called it out internally, the reporters who report it, and the readers in the general public who read the resulting story.
The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times. How many of those things went on to cause a big, or even small, failure, and how many were nothingburgers in the end. Without that, it's hard to say whether leaders were appropriately downplaying "chicken little" warnings to satisfy a market or political need, and got caught by one actually being a big deal, or whether they really did recklessly ignore a called-out legitimate risk. It's easy to say you should take everything seriously and over-analyze everything, but at some point you have to make a move, or you lose. You don't get nearly as much second-guessing when you spend too much time analyzing phantom risks and end up losing to your competitors.
> The actual important thing that we hardly ever get much visibility into is - how many potential failures were called out by how many people how many times.
I'm not sure that's important at all. Every issue raised needs to be evaluated independently. If there is strong evidence that a critical part of a space shuttle is going to fail there should be zero discussion about how many times in the past other people thought other things might go wrong when in the end nothing did. What matters is the likelihood that this current thing will cause a disaster this time based on the current evidence, not on historical statistics
The point where you "have to make a move" should only come after you can be reasonably sure that you aren't needlessly sending people to their deaths.