Hacker News | dereferenceddev's comments

It is very apparent that Facebook wants to avoid appearing to be an "editor" of content in almost any shape or form, in an effort to reduce its chances of being regulated by the government.


I've rarely heard someone make the counter-point that a fetus is technically half "you", which really does fundamentally change the violinist example.

The violinist experiment does have a lot of holes in it, but I think that one in particular almost turns it on its head, because it circumvents the crux of the problem: consent.

It would seem you would have to change the thought experiment to have the violinist actually be related to you (say a sibling). Now, would a person feel _as_ upset that they had to allow their sibling to use their kidneys for nine months in order to stay alive? That really changes it.


A fetus is half another person, whose "selfish genes" would benefit by the fetus taking an excessive share of resources. So there's an inherent tension that isn't determined by culture or moral theories.


There just seems to be a misunderstanding of the intention of a thought experiment here. There is a subtle implication that because the thought experiment is "contrived", it's less valuable.

A core part of philosophy is to reduce concepts, ideas, or beliefs into abstractions (a "spirit" or "essence" of their intention), and the thought experiment presents itself as a perfectly useful tool to help challenge those concepts.

Ethics is not about absolutes, but really about sussing out where gray areas exist in what some might believe are black and white situations. Additionally, most of the scenarios are presented as a thought experiment, because conducting a real experiment with those conditions would be wildly _un_ethical.

How are we supposed to determine the nuances of valuing human life if we were bound to conducting actual experiments? Also, who in their right mind would conduct such an experiment?

Zimbardo faced enough flak testing the limits of obedience and authority in the Stanford Prison Experiment.


Why does every comment like this, criticising an article, always claim the author doesn't understand the concept they're talking about? You can just disagree with something without using cheap tactics like this to imply the author is ignorant.

I see no evidence of this claimed misunderstanding in the linked article. And there's nothing "subtle" about it saying thought experiments lack value because they're contrived; that is the central point it's making.


Do you not see the irony of then taking the position that my tactics are “cheap”?

Having read the same article, my takeaway from their characterization of thought experiments was that they misunderstood their real purpose. To pit thought experiments against what real professionals would do in particular scenarios really does undermine the point of presenting them. They're used to challenge our notions about the choices we make and to think about how we shape our moral code.

They ultimately are intended to make you think more about why you think a particular way about something.

So that is why it seems to be misunderstood.


They have Trolley problem problem problems, and you have Trolley problem problem problem problems.


I present: The Trolley Solution. I decide what happens to the trolley in every scenario and everyone else is absolved of guilt.


Oh, I thought the Trolley Solution was that you tie all philosophers to the tracks and then run them over.


Is that your final answer?


As I see it, it's just a tool to try to unravel principles. They aren't trying to solve specific situations. Thought experiments often have counter thought experiments. With the trolley problem, if pulling the lever seems more ethical, consider instead having 4 patients who will die without an organ transplant and 1 patient who is healthy and whose organs could be harvested to save the 4. It starts becoming harder to work out what the critical principles are. Just like in the article's killing vs. letting die discussion, the assassin example is an alternative thought experiment. This matters more if you have to encode these things in laws or policy, where you need a more generic guide to what is acceptable. But you wouldn't rely on thought experiments alone to do this; they are just tools to help shake the "thought tree" to see what falls out.


You're totally striking on exactly how I would frame thought experiments. They're a tool that helps further a conversation and discussion, to challenge what might be a preconceived notion.

Once you shake out enough thoughts from the thought tree, you can hopefully discover what they're all pointing to in order to find what might _actually_ be driving an ethical decision.


It's just debugging or maybe reverse engineering of some subject or belief.


Taking ethics out of context squashes the very life out of it. The related context is the whole ball game. Further, pretending one thing is like another (adult violinist dependent on your kidney; foetus dependent upon mother) erases several critical ethical points in an obvious attempt to argue toward a conclusion. It's disingenuous.

A simpler example: killing a convicted murderer is murder again! Oh, except the murderer has a whole different ethical context than, say, a child: the murderer is again a threat to more people; the murderer had agency and could have decided not to murder; the act has societal benefit instead of harm; and on and on.

And if you don't like my argument, then voila! Taking your ethical dilemma and recasting it as mine seems "not fair!" to you. Which ironically is my point and QED.


I don't think you're allowed to sneak "when I kill it has societal benefit" into the "context".


Nowhere in that statement was "I" stated or implied. The actor can be anything you want - a group, the state as an apparatus, a quantum-triggered death box, etc.


> philosophy is to reduce concepts, ideas, or beliefs into abstractions

This in itself can be seen as problematic, as one can say that trying to reduce "concepts, ideas, or beliefs into abstractions" is futile: those "things" are not black and white, and they do not conform to the Aristotelian logic of one thing/concept being either "true" or "false". There's a "continuum" (for lack of a better word) when trying to define each and every one of those terms.

In other words, one cannot simply be moral or not-moral (to go back to the trolley problem); there's a moral "continuum" which even varies with time (I know I'm less "moral" before I've had the chance to drink my coffee early in the morning).


But even on the continuum, there will usually be areas. If you consider the trolley problem and you're dealing with a threshold deontologist, having zero persons on the track will likely result in a different answer ("I don't throw the guy over the bridge", hopefully) than having one person on the track, and at some point the answer will shift again. We only need to reduce the problem to find the points of change (or areas, as few people will probably say "well, at 99? No, but for 100 lives, yeah, I'll throw him over"); we don't need to keep a 1:1 mapping of all possible input values to output values.

To use things, we usually reduce the individual parts and use abstractions to deal with what's important (to us in that context). You don't deal with individual atoms; you look at clusters, and clusters of clusters, and understand and use them altogether as a pen, just as you've seen similar pens before, even though they were 100% completely different if you look close enough.
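The "points of change" idea can be sketched in code: a toy model of a threshold deontologist, where below some number of lives at stake the rule wins and above it the consequences dominate. The threshold value here is purely hypothetical; the point is only that probing a few inputs finds the shift without mapping every value.

```python
THRESHOLD = 100  # hypothetical point where this agent's answer shifts

def would_push(lives_at_stake: int) -> bool:
    """True if this imaginary agent would push the man off the bridge."""
    return lives_at_stake >= THRESHOLD

# No 1:1 map of all inputs needed; a handful of probes locates the change.
for n in (0, 1, 99, 100, 1_000_000):
    print(n, would_push(n))
```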


I agree, but also disagree, because many people take this as a literal problem. People think "how can we have self-driving cars until they can solve the trolley problem in a way I find agreeable?" but that's not really how the world works. Just try not to hit people. And not just practically, in the sense that literal trolley problems never occur. If you find yourself in a situation of needing to choose the lesser of two evils, you're probably just best off believing that absolute morality doesn't exist and going off your gut, because that's how you, as an individual human, work.


I mostly agree. And it's certainly way overblown in the context of self-driving.

On the other hand, we can imagine that at some point there may be some broad principles that engineers will have to consider when programming responses to an impending accident. Perhaps slamming on the brakes is essentially always the optimum solution from the point of view of driver safety if several people unexpectedly appear in front of the car, even though braking will be insufficient to avoid hitting them (albeit at a reduced speed). On the other hand, there's an unquantifiable but known risk (to the driver) in swerving to avoid the otherwise certain collision.

That a human would pick one or the other approach in a split second without really having time to think about it doesn't really inform what the programmed response of a vehicle that does have time to deliberately pick from several options should be.

Added: It is, of course, pretty much an academic exercise today in that cars can't reliably even be prevented from running into highway dividers. But, in a few decades or whenever, one can imagine being at a point where the technology is good enough that some framework is needed for making rules. Maximizing driver safety at all costs vs. trying to do the least harm overall may well not result in the same rules being programmed.
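The point that "driver safety at all costs" and "least harm overall" may diverge can be made concrete with a toy sketch. All options, harm values, and scales here are invented for illustration; nothing resembles a real vehicle's decision logic.

```python
# Each option maps to (harm to driver, harm to others) on an arbitrary scale.
OPTIONS = {
    "brake": (0.1, 0.6),   # safest for the driver, still hits pedestrians slowly
    "swerve": (0.4, 0.0),  # avoids the pedestrians, risks the driver
}

def driver_first(options):
    """Rule framework 1: maximize driver safety at all costs."""
    return min(options, key=lambda o: options[o][0])

def least_harm(options):
    """Rule framework 2: minimize total harm to everyone."""
    return min(options, key=lambda o: sum(options[o]))

print(driver_first(OPTIONS))  # "brake"
print(least_harm(OPTIONS))    # "swerve"
```

The same situation, run through two different frameworks, produces two different programmed responses, which is exactly why some agreed framework would eventually be needed.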


I don't think the author's objection is to thought experiments in principle but rather, to the way they are composed and used.

After all, a thought experiment that only holds true given an impossible premise would have no rhetorical power if we weren't willing to extrapolate our conclusions beyond the premise, to the possible. Which we are - that's why the trolley problem is taught to people who aren't railway signalmen.

There's a risk that some arguments convince us because we think they have isolated a pure abstraction, when actually it's the false dilemma in the premise that convinced us while the argument distracted us.


I believe that's why you don't use the most extreme thought experiment and then go "okay, thanks, we're done here". You want to be able to zero in on different factors. E.g. save the musician by giving five minutes instead of nine months? Lower the risk of death of a part of society by accepting some restrictions on your freedoms?

I view it like a black box where you can define inputs and observe the output and your goal is to understand what calculations are made inside. Thought experiments are essentially fuzzing it to understand what's happening. "What if it's a fat guy? What if it's a smoker? What if it's a politician? What if it's a homeless person? What if there are 10 people, what if there are a million?" and you're iterating over lots of options to get closer to understanding the formula.
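That fuzzing analogy can be sketched directly. Here `judge` is a made-up stand-in for a real person's gut responses (in practice you'd ask the person, not write the function); the loop is the "iterating over lots of options" part.

```python
from itertools import product

def judge(who: str, at_stake: int) -> str:
    """Hypothetical gut response to a trolley variant; not a real model."""
    if at_stake >= 1_000_000:
        return "pull"
    return "don't pull" if who == "family member" else "pull"

# Fuzz the black box: vary one factor at a time and record the output.
for who, n in product(["fat guy", "smoker", "politician", "family member"],
                      [1, 10, 1_000_000]):
    print(f"{who}, {n} at stake -> {judge(who, n)}")
```

Wherever the output flips as one input changes, you've found a factor that actually figures in the hidden formula.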


Instead of conceptual people you could experiment with real fish or chickens.


If you train a chicken to pull a trolley lever, does it weigh on the conscience of the chicken whether they killed 1 chicken vs 5 chickens?

I'm not a chicken so I don't really know.


If one could train a chicken to pull or not pull a lever to save x amount of chicken lives, does it weigh on the conscience of the trainer that they’ve compelled a self-aware creature to kill one of its own kind?

(I’m not asking this to refute your point, it’s well met; I just found the opportunity to poke at the imaginary ethical elephant too much to resist, and no, I’m not proud of it; those were good hens)


He could simply argue it was all dereferenceddev's idea.


Touché


I think he meant with humans pulling the lever and chickens dying.


I was trying to be silly in pointing out the real reason why the trolley car experiment is supposed to have some challenge to it.


There are animals people have deep affection for, and you’ll find great resistance to these experiments; nevertheless, I think you can get valid data from experimenting on “lower” animals.


The whole point of using a thought experiment is to gain intuitions about the "but what if it wasn't a person, but a chicken instead? Would that be equivalent? Why?" Ie, the value of the experiment is in the experiential learning of the attempt to arrive at the answer. At least, imho

Sure, you can do things that you consider evil just to see how you would really react to being presented with such an unpleasant binary choice, but if you chose to do it, was it in fact not evil to you, are you just pathological, or are you perhaps in toddlerhood, where you don't yet really have an intuition for your own relationship with causality, such as it is?


I think the ethics of humans dying vs delicious chickens dying are pretty incomparable.


Of course, but I think you can still surface those ethics without people as the experimental subject. Get pacifists, vegans, what have you.


What if the human is hungry?


That said, the experiment does deny the always-present uncertainty (or hope, depending on what you call it). Arguably the most ethical course of action would be to do everything to prevent any deaths. This might be futile, but it's impossible to know at the time.


If you want to maximize ethical utility, pick the course which reduces the greatest amount of suffering. Prognosticators not required.


It’s impossible to know the net results after hundreds of years. The assassination of Archduke Ferdinand might be a net positive for people currently alive today, or a net negative, but it really seemed like a bad thing for the next several decades.

Essentially, people can’t make judgements with total information.


No.


Yes.


Justifying murder, no.


I am saying it’s such a large change that we can’t tell if it’s currently benefiting us. It’s one of those events that had such a large impact it’s impossible to say what the world would be like without it.

It seems like it was responsible for WWI, WWII, the Spanish flu, etc., but would worse things have happened instead? I doubt it, but maybe it indirectly prevented total nuclear war.

Which when you get down to it is true of everything, we need to act with imperfect information.


It’s literally a non-argument designed to avoid even considering my position and to talk over it with flights of fancy. Every decision, ever, is based on incomplete information. Your argument is an argument against the very state of knowing. It’s Descartes, but with even less understanding of a causal system.

Your argument: we can’t possibly know everything, therefore assume nothing is correct. Except reality is a persistent illusion, so we work within that reality. Your fantasy world where there is a god that can see all futures is fantasy. Don’t confuse the imagination with reality.


> Whom? Well, the greatest number of people possible. All time frames. All scopes. You philosophers always trying to confuse the simple.

Your philosophy is literally dependent on information that’s impossible to know, which is a real limitation. In the real world we can estimate harms and benefits, but it’s the unknown factor which makes sacrificing someone ‘for the greater good’ so abhorrent.

If the rule is to minimize total probable harm, then you need to consider second order effects of choices. You further need to carefully consider/model repercussions rather than assuming each choice has obvious costs. Simply shooting infected people may reduce the risks of spreading a plague in the short term, but it’s got huge long term issues as people then try and hide their infection etc.


You literally think you are making a coherent argument that doesn’t collapse in on itself?


> pick the course which reduces the greatest amount of suffering

Reduces for who? Over what time frame? What's the scope, and why that scope and not a different scope?


Whom? Well, the greatest number of people possible. All time frames. All scopes. You philosophers always trying to confuse the simple.

