I listened to an old Radiolab about the Trolley Problem, where they did brain scans and stuff, and supposedly it proves that people have an innate morality! They're unwilling to kill a person with their bare hands, because it lights up a different part of the brain! They're no longer doing a rational calculus; they're using instincts honed over millennia!
My counterhypothesis to basically all trolley probleming: look, the reason that people will flip a lever but not push a fat guy off a bridge is about certainty. If you somehow knew for sure that the falling fat guy would divert the train, and if it were common knowledge that the fat guy would divert the train, wayyy more people would go for it. But give me those two scenarios, and in the second one I've got to make a bet that the fat guy will divert the train and that I'm not just killing a guy for nothing. Even if you tell me, yeah, the fat guy will definitely block the train, it doesn't feel like it, because I've never experienced a situation where I could know for sure that the fat guy I'm killing would actually divert a train.
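To put rough numbers on that intuition, here's a toy expected-value sketch. The casualty counts and probabilities are mine, purely for illustration; the point is just how fast the push stops looking like a clean 1-for-5 trade once you're not certain it works.

```python
# Toy expected-value sketch of the certainty argument above.
# All numbers are made up for illustration.

def expected_deaths_lever():
    # Flipping the lever: the diversion is mechanically certain,
    # so exactly one person on the side track dies.
    return 1.0

def expected_deaths_push(p_divert):
    # Pushing the fat guy: he dies for sure, and if his body fails
    # to stop the trolley (probability 1 - p_divert), the five on
    # the track die as well.
    return 1.0 + (1.0 - p_divert) * 5.0

if __name__ == "__main__":
    for p in (1.0, 0.9, 0.5, 0.1):
        print(f"belief that the push works = {p:.1f}: "
              f"expected deaths = {expected_deaths_push(p):.1f} "
              f"(lever = {expected_deaths_lever():.1f})")
    # Only when p is essentially 1 does the push look like the same
    # trade as the lever; at lower confidence you may just be
    # killing a guy for nothing.
```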
Here's another thought experiment: I've got a dish of wonderful ice cream! Would you eat it? Sure.
Ok, now I've got a dish of wonderful ice cream that looks like dog turds! I promise it's delicious, really! Would you eat it? We have no prior experience for this kind of thing - I've never seen a dog turd that actually turned out to be ice cream. Plus, the risk is asymmetric. If you don't eat the ice cream, all you miss out on is some ice cream; if you do eat it and it turns out to be that 1% of the time where I'm a liar, ugh.
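Same idea in toy numbers: the payoffs and the 1% "liar" probability below are invented, but they show how a big asymmetric downside swamps a small upside even when the promise is almost certainly true.

```python
# Toy illustration of the asymmetric-risk point above.
# Payoffs and probability are invented for the example.

P_LIAR = 0.01          # chance the promise is false
U_ICE_CREAM = 1.0      # mild upside: it really is delicious ice cream
U_DOG_TURD = -100.0    # severe downside: it is exactly what it looks like

def expected_utility_of_eating():
    return (1.0 - P_LIAR) * U_ICE_CREAM + P_LIAR * U_DOG_TURD

def expected_utility_of_passing():
    # Declining costs you nothing but the ice cream you didn't eat.
    return 0.0

if __name__ == "__main__":
    print(f"eat:  {expected_utility_of_eating():+.2f}")   # -0.01
    print(f"pass: {expected_utility_of_passing():+.2f}")  # +0.00
    # Even at 99% honesty, the downside is so much bigger than the
    # upside that passing comes out ahead.
```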
(The people running these studies have probably considered and accounted for this. At least, I hope so.)
Note, though, that all our hand-wringing about the trolley problem, especially as it relates to self-driving cars, is likely a waste of time. Nobody's gonna program in "save the passengers first!" or "be utilitarian!" - the car's going to decide based on whatever combination of 10,000 algorithms it's got built in, and we've got to hope that it does the right thing. And the right thing will 99.99999% of the time be "slam on the brakes."
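To be clear about what I mean, here's a deliberately cartoonish sketch (I don't build self-driving cars; every name and number is hypothetical): there's no ethics flag anywhere, just a planner picking whichever maneuver looks cheapest, and shedding speed almost always looks cheapest.

```python
# Cartoon sketch of the point above: no "be utilitarian!" switch,
# just cost minimization over candidate maneuvers.
# All names and numbers are hypothetical.

def estimated_cost(maneuver, scenario):
    # In a real system this would come from thousands of interacting
    # models; here it's a stand-in lookup table.
    return scenario.get(maneuver, float("inf"))

def choose_maneuver(scenario):
    candidates = ["hard_brake", "swerve_left", "swerve_right", "maintain"]
    return min(candidates, key=lambda m: estimated_cost(m, scenario))

if __name__ == "__main__":
    # In almost every emergency, slowing down lowers every cost term,
    # so hard braking wins without anyone programming in a moral rule.
    pedestrian_ahead = {
        "hard_brake": 1.0, "swerve_left": 8.0,
        "swerve_right": 6.5, "maintain": 50.0,
    }
    print(choose_maneuver(pedestrian_ahead))  # -> hard_brake
```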