Radiolab: Driverless Dilemma


This episode opens with Nick Bilton’s obsessive concern about automation, AI, and driverless cars and their effects on society, then shifts to an interesting take on the Trolley Problem. Radiolab replays a decade-old episode in which neuroscientists/applied philosophers put people into fMRI machines and pose variants of the classic trolley problem to see how the brain processes these moral quandaries. They then fast-forward to the present and discuss the renewed relevance of this work after a Mercedes-Benz executive stated last year that their AI should prioritize the safety of the car’s occupants.

Things to Note:

Framing Matters in Decision-making

The fMRI experiments explain an apparent paradox that has been explored through the variations of the Trolley Problem: why do people make different decisions when equivalent outcomes are framed differently (pulling a lever vs. pushing a fat man)? The fMRI work introduces a pragmatic element that was (apparently) absent from moral philosophy: human biology has a role in determining how we make (moral) decisions (shocking). By demonstrating that different parts of the brain are engaged by the different variants of the problem, these studies resolve the apparent paradox.

After the show links this research on the Trolley Problem to autonomous vehicles via the Mercedes-Benz executive’s pronouncement, they interview people on the street and uncover an interesting policy problem: people say they are in favour of a socially optimal AV AI, one that would seek to minimize total loss of human life, but they would not ride in a car that puts someone else’s safety above their own. While the science explains where this dilemma comes from, it doesn’t yet prescribe solutions to this fundamental marketing problem for autonomous cars. Radiolab does, however, raise further dystopian questions about whether cars could estimate different values of human life in order to prioritize whom to save.
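
To make the tension concrete, here is a toy model (all numbers are invented for illustration, and the policy names are my own labels) comparing a “utilitarian” AV that minimizes total expected deaths against a “self-protective” one that never endangers its occupant:

```python
# Toy model of the AV policy dilemma. All numbers are invented.
# Each scenario: (probability the occupant dies if the car swerves,
#                 number of pedestrians killed if it stays its course)
scenarios = [
    (0.8, 3),
    (0.5, 1),
    (0.9, 5),
]

def expected_deaths(policy):
    total, occupant_risk = 0.0, 0.0
    for p_swerve, pedestrians in scenarios:
        if policy == "utilitarian":
            swerve = p_swerve < pedestrians  # swerve when it saves lives on net
        else:  # "self-protective": never endanger the occupant
            swerve = False
        if swerve:
            total += p_swerve
            occupant_risk += p_swerve
        else:
            total += pedestrians
    return total, occupant_risk

for policy in ("utilitarian", "self-protective"):
    total, occupant_risk = expected_deaths(policy)
    print(f"{policy:>15}: total deaths={total:.1f}, occupant risk={occupant_risk:.1f}")
```

The utilitarian policy wins on total lives but shifts all of the residual risk onto the occupant, which is exactly the preference gap the street interviews expose.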

My Thoughts

The Trolley Problem Is Tired

I’ve tweeted a couple of complaints about the excess attention the Trolley Problem is getting in relation to driverless cars. They can be summed up by Mr. Solo:

(Han Solo GIF, via GIPHY)

Or, if you would rather read an actual professor of electrical engineering explain our frustrations:

To expand on that further, the Trolley Problem assumes the decision-maker has perfect information about the situation. The philosopher knows exactly how many people will die if they don’t pull the lever and knows exactly the consequences of pushing the fat man onto the tracks. In real life, even in the very rare circumstance that such a problem arose, the decision-maker would likely be operating with imperfect information and would therefore have to take uncertainty into account in their decision. Ditto for the autonomous vehicle: it is unlikely that its set of sensors would capture such perfect information, nor is it likely that its AI would ever explicitly decide between saving the occupant and running over a family. It is an open question, however, whether autonomous-vehicle AIs would implicitly make that decision, as the Benz executive implies with his “protect the occupant” imperative.
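
Here is a minimal sketch (the detection confidences are hypothetical) of what that difference looks like: instead of the philosopher’s exact body counts, the planner only has per-object probabilities and minimizes expected harm rather than comparing known outcomes:

```python
# Minimal sketch of deciding under uncertainty instead of with the
# philosopher's perfect information. Detection confidences are hypothetical:
# each value is the probability that a detected object is a person.
detections = {
    "stay_course": [0.95, 0.40],  # two objects ahead, maybe people
    "swerve_left": [0.10],        # one object, probably a traffic cone
}

def expected_people_harmed(path):
    # Treating detections as independent, the expected number of people
    # harmed on a path is the sum of the per-detection probabilities.
    return sum(detections[path])

best = min(detections, key=expected_people_harmed)
for path in detections:
    print(f"{path}: expected people harmed = {expected_people_harmed(path):.2f}")
print("chosen action:", best)
```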

Finally, props to the Radiolab team for, eventually, highlighting that this problem is getting more attention than it deserves because:

  1. it would likely account for a small fraction of the fatalities caused by autonomous vehicles, and
  2. autonomous vehicles should be safer than human drivers by a factor of 1000 (see the quick arithmetic below).
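
For point 2, a back-of-envelope calculation shows the scale involved; the baseline figure is an assumption on my part (roughly 37,000 annual US road deaths, a commonly cited number):

```python
# Back-of-envelope check on point 2. The baseline is an assumption:
# roughly 37,000 annual US road deaths, a commonly cited recent figure.
human_fatalities_per_year = 37_000
safety_factor = 1_000  # the episode's aspirational 1000x improvement

av_fatalities_per_year = human_fatalities_per_year / safety_factor
print(f"At 1000x safer: ~{av_fatalities_per_year:.0f} deaths per year,")
print("of which trolley-style dilemmas would be only a fraction.")
```
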
autonomous-vehicles, radiolab, trolley-problem