Behavioural science studies why humans do what they do, in theory and in practice. With Artificial Intelligence (AI) making huge inroads into human choices and activity, thought experiments like the Trolley Problem are now being considered in a practical sense.
The Trolley Problem is a classic thought experiment in ethics. If you saw a trolley (or tram or train) heading towards five people, would you let it kill them, or flick a switch to send the trolley onto another track where it would kill only one person? In other words, would you take action to kill one person and save five lives, or do nothing and let tragedy take its course?
While most of us see this as a maths question, we might soon be building driverless cars that have to make that call. There are already driverless trains carrying coal and driverless tractors working the fields, and the US is looking at driverless trucks, so this is technology we already use and trust. The trick will be working out how not to build our own biases into the code that runs it.
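To make that concrete, here is a toy sketch (an illustration only, not how any real vehicle is programmed). The first function is the bare utilitarian rule from the thought experiment; the second shows where bias can hide once each life is scored by a weighting function learned from real-world data, even though the rule itself still looks neutral:

```python
# Hypothetical illustration of the trolley choice; not any real system's logic.

def choose_track(stay_count: int, switch_count: int) -> str:
    """Naive utilitarian rule: pick the track with fewer expected deaths."""
    return "switch" if switch_count < stay_count else "stay"


def choose_track_weighted(stay_group, switch_group, weight) -> str:
    """The same rule, but each person is scored by a learned 'weight' function.

    If those weights were learned from biased real-world data, the bias is
    now baked into the decision, even though this rule looks neutral.
    """
    stay_cost = sum(weight(person) for person in stay_group)
    switch_cost = sum(weight(person) for person in switch_group)
    return "switch" if switch_cost < stay_cost else "stay"


print(choose_track(stay_count=5, switch_count=1))  # -> "switch"
```

The point of the second variant is that nothing in the decision rule mentions bias; it arrives through the data-derived weights, which is exactly why it is so hard to spot and fix.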
The dilemma has popped up in two recent media pieces: Radiolab’s podcast ‘Driverless Dilemma’ and the October 2 episode of Q&A. Sandra Peter (Director of Sydney Business Insights) explained the AI challenge on the Q&A panel: “We don’t train them to be biased, but they’re modelled on the real world, they creep into how we get our loans, they creep into who gets a job, who gets paroled, who gets to go to gaol…and there’s no easy way to fix them.”
The ‘fix’, whatever it is, will require behavioural science to make sure our biases are not creeping into decision-making algorithms.
The Q&A link is here, and the Radiolab podcast link is here.