When self-driving cars take over, they’re going to have to make some very human decisions. And we’re going to have to code those morals into the software. How do we prioritize one life over another? As a start, “millions of people in 233 countries and territories weighed in on whose lives self-driving cars should prioritize, revealing how much ethics diverge across cultures.” The scenarios were simple, but the choices were difficult. “Should a self-driving car prioritize humans over pets, passengers over pedestrians, more lives over fewer, women over men, young over old, fit over sickly, higher social status over lower, law-abiders over law-benders?” MIT Tech Review: A global ethics study aims to help AI solve the self-driving trolley problem. Opinions varied markedly by country. (You may want to consider spending your sunset years in the land of the rising sun…)

+ From Nature, here’s a more complete write-up of this very interesting set of issues: The Moral Machine Experiment. This is one of those times I really wish I had learned how to code. I have a feeling self-driving cars will be designed to swerve into a crowd of humanities majors to save one engineer…
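
+ For readers who do code, here’s a toy sketch of what “coding those morals into the software” might look like at its crudest: a Python scoring function over hypothetical preference weights for a few of the study’s dichotomies (humans vs. pets, young vs. old, pedestrians vs. passengers). Every weight, name, and number below is invented for illustration; the study measured people’s preferences, it didn’t prescribe any of them.

```python
# Toy sketch (not from the study): score crash outcomes with made-up
# preference weights loosely modeled on the Moral Machine's dichotomies.
# All weights, keys, and scenarios here are hypothetical.

HYPOTHETICAL_WEIGHTS = {
    "human": 1.0,       # humans over pets
    "pet": 0.3,
    "young": 0.2,       # bonus for sparing the young
    "pedestrian": 0.1,  # pedestrians over passengers (or the reverse, depending on culture)
}

def outcome_score(spared):
    """Sum the hypothetical weights for everyone a maneuver would spare."""
    score = 0.0
    for being in spared:
        score += HYPOTHETICAL_WEIGHTS.get(being["kind"], 0.0)
        if being.get("young"):
            score += HYPOTHETICAL_WEIGHTS["young"]
        if being.get("pedestrian"):
            score += HYPOTHETICAL_WEIGHTS["pedestrian"]
    return score

# Two maneuvers, each sparing a different group.
swerve   = [{"kind": "human", "young": True,  "pedestrian": True}]
straight = [{"kind": "human", "young": False, "pedestrian": False},
            {"kind": "pet"}]

print("swerve:", outcome_score(swerve))      # 1.3
print("straight:", outcome_score(straight))  # 1.3 -- a dead tie
```

The tie is the punch line: with weights this crude, the “right” maneuver turns entirely on which culture’s preferences you baked in.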