The Answering Machine
We often complain about human biases and errors when other people make decisions that have an impact on our lives. So the idea of letting less error-prone, unbiased, code-powered algorithms take over has a certain allure. But the machine is not your friend. The code was written by humans. And when mistakes are made, there’s no one to complain to. In Quartz, Rachel O’Dwyer explains how algorithms assessing credit scores are making the same mistakes humans made a century ago.
+ “Artificial intelligence may have cracked the code on certain tasks that typically require human smarts, but in order to learn, these algorithms need vast quantities of data that humans have produced. They hoover up that information, rummage around in search of commonalities and correlations, and then offer a classification or prediction (whether that lesion is cancerous, whether you’ll default on your loan) based on the patterns they detect. Yet they’re only as clever as the data they’re trained on, which means that our limitations—our biases, our blind spots, our inattention—become theirs as well.” Danielle Groen in The Walrus: How We Made AI As Racist and Sexist As Humans. (There are certain humans who still manage to maintain an edge…)
+ The Atlantic: The Future of AI Depends on High-School Girls.