Technology often has an upside and a downside. During the pandemic year, social technologies kept my kids’ social and academic lives somewhat on track. Those same social technologies also greased the skids for conspiracy theories and deadly health misinformation, and pushed American democracy to the brink.

And so it is with artificial intelligence. On one hand, the technology is used to simplify business tasks and make vital health care decisions. On the other hand, some AI intended to help can actually hurt. Consider the AI used in Allegheny County to determine which families should be investigated for potential child neglect or abuse. “According to new research from a Carnegie Mellon University team obtained exclusively by AP, Allegheny’s algorithm in its first years of operation showed a pattern of flagging a disproportionate number of Black children for a ‘mandatory’ neglect investigation, when compared with white children. The independent researchers, who received data from the county, also found that social workers disagreed with the risk scores the algorithm produced about one-third of the time.” An algorithm that screens for child neglect yet exhibits racial bias and earns so little trust from the social workers who use it raises serious concerns. What we really need is an AI that tells us when it’s appropriate to use AI.