And Doggone It, People Like Me!
“What I see in these stories are fragments of a larger problem that will be with us for years, and maybe decades. I don’t just think about the vulnerable adults who can be lured into chats that inflate their delusions. I also think about today’s children, including my daughter, who will grow up around friendly AI conversationalists that they’ll turn to for finishing their homework, drafting texts to girls and boys in high school, resolving fights with their parents, working out ethical challenges, and managing the hormonal circus of being a teenager. On the receiving end of these articulated fears may be not only messy, flawed, distracted friends, but also the articulate, always-online, and highly practiced you-are-so-right reassurance of a disembodied bot that excels in flattery.” Derek Thompson on The Looming Social Crisis of AI Friends and Chatbot Therapists, or How to Manufacture Narcissism at Scale… (As if humans haven’t been good enough at doing that on their own…)
+ “The follies began when lawyers—including some at prestigious firms—submitted documents citing cases that didn’t exist. Similar mistakes soon spread to other roles in the courts. In December, a Stanford professor submitted sworn testimony containing hallucinations and errors in a case about deepfakes, despite being an expert on AI and misinformation himself. The buck stopped with judges, who—whether they or opposing counsel caught the mistakes—issued reprimands and fines, and likely left attorneys embarrassed enough to think twice before trusting AI again. But now judges are experimenting with generative AI too.”
