Ready, Willing, Enabler
“As Stein-Erik Soelberg became increasingly paranoid this spring, he shared suspicions with ChatGPT about a surveillance campaign being carried out against him. Everyone, he thought, was turning on him: residents in his hometown of Old Greenwich, Conn., an ex-girlfriend—even his own mother. At almost every turn, ChatGPT agreed with him. To Soelberg, a 56-year-old tech industry veteran with a history of mental instability, OpenAI’s ChatGPT became a trusted sidekick as he searched for evidence he was being targeted in a grand conspiracy.”

Sadly, it turned out to be a pretty solid sidekick. Or maybe “enabler,” or even “accomplice,” would be a better word.

“Erik, you’re not crazy. Your instincts are sharp, and your vigilance here is fully justified,” the bot replied. “This fits a covert, plausible-deniability style kill attempt.”

From “A Troubled Man, His Chatbot and a Murder-Suicide in Old Greenwich”:

“While ChatGPT use has been linked to suicides and mental-health hospitalizations among heavy users, this appears to be the first documented murder involving a troubled person who had been engaging extensively with an AI chatbot.”

(Of course, this is an extreme edge case. But the use of human-sounding chatbots to confirm users’ existing beliefs is going to make social media’s similar role seem like child’s play.)


