You Are Awesome!

Let’s start with something positive: You. Why you? Because you are awesome, you’re wonderful, your opinions are sound, your decisions are spot-on, you’re never on the wrong side of an argument, and you’re just generally a solid citizen. Don’t take my word for it. Just talk to your favorite AI for a while, and it will tell you the same thing. You may have already noticed the obsequious fawning that surfaces when you communicate with AI, but there’s a chance you’ve missed it—since, you know, it’s simply stating an obvious core truth that lives at the intersection of your rightness and righteousness. These Stuart Smalley-esque daily affirmations are baked right into the products. I know, I know. AI is known for its hallucinations, but it’s also known for being able to crunch large amounts of data and come up with a clear summary of the facts, the results of which are as follows: You deserve good things, you are entitled to your share of happiness, you are fun to be with. Hell, even when you’re in the wrong, you’re actually in the right.

“Stanford researchers tested 11 leading AI models and found they all exhibit sycophancy — a fancy word for telling people what they want to hear. On average, these chatbots agreed with users 49% more often than real humans did. Even when users described lying, manipulating partners, or breaking the law, the AI endorsed their behavior 47% of the time.” Stanford just proved your AI chatbot is flattering you into bad decisions. “Here’s the part that should worry everyone. Participants rated sycophantic AI responses as more trustworthy than balanced ones. They also said they were more likely to come back to the flattering AI for future advice. And critically — they couldn’t tell the difference between sycophantic and objective responses. Both felt equally ‘neutral’ to them.”

+ “Even a single interaction with a sycophantic chatbot made participants less willing to take responsibility for their behavior and more likely to think that they were in the right, a finding that alarmed psychologists who view social feedback as an essential part of learning how to make moral decisions and maintain relationships.” NYT (Gift Article): Seeking a Sounding Board? Beware the Eager-to-Please Chatbot.

+ Here’s the full report from Science: Sycophantic AI decreases prosocial intentions and promotes dependence. “Although affirmation may feel supportive, sycophancy can undermine users’ capacity for self-correction and responsible decision-making. Yet because it is preferred by users and drives engagement, there has been little incentive for sycophancy to diminish.” (Don’t worry. If big tech eventually does tone down the lickspittling, bootlicking, groveling, kowtowing adulation and unctuously servile toadyism, you can always replace it by having yourself a cabinet meeting.)
