Let There Be Light
The comedian Steven Wright has an old joke in which he describes a light switch in his house that does nothing. Every now and then, he’d just flick it up and down. As he explains, “About a month later, I got a letter from a woman in Germany … saying, ‘Cut it out.’” I probably owe that same woman an apology, as I spent much of the last week hopelessly flicking many of the light switches in my house after suffering a system-wide software glitch. (Yes, my house features a layer of software between the switches and the lights, because just turning them on and off seemed too efficient.) Faced with this challenge, I did what every modern, skill-free homeowner would do. I talked to ChatGPT. The responses were incredibly detailed, incredibly certain, and incredibly supportive of my efforts. I was doing all the right things, and there was no reason for me to get discouraged while dealing with a notoriously buggy software platform. It was the first time I attempted home improvement without anyone laughing. After I got the feedback from ChatGPT, I decided to check its work with Gemini. The advice about the next moves was largely confirmed. Both chat programs offered to summarize our discussion for my lighting contractor, in case I wanted to call in a professional. His human response went something like this: “None of that text from ChatGPT makes any sense at all. Be cautious asking it questions about that sort of thing. The answers it gives people are just ridiculous.” I took the human’s advice, which involved pushing one button for about ten seconds. And there was light, and it was good. While chat programs are at times amazing, I’ve realized through a series of exchanges that they are often simultaneously very certain and completely wrong. I think we’ve reached the Singularity, because that’s exactly how humans behave on the internet.
The experience got me wondering if the masters of the AI revolution might also be wrong on topics about which they have great certainty (and every incentive to hope things evolve as they say they will). Tim Higgins in the WSJ (Gift Article), with an interesting look at how some of these folks view the changing world: Why the tech world thinks the American dream is dying. “History is filled with technology booms that create new winners and losers. AI optimists like to point out that a rising tide has tended to lift all boats. What’s being talked about now—massive job loss to automation and the need for public safety nets, in the form of universal basic income—paints a dramatically different future. It’s still not clear there’s any appetite for so-called UBI, which runs counter to many Americans’ bedrock ideals of personal achievement. ‘I used to be really excited about UBI…but I think people really need agency; they need to feel like they have a voice in governing the future and deciding where things go,’ Altman, OpenAI’s chief executive, said last year when asked by a podcaster about how people will create wealth in the AI era. ‘If you just say, “OK, AI is going to do everything and then everybody gets…a dividend from that,” it’s not going to feel good, and I don’t think it actually would be good for people.’” Meanwhile, Elon Musk explains, “The transition will be bumpy. We’ll have radical change, social unrest and immense prosperity.” Maybe so, but maybe people who are really good at making a lot of money in tech aren’t necessarily good at analyzing human desires and interactions. There’s no doubt this is an epic tech boom, and there will be serious changes ahead. But are we picking the right people to determine the direction and predict the future? Actually, what am I asking you for? I should be asking my lighting contractor…


