Ready, AI, Fire

We got my daughter her first iPhone sometime during seventh grade and we haven’t seen her much since. My son was born right around the time the first iPhone was released, so he’s never really had my full attention. These same devices that often keep us apart also keep us together, and during the pandemic, phones and other connected devices enabled my kids to maintain their studies and social connections. That’s how it is with technology; it comes with good and bad. In many ways, trying to nudge the insanely powerful technology known as artificial intelligence toward the right side of that good/bad equation is at the core of the now infamous OpenAI CEO firing debacle. With large language models like ChatGPT, the good/bad equation is often open for debate and involves subtle distinctions. (It’s notable that the humans most worried about the power of their company’s technology have nearly run that company into the ground.) Other uses of AI are less subtle. “The debate over the risks of artificial intelligence has drawn new attention in recent days with the battle over control of OpenAI, perhaps the world’s leading A.I. company, whose leaders appeared split over whether the firm is taking sufficient account of the dangers of the technology. And last week, officials from China and the United States discussed a related issue: potential limits on the use of A.I. in decisions about deploying nuclear weapons.” (Well, that escalated quickly.) NYT (Gift Article): As A.I.-Controlled Killer Drones Become Reality, Nations Debate Limits. “Rapid advances in artificial intelligence and the intense use of drones in conflicts in Ukraine and the Middle East have combined to make the issue that much more urgent. So far, drones generally rely on human operators to carry out lethal missions, but software is being developed that soon will allow them to find and select targets more on their own.” (It’s only a matter of time before my kids want one of these…)

+ The OpenAI power struggle will (probably) be settled without killer drones. And it will probably be settled with Sam Altman back at the helm. Consider that 95% of OpenAI employees have threatened to quit in standoff with board and join Altman wherever he goes next. (That’s incredible support. I don’t even have 95% approval from myself.)