Anthropic Your Battles
“The Pentagon’s version of Claude could not be used to facilitate the mass surveillance of Americans, nor could it be used in fully autonomous weaponry—situations where computers, rather than humans, make the final decision about whom to kill. According to a source familiar with this week’s meeting, Hegseth made clear that if Anthropic did not eliminate those two guardrails by Friday afternoon, two things could happen: The Department of Defense could use the Defense Production Act, a Cold War–era law, to essentially commandeer a more permissive iteration of the AI, or it could label Anthropic a ‘supply-chain risk,’ meaning that anyone doing business with the U.S. military would be forbidden from associating with the company.” Anthropic is refusing to bend. The Atlantic (Gift Article): Anthropic Takes a Stand.
+ “The danger is not that Silicon Valley will wield too much power over the military. It is that neither will fully understand the systems it is rushing to deploy—and that the consequences of that ignorance will be tested not in a laboratory, but on the world.” Thomas Wright: The Real Reason Anthropic Wants Guardrails. “AI is too powerful and too new to be set free from human oversight.” (And that’s even considering that human oversight can look like this: Pentagon Fires Another Laser at a Drone, Prompting a New Air Closure.)
+ Anthropic might not be the only holdout. OpenAI CEO Sam Altman reportedly shares Anthropic’s concerns when it comes to working with the Pentagon.
+ You can be sure not every AI CEO will be so careful. WSJ (Gift Article): Government Agencies Raise Alarm About Use of Elon Musk’s Grok Chatbot. “Warnings about xAI’s safety and reliability preceded Pentagon decision to approve Grok for use in classified settings.”