The Pentagon used Claude to help capture a dictator.
During the raid on Maduro in Caracas, Anthropic’s AI was reportedly running inside Palantir’s military systems. There was kinetic fire. People were shot.
Anthropic asked whether its AI had been used. The Pentagon reportedly read that as disloyalty.
Now the Defense Department is threatening to designate Anthropic a “supply chain risk,” a label typically reserved for foreign adversaries.
The demand is an “all lawful purposes” contract. No carve-outs.
Anthropic’s two red lines:
→ No mass surveillance of American citizens
→ No fully autonomous weapons
OpenAI, Google, and xAI have reportedly agreed to drop their safeguards. Anthropic is the last holdout.
Claude is the only AI model currently running on the military’s classified systems. The contract at stake is worth $200 million. Anthropic brings in $14 billion a year.
The money is not the point.
Last week at the Munich Security Conference, German Chancellor Friedrich Merz told the room that “the international order based on rights and rules no longer exists.”
AI safety policies were designed for a settled order. The Pentagon is writing new rules.
The rules-based order appears to be coming apart. Rules-based AI may be going with it.
— Sources and links in the first comment.
♻️ Repost if your network should be watching this.
🔔 Follow Fas Felix Ghauri for signals, not noise: AI and what changes next.