AI • Geopolitics • Ethics

The Pentagon, Anthropic, and the final red lines of AI.

Felix Ghauri

· 3 min read

The Pentagon used Claude to help capture a dictator.

During the raid on Maduro in Caracas, Anthropic's AI was reportedly running through Palantir's military systems, parsing intelligence and presenting strategic options.

Anthropic then asked whether its AI had been used. The Pentagon reportedly read the question as disloyalty and briefly discussed cutting Anthropic out of future contracts.

Anthropic has drawn two red lines: no mass surveillance of American citizens, and no fully autonomous weapons.

OpenAI, Google, and xAI have reportedly agreed to drop similar safeguards to win defense contracts. Anthropic is the last holdout.

We are watching a real-time stress test of corporate ethics against national security imperatives. When the US government tells a private company that basic ethical guardrails are a national security risk, who blinks first?

💬 Join the conversation on LinkedIn
Felix Ghauri

Applied AI Practitioner · Founder, Futures Forum

Felix helps organisations navigate AI and exponential change. He writes about technology, geopolitics, and the future of work.

Thinking about AI in your workflow?

Let's discuss what might work for you.

Let's Talk