
When Principle Meets Power

Last week, the U.S. government blacklisted Anthropic. Not a foreign adversary. An American AI company.

The reason: Anthropic refused to remove two safety restrictions from its Pentagon contract. Two. Not twenty. Two.

No mass surveillance of American citizens. No fully autonomous weapons without a human in the loop.

The Pentagon wanted Anthropic to agree that its models could be used for “all lawful purposes.” Anthropic wanted those two conditions in writing.

The government designated it a supply chain risk, cancelled the $200 million contract, and President Trump ordered all federal agencies to stop using Anthropic’s technology.

Hours later, OpenAI signed its own classified deal with the Pentagon.

The Part Nobody Can Explain

OpenAI’s deal, by Sam Altman’s own account, includes the same two restrictions plus a third.

So the Pentagon blacklisted Anthropic for demanding two conditions, then signed a deal with OpenAI that includes three.

The difference is how they are enforced.

Anthropic wanted explicit contractual language. OpenAI accepted the “all lawful purposes” framing and embedded the restrictions through technical architecture and its own safety systems.

One company said “put it in writing.” The other said “trust us, we built it into the system.”
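
To make that distinction concrete, here is a minimal sketch of what an architecture-level restriction might look like. Everything in it is hypothetical, invented for illustration, not drawn from either company's systems. The point it demonstrates: a safeguard enforced in code is one configuration change away from disappearing.

```python
# Hypothetical illustration only: a use-policy gate enforced in the
# serving layer rather than in the contract. Every name is invented.

BLOCKED_USES = {
    "mass_surveillance",   # restriction 1: no mass surveillance
    "autonomous_lethal",   # restriction 2: no fully autonomous weapons
}

def policy_gate(request_tags: set[str]) -> bool:
    """Allow a request only if none of its tags match a blocked use."""
    return not (request_tags & BLOCKED_USES)

# The fragility: this safeguard lives in deployable code. Removing a
# restriction is a one-line edit and a redeploy -- no renegotiation,
# no counter-signature, no public record.
# BLOCKED_USES.discard("autonomous_lethal")  # and the safeguard is gone
```

A contract clause, by contrast, cannot be edited unilaterally. That friction is precisely the protection Anthropic was asking for.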

Why Anthropic Was Right

Laws change. Policies get rewritten. Administrations turn over. The entire point of contractual language is that it survives the people who signed it.

OpenAI’s protections rely on current law, current policy, and current architecture. Every one of those layers can be modified without the friction of a binding contract.

The question is not whether these protections hold in 2026. It is whether they hold in 2030, when the technology is far more capable and the people who negotiated these terms are long gone.

There is also a technical reality that received less attention than it deserved.

Frontier AI models hallucinate. They make confident errors. They behave unpredictably in edge cases. Saying these systems are not ready for autonomous lethal decisions is not a political position. It is an engineering one.
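
In engineering terms, a human in the loop is a gate, not a guideline. A sketch of the pattern, with invented names and assuming a hypothetical decision pipeline:

```python
# Hypothetical sketch of a human-in-the-loop gate. Nothing here is
# drawn from any real system; all names are illustrative.

from dataclasses import dataclass

@dataclass
class Proposal:
    action_id: str
    model_confidence: float  # the model reports confidence; it can be wrong

def human_approves(p: Proposal) -> bool:
    """Block until a human operator explicitly approves or denies."""
    answer = input(f"Approve {p.action_id} "
                   f"(model confidence {p.model_confidence:.0%})? [y/N] ")
    return answer.strip().lower() == "y"

def execute(p: Proposal) -> None:
    # The model proposes; it never decides. High confidence is not a
    # bypass, because confident errors are exactly the failure mode.
    if human_approves(p):
        print(f"{p.action_id}: authorized by a human operator")
    else:
        print(f"{p.action_id}: denied")

execute(Proposal(action_id="strike-candidate-7", model_confidence=0.97))
```

The design point is that the approval call sits on the only path to execution. There is no branch where the model's output acts on its own, no matter how confident it is.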

What Actually Happened Next

Anthropic’s Claude overtook ChatGPT in the App Store the day after the blacklisting. Over 100 Google employees demanded similar restrictions on Gemini. OpenAI’s own staff signed open letters supporting Anthropic.

A company willing to walk away from $200 million and a presidential endorsement to defend two principles sends a signal that no marketing campaign can replicate.