Anthropic Refused to Let the Pentagon Use AI to Kill Without Restrictions. So the Pentagon Fired Them.
- Mar 9
Updated: Mar 18

The Pentagon issued an ultimatum. Anthropic, the AI company that built Claude, the chatbot you may or may not be reading this on right now, had until 5:01 PM on a Friday to agree to let the U.S. military use its AI however it wants. No restrictions. No guardrails. Just vibes and lethality.
Anthropic didn't budge, and shortly after 5 PM, Defense Secretary Pete Hegseth made the break official, declaring that "Anthropic's stance is fundamentally incompatible with American principles." In one Friday news dump, a $200 million deal evaporated and Anthropic went from Pentagon contractor to national security threat. Just like that.
So what actually happened here?
The Maduro Incident: Where It All Started
After the U.S. military's raid on Venezuela in early January that captured dictator Nicolás Maduro, Anthropic asked Palantir whether its AI had been used in the operation. While Anthropic characterized the inquiry as routine, the Pentagon and Palantir interpreted it as a potential threat to their access.
That's when the Pentagon had its reckoning. Claude had been quietly baked into the most sensitive military systems in the country, and nobody had fully considered that a private company in San Francisco held the keys.
The Two Red Lines Anthropic Refused to Cross
The Pentagon wanted to buy Anthropic's AI models without any restrictions on their use. Even after scrapping its flagship safety rule, the company wouldn't budge on two particular red lines: no mass domestic surveillance of American citizens, and no fully autonomous lethal weapons (meaning no AI that decides to kill someone without a human making the final call).
The Pentagon insisted it would use the AI only in lawful scenarios and refused to accept any limits beyond what the law already requires. That framing sounds reasonable until you realize "lawful" is being defined by the same administration currently conducting an active bombing campaign in Iran using AI-assisted targeting.
Anthropic CEO Dario Amodei wrote in a statement: "It is the Department's prerogative to select contractors most aligned with their vision. But given the substantial value that Anthropic's technology provides to our armed forces, we hope they reconsider." Translation: We're not moving.
Within hours of Anthropic's response, OpenAI CEO Sam Altman announced on X that his company had secured its own Pentagon deal. Anthropic's CEO later told employees that OpenAI's messaging around the deal was "mendacious." Meanwhile, a top robotics engineer at OpenAI announced her resignation, citing the same concerns Anthropic had raised: that surveillance of Americans without judicial oversight and lethal AI autonomy deserved more deliberation.
Users responded by uninstalling ChatGPT and pushing Anthropic's Claude to the top of the App Store.
Bottom Line
Should a computer be legally authorized to decide who gets bombed? The fact that this question is being settled by a Friday 5 PM deadline from the Secretary of Defense should make everyone a little uneasy.
For millennials who've spent two decades watching the national security apparatus expand and build a surveillance state in the name of security, this is less a plot twist and more the next chapter.
The technology changes. The impulse doesn't. The government wants the most powerful tools available with the fewest possible restrictions.
What's genuinely new is that a private company, not Congress or a court, is the entity drawing the ethical line. The fact that corporations have become more morally serious than our government is a terrifying sign.
The age of AI warfare is here. And we're apparently letting a Friday afternoon deadline decide what the rules are.
Stay Frustrated.


