
Last Friday at 5:01 PM EST, a deadline passed. The Pentagon had given Anthropic, the company that makes Claude, until that moment to remove safeguards on its AI or face the consequences. Anthropic didn’t budge.
Within hours, President Trump ordered every federal agency to immediately stop using Anthropic’s technology. Defense Secretary Pete Hegseth designated the company a “supply-chain risk to national security,” a label typically reserved for foreign adversaries and never before applied to an American company. Hegseth posted on X: “Anthropic delivered a master class in arrogance and betrayal.”
Hours later, Sam Altman announced that OpenAI had struck its own deal with the Department of War.
The question of who controls AI, and what they’re allowed to do with it, just went from theoretical to very real.
Anthropic was the first AI company to deploy its models on the Pentagon’s classified networks. They had a contract. They had a working relationship. The dispute came down to two specific conditions Anthropic refused to drop.
The first: Claude could not be used for mass domestic surveillance of Americans. The second: Claude could not be used to power fully autonomous weapons, meaning systems capable of making a kill decision without a human in the loop. The Pentagon wanted both removed. Their position was that Anthropic had to agree to let the military use its AI for “all lawful purposes,” full stop. When Anthropic submitted revised contract language to address those concerns, a Pentagon spokesperson said it had made “virtually no progress.”
Anthropic CEO Dario Amodei explained his reasoning in an interview with CBS News shortly after the blacklisting. The company supports 98-99% of military use cases. On the two exceptions, he was specific. Today’s AI models carry a “basic unpredictability” problem that makes them too unreliable for lethal decisions without human oversight. On surveillance, the concern goes beyond what’s currently illegal. Powerful AI can stitch together location data, browsing history, and social associations into a detailed portrait of any American’s life at scale, and much of that is technically legal. “The technology’s advancing so fast that it’s out of step with the law,” Amodei said.
The Pentagon’s response was personal. Under Secretary of Defense Emil Michael called Amodei “a liar” with a “God complex” who wanted to “personally control the U.S. military.” Hegseth called the company “sanctimonious.” Trump, on Truth Social, called Anthropic “a radical left, woke company.”
Amodei’s response: “Disagreeing with the government is the most American thing in the world. And we are patriots.”
While Anthropic held the line, Sam Altman moved fast, and the timing raised eyebrows immediately.
On Thursday, before the Friday deadline, Altman had sent an internal memo stating that OpenAI shared the same “red lines” as Anthropic on autonomous weapons and surveillance. Then Friday night, with Anthropic freshly blacklisted, he announced OpenAI had signed its own deal with the Department of War.
Altman acknowledged the optics in an AMA on X over the weekend, saying the deal was “definitely rushed” and that it “looked opportunistic and sloppy.” By Monday he was amending the contract to include more explicit language around surveillance limits.
OpenAI’s defense was that its agreement includes the same guardrails Anthropic wanted, framed around existing law rather than explicit contract prohibitions. The argument: if the government won’t follow the law, a contract clause won’t stop it anyway. Critics weren’t buying it. Techdirt noted that the deal’s reference to Executive Order 12333, a Cold War-era executive order the NSA has used to capture communications outside U.S. borders, including data on Americans, may leave the door open to exactly the surveillance Anthropic refused to enable. An OpenAI employee working on AI alignment publicly criticized his own employer for using “all lawful purposes” language and wrapping it in what he called “window dressing.”
The credibility gap was hard to miss. Altman had aligned with Anthropic on Thursday. By Friday he’d taken the contract Anthropic walked away from.
The backlash was fast and loud.
A Reddit post calling on users to “Cancel and Delete ChatGPT” racked up 30,000 upvotes within hours. A thread in r/ChatGPT titled “You’re now training a war machine” became one of the most upvoted posts in the forum’s history. A grassroots campaign called QuitGPT claimed 1.5 million people took action. More than 700 employees at Google and OpenAI signed an open letter called “We Will Not Be Divided,” asking their own leadership to refuse the Pentagon’s demands for unrestricted AI access.
And Claude hit #1 on the Apple App Store.
Before Anthropic’s Super Bowl ads, Claude was ranked 42nd. The company had already been climbing - free active users up more than 60% since January, daily sign-ups quadrupled. The Pentagon standoff pushed it to the top. ChatGPT fell to #2. Gemini dropped to #4.
The legal picture is murkier. Lawfare’s analysis suggests the supply-chain risk designation may not survive a court challenge. The statute requires the Pentagon to have exhausted less intrusive options first, and the speed of this escalation makes that hard to defend. But as Fortune pointed out, even a designation that gets overturned takes years to litigate. In the meantime, every general counsel at every Fortune 500 company with Pentagon exposure is asking whether using Claude is worth the risk. That chilling effect is real, regardless of how the courts eventually rule.
We’ve written before about why we use Claude at Matic - the quality gap, the enterprise focus, the design philosophy. What last week revealed is that the difference runs deeper than product decisions.
Anthropic was founded in 2021 by people who left OpenAI because they believed the company was moving too fast and prioritizing growth over safety. For years, that looked like a philosophical stance. Last week it got tested in public, with a $200 million contract on the line and the President of the United States weighing in.
They held.
Three days before the Pentagon deadline, Anthropic quietly overhauled its internal safety policy - dropping a long-standing pledge to pause AI development if safety measures couldn’t keep up. The reasoning: unilateral restraint only works if everyone practices it, and competitors weren’t. So Anthropic loosened the rules it set for itself while holding firm on the rules it set for the Pentagon. One was a competitive concession. The other was a values line. They knew the difference.
OpenAI’s week told a different story. Altman defended a deal he admitted was rushed, amended a contract within days of signing it, and watched his competitor climb to #1 in the App Store while his own employees questioned what they’d signed.
This plays out in brand positioning constantly. The companies that build around a real conviction and hold it when holding it is expensive don’t just win the news cycle. They build the kind of trust that actually converts. Amodei wasn’t performing values for a press release. He held them when it cost something real.
The Pentagon fight may have cost Anthropic a government contract and set up years of legal headaches. It may also have done more for their brand in five days than any campaign ever could.
The market seems to agree.
Questions about how AI fits into your business strategy? Let’s talk.

About Matic
We're a B2B transformation agency creating strategic advantage through branding, websites, and digital products.