Judge Rita Lin's preliminary injunction runs forty-three pages on a question different from the one everyone thought this case would answer. The public framing was about whether an AI company could maintain safety guardrails when a sovereign customer demanded otherwise. Lin's actual decision was narrower, more procedural: the Pentagon's own records showed it designated Anthropic a supply chain risk because of the company's "hostile manner through the press." Punishing a company for speaking publicly is, she wrote, classic First Amendment retaliation.
The question of whether safety commitments can survive sovereign deployment demands went unaddressed. Lin was explicit:
"Everyone, including Anthropic, agrees that the Department of [Defense] is free to stop using Claude and look for a more permissive AI vendor."
The substantive question was deferred. But by the time she ruled in late March, the market had already filled the silence.
Within days of the original blacklisting in February, ten defense-tech portfolio companies under a single venture firm began replacing Claude. Lockheed Martin started swapping out Anthropic's models. Piper Sandler warned clients of disruptions across defense integrators. A rival announced its own Pentagon deal hours after the blacklisting. Program teams did what program teams do: they weighed operational continuity against regulatory uncertainty and chose to de-risk. Contracts signed, migrations begun, relationships restructured around a conclusion no judge had endorsed.
The market acted as if safety commitments were a liability in sovereign contracting. Whether they actually are remains judicially untouched. Bloomberg Tax commentary noted the ruling doesn't clarify what many had assumed: that private contractors can place meaningful limits on how the government uses their intellectual property. Lawfare's analysis of whether the Defense Production Act could compel Anthropic to retrain Claude called the question "genuinely unsettled." No court has ruled on whether AI model architecture constitutes protected speech.
Anthropic's case worked because the retaliation was so legible, so clearly tied to public speech. A quieter version of the same pressure, applied through procurement scoring rather than presidential directives, would generate no constitutional claim at all. And even with the injunction in hand, the procurement behavior it set in motion may not reverse. As one legal analyst observed, defense contractors eager to maintain favorable Pentagon relations may now lack motivation to partner with Anthropic regardless of the legal outcome. The full litigation could take a year or more.
I keep returning to the assumption this exposed. Most of us expected safety commitments to be tested somewhere with briefs and discovery and published opinions other companies could cite. What played out was faster and less legible. Procurement offices read the political weather. A judge could tell them what the law required; what they wanted to know was which vendor wouldn't create problems.
The court preserved a procedural principle while the market moved on the substantive question the court left open.
Whether that procurement response proves durable is unclear. Political winds shift. Injunctions hold or don't. The companies that migrated away could migrate back, though the switching costs make that less likely with each passing month. The demonstrated mechanism, though, may be harder to undo than any particular contract decision. The next time an AI provider faces pressure from a government customer to relax safety restrictions, there may be no First Amendment hook to hang a claim on at all.
Things to follow up on...
- Two lawsuits, not one: A separate challenge in DC federal court to the supply chain risk designation under a different statute remains pending, and Pentagon CTO Emil Michael has argued Lin's injunction doesn't cover it at all.
- The unsettled speech question: Lawfare published an analysis exploring whether the Defense Production Act could compel Anthropic to retrain Claude, finding that if model training decisions are editorial choices, forcing a company to strip guardrails may amount to compelled speech.
- Reliability as the other deployment barrier: Princeton researchers found that even the best-performing agents score just 85% on overall reliability, with catastrophic-mistake avoidance as low as 25%, suggesting safety governance isn't the only thing standing between AI models and high-stakes deployment.
- Internal backlash at the rival: After announcing its Pentagon deal hours post-blacklisting, the rival's CEO acknowledged the timing was "sloppy" and amended the contract to include surveillance restrictions, while employees signed an open letter supporting Anthropic.

