Pentagon Reviews Anthropic AI Deal Over Ethical Concerns
The Pentagon is reviewing its relationship with Anthropic, a leading artificial intelligence firm, over disagreements regarding the permissible uses of its Claude AI model, according to a statement released by chief Pentagon spokesperson Sean Parnell. The review comes amid growing concerns within the Defense Department about limitations placed on the technology’s application to military operations.
At issue is Anthropic’s reluctance to allow unrestricted use of Claude, specifically its potential use in developing autonomous weapons systems and conducting large-scale surveillance. Anthropic reportedly seeks to prevent its AI from being used to create weaponry that operates without human oversight and to safeguard against the mass surveillance of American citizens.
“Our nation requires that our partners be willing to help our warfighters win in any fight. This is about our troops and the safety of the American people,” Parnell stated. The Pentagon, now referred to internally as the Department of War following a directive from President Donald Trump, insists that any company profiting from defense contracts must accommodate the military’s operational needs, provided those needs are lawful.
Undersecretary of Defense for Research and Engineering Emil Michael has publicly urged Anthropic to “cross the Rubicon” and fully embrace military applications of its AI. Michael emphasized that the Department of War requires “guardrails…tuned for military applications,” asserting that it is unacceptable for an AI vendor to restrict the technology’s use in ways that hinder national defense. He stated that companies seeking government contracts should be willing to align their AI’s capabilities with lawful military use cases.
The dispute extends beyond a simple contractual disagreement. Reports indicate the Defense Department has already been utilizing Claude through its partnership with Palantir, a technology company with extensive military contracts. The Wall Street Journal reported that Claude was used in support of operations, including the capture of former Venezuelan President Nicolás Maduro. This revelation has intensified scrutiny of the Pentagon’s existing reliance on Anthropic’s technology despite the ongoing ethical debate.
The Pentagon is also reportedly pressing other AI developers – OpenAI, Google, and xAI – to grant similar access to their AI tools for weapons development, intelligence gathering, and battlefield operations. The outcome of the review of the Anthropic relationship remains uncertain, but Michael expressed hope that the company would ultimately adopt a more accommodating stance. As of February 23, 2026, Anthropic has not publicly responded to the Pentagon’s demands.
