Anthropic & Pentagon Clash: AI Safety vs. Military Use

by Emma Walker – News Editor

The Pentagon is reviewing its relationship with Anthropic, the artificial intelligence company behind the Claude chatbot, amid concerns over the military’s desire for unfettered access to the AI system. The escalating tensions stem from a disagreement over the permissible uses of Anthropic’s technology, potentially jeopardizing a defense contract worth up to $200 million.

Hints of a rift became more pronounced following reports in The Wall Street Journal and Axios detailing the possible use of Anthropic products in the operation to capture Venezuelan President Nicolás Maduro. While the exact role of Claude in the operation remains unclear, the reports prompted internal scrutiny within Anthropic, according to two individuals familiar with the matter who requested anonymity. These sources indicated the company maintains a high degree of visibility into how its AI tools are utilized, particularly in data analysis.

Anthropic, which first gained access to classified networks through a partnership with Palantir in 2024, has positioned itself as a leader in “responsible AI,” emphasizing limitations on how its technology can be deployed. Palantir announced the partnership, stating Claude could be used to “support government operations such as processing vast amounts of complex data rapidly” and “helping U.S. officials to make more informed decisions in time-sensitive situations.” CNBC reported that Anthropic has consistently maintained it will not allow its systems to be used for lethal autonomous weapons or domestic surveillance.

The reported use of Anthropic’s technology in connection with the Venezuela raid allegedly raised concerns from an Anthropic employee, leading to a tense exchange between Palantir and Anthropic executives. According to Semafor, a Palantir executive expressed alarm when a senior Anthropic executive inquired whether their software was used during the raid, implying potential disapproval. A senior Pentagon official confirmed to NBC News that an Anthropic executive did question Palantir about the software’s use in the Maduro operation.

An Anthropic spokesperson declined to comment on whether Claude was used in the operation, citing the classified nature of military operations: “We cannot comment on whether Claude, or any other AI model, was used for any specific operation, classified or otherwise,” the spokesperson stated to NBC News. The spokesperson also asserted that the company has not engaged in unusual discussions with partners regarding Claude’s usage or expressed mission-related disagreements with the military. Palantir did not respond to a request for comment.

The core of the dispute lies in the Defense Department’s push for broader access to Anthropic’s systems. The Hill reported that the Pentagon now desires the ability to utilize all available AI systems for any purpose permitted by law. This stance clashes with Anthropic’s commitment to maintaining its own safety guardrails. Defense Secretary Pete Hegseth released an AI strategy document in January calling for the elimination of company-specific constraints on AI usage in Defense Department contracts, mandating “any lawful use” of AI within 180 days, a change that directly impacts Anthropic’s agreements.

Pentagon spokesman Sean Parnell stated that “The Department of War’s relationship with Anthropic is being reviewed.” He added, “Our nation requires that our partners be willing to help our warfighters win in any fight,” according to NBC News. Undersecretary of Defense Emil Michael told CNBC that negotiations with Anthropic have stalled due to disagreements over potential system uses.

Anthropic maintains that it remains committed to supporting U.S. national security, having been the first frontier AI company to offer its models on classified networks. CEO Dario Amodei has publicly stated that “democracies have a legitimate interest in some AI-powered military and geopolitical tools,” but also cautioned that such tools should be deployed “carefully and within limits.”

Despite the current tensions, Anthropic continues to prioritize national security applications, forming a national security advisory council in August 2025 and recently adding Chris Liddell, a former deputy chief of staff to President Trump, to its board of directors. The company’s future collaboration with the Pentagon, though, remains uncertain as the Defense Department seeks to expand its access to frontier AI capabilities.
