Pentagon Pressures Anthropic to Lift AI Restrictions: Weapons & Surveillance at Stake

by Rachel Kim – Technology Editor

Defense Secretary Pete Hegseth has given Anthropic, an artificial intelligence firm, until the end of the week to grant the U.S. military unrestricted access to its AI models or face potential penalties, including being designated a “supply chain risk.” The ultimatum, delivered Tuesday, escalates a monthslong dispute over the ethical boundaries of military applications of advanced AI.

The standoff began after Anthropic reportedly discovered its Claude AI model may have been used in connection with the January 3rd attack in Venezuela, according to sources familiar with the matter. Following that incident, Anthropic CEO Dario Amodei communicated to the Pentagon that using its technology for mass surveillance of U.S. citizens or in fully autonomous weapons systems crossed “bright red lines,” requiring “extreme care and scrutiny combined with guardrails to prevent abuses.”

Hegseth, in a meeting with Amodei, pressed for broader access to Claude’s capabilities, according to reports. The Department of Defense (DoD) has already integrated Claude into some operations, but seeks to remove limitations imposed by Anthropic. A source cited by the Associated Press said the talks were “cordial” but that Anthropic remained firm in its restrictions.

The potential designation as a “supply chain risk” would effectively bar other defense contractors from utilizing Anthropic’s AI in their work for the Pentagon, a significant blow to the company’s business prospects. According to CNBC, the DoD is also considering other punitive measures.

Anthropic, which positions itself as a leader in AI safety, became the first AI company cleared to handle classified information in 2025. The current dispute highlights a growing tension between the government’s desire to leverage AI for national security purposes and the ethical concerns raised by AI developers. The company has publicly outlined its core views on AI safety and the constitution guiding its LLM, Claude, which emphasize responsible development and deployment.

The outcome of the negotiations could set a precedent for the entire AI industry, determining whether companies will push back against government demands for military applications of their technology. The standoff is being closely watched by researchers and advocates for ethical AI, who fear the unchecked use of AI in warfare and surveillance.

As of Wednesday, February 25, 2026, Anthropic has not publicly responded to the ultimatum beyond a statement indicating continued “good-faith conversations” with the government. The Friday deadline looms, with the future of the DoD’s relationship with Anthropic – and potentially the broader AI industry – hanging in the balance.
