AI in Warfare: Claude Developer Disputes with US Military & Anthropic’s Restrictions

by Priya Shah – Business Editor

Defense Secretary Pete Hegseth has given Anthropic, the artificial intelligence firm behind the Claude model, until the end of this week to grant the U.S. military full access to its AI technology, according to sources familiar with the matter.

The demand, delivered during a meeting at the Pentagon on Tuesday, represents a significant escalation in a dispute over the military’s use of AI and the safeguards Anthropic seeks to impose. Officials are considering invoking the Defense Production Act to compel Anthropic’s cooperation, the sources said.

The Pentagon awarded Anthropic a $200 million contract in July to develop AI capabilities intended to bolster U.S. national security. However, Anthropic has repeatedly sought assurances that Claude will not be used for mass surveillance of American citizens or employed in autonomous weapons systems, a position that has drawn criticism from defense officials.

According to a senior Pentagon official, the military’s request is limited to lawful activities and does not involve mass surveillance or autonomous weapons. However, Anthropic CEO Dario Amodei is concerned about the potential for unintended consequences, including lethal errors or escalation, if Claude is used for targeting decisions without human oversight, one source said.

The disagreement follows reports that the U.S. military used Claude in the capture of Venezuelan leader Nicolás Maduro last month. That operation appears to have been a catalyst for the current standoff, prompting Anthropic to reassess the parameters of its agreement with the Department of Defense.

The Pentagon is also reportedly seeking similar access from other AI companies, including OpenAI, Google, and xAI, with Elon Musk’s xAI already reportedly on board for use in classified settings. The government’s push for broader access to AI technology reflects a growing reliance on these tools for intelligence collection, weapons development, and battlefield operations.

Defense officials have threatened to designate Anthropic as a “supply chain risk” and potentially cancel the $200 million contract if the company does not yield to their demands. Such a move would significantly hinder Anthropic’s ability to work with the U.S. government and could set a precedent for future engagements between the AI industry and the military.

The dispute highlights a broader debate within the AI industry regarding the ethical implications of military applications. Anthropic positions itself as a leader in AI safety, and its resistance to unfettered access to Claude underscores the tension between innovation and responsible development. The outcome of the negotiations could influence whether other AI firms adopt similar safeguards or succumb to government pressure.

As of Wednesday, Anthropic had not publicly responded to Hegseth’s ultimatum. The company has until the end of the week to provide a signed document granting the military full access to Claude, or face potential penalties.
