Defense Secretary Pete Hegseth declared Anthropic, an artificial intelligence firm, a “supply chain risk to national security” on Friday, escalating a public dispute over the company’s attempts to limit how the Pentagon uses its AI technology. The move effectively bars contractors working with the U.S. military from doing business with Anthropic, a decision Hegseth announced on X, stating, “America’s warfighters will never be held hostage by the ideological whims of Big Tech. This decision is final.”
The action follows days of increasingly tense exchanges between the Pentagon and Anthropic, culminating in President Trump ordering all federal agencies to cease using Anthropic’s AI, though the Defense Department and other select agencies have been granted a six-month transition period, according to the Associated Press.
Anthropic responded to Hegseth’s designation by vowing to challenge it in court, characterizing the move as “legally unsound” and warning it would establish a “dangerous precedent” for any American company negotiating with the government. The company contends that Hegseth lacks the authority to prohibit military contractors from working with Anthropic, arguing that a supply chain risk designation should apply only to direct Pentagon contracts.
The conflict stems from a $200 million contract awarded to Anthropic last July to develop AI capabilities for national security purposes. Hegseth accused Anthropic of attempting to “strong-arm the United States military into submission” and asserted he would not permit the company to exert “veto power over the operational decisions of the United States military.” He further stated that Anthropic’s position “is fundamentally incompatible with American principles.”
According to a source familiar with the interactions, Hegseth met with Anthropic CEO Dario Amodei earlier this week, requesting that the company commit to allowing “all lawful employ” of its models. The source, who was not authorized to speak publicly, indicated that Hegseth expressed satisfaction with Anthropic’s products and a desire to continue the working relationship.
Experts suggest the dispute may be less about specific AI deployment concerns and more about a fundamental clash of perspectives. Michael Horowitz, an expert on the military application of AI and former Deputy Assistant Secretary for emerging technologies at the Pentagon, described the disagreement as “unnecessary,” focusing on “theoretical use cases that are not on the table for now.” Horowitz noted that Anthropic has, to date, supported the Department of Defense’s proposed uses of its technology, adding, “My sense is that the Pentagon and Anthropic agree at present about the use cases where the technology is not ready for prime time.”
Anthropic was founded on principles of AI safety, and in January, CEO Amodei published a blog post outlining the risks associated with powerful artificial intelligence, including the potential dangers of fully autonomous AI-controlled weapons. Amodei acknowledged the legitimate defensive applications of such weapons but cautioned that they represent a “dangerous weapon to wield.”
The Pentagon’s designation of Anthropic as a supply chain risk, as reported by CBS News, has broad implications given the extensive network of companies that contract with the Department of Defense. Anthropic’s promised legal challenge is expected to further complicate the situation, potentially setting a precedent for future interactions between the government and AI developers.