The Pentagon moved Friday to designate Anthropic, a leading artificial intelligence company, as a “supply-chain risk,” effectively barring contractors working with the U.S. military from doing business with the startup. The move, announced by Defense Secretary Pete Hegseth on social media, follows weeks of escalating tension between the Pentagon and Anthropic over the permissible uses of its AI models.
Hegseth’s order prevents any entity with a Defense Department contract from engaging in commercial activity with Anthropic. The designation stems from Anthropic’s refusal to grant the military unrestricted access to its AI technology, Claude, citing concerns about potential misuse, including mass surveillance and the development of autonomous weapons systems. Anthropic had proposed limits on how its technology could be used, a position the Pentagon rejected in favor of allowing the military to apply the AI to “all lawful uses,” according to reports.
Anthropic responded Friday evening with a statement vowing to “challenge any supply chain risk designation in court,” arguing that such a move would “set a dangerous precedent for any American company that negotiates with the government.” The company also said it had received no direct communication from the Department of Defense or the White House regarding the negotiations, and asserted that Hegseth lacks the statutory authority to enforce the broad restrictions implied in his announcement.
The Pentagon declined to offer further comment on the matter.
The decision has sparked widespread criticism from experts in the field. Dean Ball, a senior fellow at the Foundation for American Innovation and former AI policy advisor at the White House, called the move “the most shocking, damaging, and overreaching thing I have ever seen the United States government do,” suggesting it amounted to a sanction against an American company. Paul Graham, founder of Y Combinator, echoed this sentiment, attributing the action to an “impulsive and vindictive” administration.
OpenAI CEO Sam Altman announced Friday night that his company had reached an agreement with the Department of Defense to deploy its AI models in classified environments, with specific safeguards in place. “Two of our most critical safety principles are prohibitions on domestic mass surveillance and human responsibility for the use of force, including for autonomous weapon systems,” Altman stated. “The DoW agrees with these principles, reflects them in law and policy, and we put them into our agreement.”
The implications of the supply-chain-risk designation for Anthropic’s existing customers remain unclear. Anthropic maintains that the designation, under 10 U.S.C. 3252, applies only to direct contracts with the Department of Defense and does not extend to contractors using its Claude AI software for other purposes. Legal experts, however, are divided on the scope of the restriction. Alex Major, a partner at the law firm McCarter & English, noted that Hegseth’s announcement “is not mired in any law you can divine right now,” leaving the extent of the impact uncertain.
The dispute originated in part from the military’s use of Claude during an operation to capture former Venezuelan President Nicolás Maduro in January, according to CBS News. Anthropic stated it had not been informed of this specific operation. The Pentagon awarded Anthropic a $200 million contract in July 2025 to develop AI capabilities for national security purposes, alongside similar contracts awarded to OpenAI, Google, and xAI.