President Donald Trump ordered all U.S. federal agencies to cease using technology developed by Anthropic, following a dispute with the Pentagon over the artificial intelligence company’s safety protocols. The directive, issued Friday, came after Anthropic failed to meet a Pentagon deadline to allow unrestricted military use of its AI systems.
Defense Secretary Pete Hegseth subsequently classified Anthropic as a supply chain risk to national security, prohibiting any contractor, supplier, or partner working with the U.S. military from conducting commercial activities with the company. Hegseth stated on X, formerly known as Twitter, that Anthropic would continue to provide services to the Pentagon for up to six months to ensure a “smooth transition to a better and more patriotic service.”
The conflict centers on Anthropic’s insistence that its AI technology not be used for domestic mass surveillance or in the development of fully autonomous weapons. Founded five years ago, Anthropic has become a leading AI firm, known for its Claude model, which is used by millions globally. Unlike models from OpenAI and Google, Anthropic’s AI had been approved within the Pentagon for handling classified information, a status attributed to the company’s emphasis on safety.
Hegseth had demanded greater flexibility in the deployment of Anthropic’s AI, asserting that the Defense Department should collaborate only with companies willing to approve “any lawful use” of their software. According to sources familiar with the discussions, Hegseth threatened to cancel Anthropic’s $200 million contract with the Defense Department, or potentially blacklist the company from future military work.
The same day Trump issued his directive, OpenAI, the developer of ChatGPT, announced an agreement with the U.S. military to deploy its AI models within classified cloud networks. OpenAI CEO Sam Altman stated on X that the Defense Department had demonstrated “a deep respect for safety and a desire for partnership to achieve the best possible outcome.” Altman added that his company would also develop technical safeguards to ensure the AI models function as intended.
Anthropic CEO Dario Amodei has repeatedly voiced concerns about the potential misuse of AI, characterizing applications such as domestic surveillance as “illegitimate” and “prone to abuse.” The company’s stance has put it at odds with the Pentagon’s desire for broader access to AI capabilities.
As of Saturday, February 28, 2026, Anthropic has not publicly responded to Trump’s order or to Hegseth’s designation of the company as a national security risk.