OpenAI in Talks with Pentagon for AI Deal After Anthropic Contract Ends

by Priya Shah – Business Editor

President Donald Trump on Friday directed all federal agencies to halt the use of technology from Anthropic, the AI developer, escalating a dispute over the limits of artificial intelligence in national security and defense. The move came as OpenAI, a rival firm, is reportedly nearing an agreement with the Department of War to provide its AI models and tools, a deal that hinges on maintaining control over safety measures.

“I am directing every federal agency in the United States government to immediately cease all use of Anthropic’s technology. We don’t need it, we don’t want it, and will not do business with them again!” Trump stated in a post on Truth Social. The Department of War and other agencies utilizing Anthropic’s Claude models will have six months to phase out the technology, according to the announcement.

The clash with Anthropic centers on the company’s refusal to grant the Pentagon unrestricted access to its AI systems. Defense officials insisted that AI models must be available for “all lawful purposes,” while Anthropic resisted demands to remove safeguards preventing the use of its technology for domestic mass surveillance or the development of fully autonomous weapons. Before Trump’s directive, the Pentagon had warned that non-compliance could cost Anthropic a contract worth up to $200 million.

Meanwhile, OpenAI CEO Sam Altman informed employees Friday that a potential agreement is emerging with the Department of War. According to a source present at an all-hands meeting, Altman indicated the government is willing to allow OpenAI to construct its own “safety stack”—a multi-layered system of technical, policy, and human controls—and will not compel the company to override its AI’s refusal to perform specific tasks. OpenAI would retain control over the implementation of safeguards and model deployment, and would restrict operations to cloud environments, avoiding integration into “edge systems” such as aircraft or drones.

A key concession from the government, Altman told employees, is a willingness to incorporate OpenAI’s “red lines” into the contract, explicitly prohibiting the use of AI for autonomous weapons, domestic surveillance, or critical decision-making processes. OpenAI’s head of national security policy, Sasha Baker, and Katrina Mulligan, who leads national security for OpenAI for Government, also participated in the all-hands meeting, according to the source.

The shift in focus toward OpenAI follows a breakdown in the relationship between the Pentagon and Anthropic, partially attributed to concerns over communication from Anthropic cofounder and CEO Dario Amodei, including blog posts that reportedly “upset” Department of War leadership. Anthropic had previously been the sole large commercial AI provider with models approved for use within the Pentagon’s classified systems, operating through a partnership with Palantir.

Altman has publicly affirmed that OpenAI shares Anthropic’s concerns regarding the military application of AI, emphasizing the importance of adhering to established “red lines.” He told CNBC Friday that working with the Pentagon is important “as long as it is going to comply with legal protections” and those limitations. The Pentagon declined to comment on the status of negotiations with OpenAI.

During the OpenAI all-hands meeting, concerns were raised regarding the potential for foreign surveillance and the threat to democratic principles posed by AI-driven monitoring. Company leaders acknowledged the necessity of international surveillance for national security purposes, citing intelligence reports indicating China’s use of AI to target dissidents abroad.
