The Pentagon has initiated an assessment of Boeing's and Lockheed Martin's reliance on Anthropic's Claude AI model, a move that could lead to the AI firm being designated a "supply chain risk," according to confirmations received Wednesday.
The Pentagon contacted the two defense contractors on Wednesday requesting the analysis, according to individuals familiar with the discussions. Lockheed Martin acknowledged receiving the Defense Department's request to examine its exposure to and dependence on Anthropic, with a spokesperson confirming that the inquiry precedes "a potential supply chain risk declaration." Boeing has not yet commented publicly on the matter.
Such a designation, typically reserved for companies with ties to adversarial nations, would be an unprecedented step for a leading American technology company, particularly one whose software is integrated into classified military systems. The action signals a growing concern within the Pentagon regarding the potential vulnerabilities associated with relying on commercial AI technologies.
The Pentagon's scrutiny stems from Anthropic's firm stance against adapting its Claude model for lethal military applications. Even after a recent high-level meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, the company has maintained its "safety-first" usage policies, which prohibit uses of its AI that could directly contribute to harm. That continued refusal has led the Defense Department to consider Anthropic a potential risk to the defense supply chain.
For Boeing and Lockheed Martin, which have increasingly incorporated generative AI into logistics and simulation platforms, a "supply chain risk" designation would necessitate a costly and complex decoupling from Anthropic's ecosystem. The Pentagon intends to broaden the inquiry to other major defense contractors supplying critical military hardware, to determine how deeply Claude is integrated into their workflows.
The Pentagon's move comes amid broader debate over the role of artificial intelligence in national security and the challenge of balancing innovation with safety and ethical considerations. The standoff highlights the tension between the military's desire to leverage advanced AI and AI developers' concerns about the potential misuse of their technologies.