Anthropic vs Pentagon: Judge Questions AI Security Threat Designation
SAN FRANCISCO (AP) — A federal judge expressed deep skepticism Tuesday about the Pentagon’s rationale for designating Anthropic, a leading artificial intelligence company, as a security risk, following a dispute over the military’s intended use of its AI technology. U.S. District Judge Rita Lin repeatedly questioned the Trump administration’s justification for the move, which effectively blacklisted the San Francisco-based startup from some government contracts.
The hearing centered on Anthropic’s lawsuit alleging the Pentagon’s actions were an illegal retaliation for the company’s insistence on safeguards preventing its AI from being deployed in fully autonomous weapons systems or used for domestic surveillance, as reported by the Associated Press. Anthropic had demanded limitations on how its technology, including the Claude chatbot, could be utilized by the military. The Pentagon responded by labeling Anthropic a “supply chain risk,” a designation typically reserved for entities linked to foreign adversaries.
“What is troubling to me about these actions is they don’t seem to be tailored to the national security concerns,” Lin stated during the 90-minute hearing. She pressed government lawyers to explain why the administration took such a drastic step, questioning whether the designation was a legitimate assessment of risk or an attempt to punish Anthropic for voicing its concerns.
Anthropic’s legal team argued the administration’s actions were an “unlawful campaign of retaliation” that has already damaged the company’s reputation and threatens its future growth. Lawyer Michael Mongan told the court that Anthropic has suffered “irreparable and mounting injuries” as a result of the designation. The company is seeking a temporary order to halt the designation while the case proceeds.
The dispute has ignited a broader debate within Silicon Valley about the ethical implications of AI development and the appropriate level of government oversight. According to a Los Angeles Times report, tech leaders have quietly expressed support for Anthropic, arguing that AI technology is not yet ready for deployment in weapons systems and that strong-arming companies could stifle innovation. The case also highlights a growing tension between the tech sector and the Trump administration, which has taken a more assertive stance on national security issues.
The Department of Defense, now referred to internally as the Department of War (DoW), maintains it followed proper procedures in assessing Anthropic’s AI tools and determined they could not be relied upon during critical moments. Justice Department lawyer Eric Hamilton asserted that the administration should be given “substantial deference” in determining what constitutes a security risk and that Anthropic had proven itself to be an “untrustworthy and unreliable partner” during negotiations. He also stated the DoW “will continue to direct its operations without tech company influence.”
President Trump publicly criticized Anthropic in late February, calling the company a part of the “radical, woke left” and ordering federal employees to cease using its technology. The Pentagon was given six months to phase out Anthropic’s technology, which is currently integrated into classified military platforms, including those used in operations related to the Iran conflict. This public rebuke, coupled with a similar statement from Defense Secretary Pete Hegseth, raised concerns that Anthropic could lose other key government contracts, though the administration has since tempered those suggestions in court filings.
Judge Lin has requested additional evidence from both sides by Wednesday and indicated she expects to rule on Anthropic’s request for a temporary injunction before the end of the week. The outcome could have significant implications for the future of AI development and the relationship between the tech industry and the U.S. military. It could also influence which defense-tech companies, particularly those in Southern California, benefit from government contracts, according to the Los Angeles Times.
The Pentagon is already exploring alternatives to Anthropic’s technology, with plans to utilize AI solutions from Google, OpenAI and xAI, as reported by WIRED.
