Anthropic Files Lawsuit Against the Department of Defense Over Supply Chain Risk Classification

On Monday, Anthropic filed a federal lawsuit against the US Department of Defense and other federal agencies, contesting the Pentagon's classification of the AI firm as a "supply-chain risk."
The Pentagon formally flagged Anthropic last week, after a public dispute over restrictions on military uses of the company's generative AI technology, including autonomous weapons.
“We do not find this action to be legally valid, and we feel compelled to contest it in court,” stated Anthropic CEO Dario Amodei in a Thursday blog post.
Filed in federal court in California, the lawsuit asks a judge to vacate the designation and bar federal agencies from enforcing it. “The Constitution does not permit the government to utilize its vast authority to penalize a company for its protected expression,” Anthropic asserted in its filing. “Anthropic turns to the judiciary as a final measure to assert its rights and halt the Executive’s unlawful retaliatory campaign.”
Anthropic is also requesting a temporary restraining order so it can continue selling to the government. The company proposed that the government respond to the request by 9 pm Pacific on Wednesday, with a judge hearing the matter on Friday.
The AI startup, which makes the Claude family of AI models, risks forfeiting hundreds of millions of dollars in annual revenue from the Pentagon and other US government entities. It may also lose clients that integrate Claude into services for federal agencies: several Anthropic customers have reportedly begun exploring alternatives because of the Defense Department's risk designation.
Amodei said the “vast majority” of Anthropic’s clients should see no changes. The US government’s designation, he indicated, “clearly pertains only to the use of Claude by clients as part of contracts with the” military, suggesting that military contractors’ broader use of Anthropic’s technology should remain unaffected.
The Department of Defense, also referred to as the Department of War, declined to comment on the lawsuit filed by Anthropic.
White House spokesperson Liz Huston told WIRED on Friday that “our military will adhere to the United States Constitution—not to the stipulations of any progressive AI company.” She emphasized the administration’s commitment to ensuring its “brave warfighters have the necessary tools for success and will guarantee that they are never held hostage by the ideological agendas of Big Tech leaders.”
Legal experts in government contracting say Anthropic faces an uphill battle in court. The regulations that empower the Department of Defense to categorize a tech firm as a supply-chain risk offer limited avenues for appeal. “It’s entirely within the government’s discretion to define the terms of a contract,” said Brett Johnson, a partner at the law firm Snell & Wilmer. He noted that the Pentagon has the authority to determine that a product of concern used by its suppliers “impedes the government’s ability to fulfill its mission.”
According to Johnson, Anthropic’s most viable path to success in court may hinge on demonstrating that it was specifically singled out. Shortly after Defense Secretary Pete Hegseth announced his supply-chain risk designation for Anthropic, competitor OpenAI revealed it had secured a new contract with the Pentagon. This could bolster Anthropic’s legal stance if the firm can show it sought similar conditions as those granted to the ChatGPT creator.
OpenAI said its agreement included contractual and technical safeguards to ensure its technology would not be used for mass domestic surveillance or to control autonomous weapons systems. The company opposed the action against Anthropic and said it was unclear why its rival could not negotiate similar terms with the government.
Military Priority
Hegseth has made military adoption of AI technologies a priority; posters recently displayed in the Pentagon show him urging, “I want you to use AI.” The disagreement with Anthropic escalated in January, when Hegseth instructed several AI suppliers to agree that the department could use their technologies for any lawful purpose.
Anthropic, the sole provider of AI chatbot and analysis tools for the military’s most sensitive operations, has pushed back on that position. The company asserts that its technologies are not yet advanced enough for mass domestic surveillance or fully autonomous weapons. Hegseth, for his part, has claimed that Anthropic seeks veto power over decisions that should rest with the Defense Department.
