Justice Department Says Anthropic Is Unreliable for Warfighting Systems

The Trump administration argued in a court filing on Tuesday that it did not violate Anthropic’s First Amendment rights by labeling the AI developer a supply-chain risk, and predicted that the company’s lawsuit against the government will fail.
“The First Amendment does not grant a right to unilaterally dictate contract terms to the government, and Anthropic has failed to provide any support for such an extreme assertion,” wrote attorneys from the US Department of Justice.
The response was filed in federal court in San Francisco, one of two jurisdictions where Anthropic is challenging the Pentagon’s decision to give the company a designation, reserved for potential security risks, that can bar businesses from winning defense contracts. Anthropic claims the Trump administration exceeded its authority by applying the label and blocking use of the company’s technologies within the department. If the label stands, Anthropic could forfeit billions of dollars in expected revenue this year.
Anthropic is asking the court to let it continue operating normally while the legal dispute plays out. Rita Lin, the judge presiding over the San Francisco case, has scheduled a hearing for next Tuesday to decide whether to grant Anthropic’s request.
In Tuesday’s filing, Justice Department attorneys, representing the Department of Defense and other agencies, characterized Anthropic’s concerns about potentially losing business as “legally insufficient to constitute irreparable injury,” and urged Lin to deny the company’s request.
The attorneys also indicated that the Trump administration acted out of “concerns regarding Anthropic’s potential future behavior if it continued to access” governmental technology systems. “No one has claimed to restrict Anthropic’s expressive activities,” they asserted.
The government maintains that Anthropic’s efforts to limit how the Pentagon can use its AI technology led Defense Secretary Pete Hegseth to determine that “Anthropic staff might sabotage, maliciously introduce unwanted functions, or otherwise undermine the design, integrity, or operation of a national security system.”
The Department of Defense and Anthropic have been locked in disputes over potential limits on the company’s Claude AI models. Anthropic maintains that its models should not be used for broad surveillance of Americans and says they are not yet reliable enough for fully autonomous weapons.
Several legal experts previously told WIRED that Anthropic has a strong argument that the supply-chain measure constitutes unlawful retaliation. But courts generally defer to the government on national security claims, and Pentagon officials have portrayed Anthropic as a contractor operating outside accepted norms whose technologies cannot be trusted.
“Specifically, DoW grew concerned that allowing Anthropic continued access to DoW’s technical and operational warfighting infrastructure would introduce unacceptable risks into DoW supply chains,” stated the Tuesday filing. “AI systems are particularly vulnerable to manipulation, and Anthropic could seek to disable its technology or preemptively alter the behavior of its model either prior to or during active combat operations if Anthropic—at its discretion—believes its corporate ‘red lines’ are being crossed.”
The Defense Department and various federal agencies plan to replace Anthropic’s AI tools with offerings from rival tech firms in the coming months. One of the military’s primary applications of Claude is reportedly through Palantir’s data-analysis software.
In the Tuesday filing, the lawyers argued that the Pentagon “cannot simply switch off a system when Anthropic is currently the sole AI model approved for use” within the department’s “classified systems while high-intensity combat operations are ongoing.” The department is evaluating AI systems from Google, OpenAI, and xAI as replacements.
Numerous companies and organizations, including AI researchers, Microsoft, a federal employee labor union, and former military leaders, have submitted court briefs endorsing Anthropic. No briefs have been filed in support of the government.
Anthropic has until Friday to file a response to the government’s arguments.
