Judge Warns That Pentagon’s Efforts to Undermine Anthropic Are Concerning

The US Department of Defense seems to be unjustly penalizing Anthropic for its efforts to limit the military’s use of its AI technologies, according to US District Judge Rita Lin during a court session on Tuesday.

“It appears to be an effort to undermine Anthropic,” Lin commented on the Pentagon’s classification of the company as a supply-chain risk. “It seems like [the department] is retaliating against Anthropic for attempting to bring public attention to this contract conflict, which would obviously breach the First Amendment.”

Anthropic has initiated two federal lawsuits claiming that the Trump administration’s move to label it a security threat constituted illegal retribution. This designation followed Anthropic’s advocacy for restrictions on military applications of its AI. The hearing on Tuesday was linked to a case filed in San Francisco.

Anthropic is seeking a temporary order to halt the designation. The company hopes this relief will persuade some of its hesitant clients to remain on board a little longer. Lin can grant a pause only if she believes Anthropic is likely to prevail in the overall case. Her decision on the injunction is anticipated in the coming days.

The situation has ignited a wider public discourse on the increasing involvement of artificial intelligence in military operations and whether tech companies in Silicon Valley should defer to government guidelines regarding the deployment of their innovations.

The Department of Defense, now referring to itself as the Department of War (DoW), contends that it followed appropriate protocols in concluding that Anthropic’s AI tools were no longer dependable for critical operations. It has urged Lin not to second-guess its evaluation of the threat that Anthropic poses to national security.

“The concern is that Anthropic, rather than simply voicing objections, may assert that there’s an issue with what DoW is doing and could potentially alter the software … so it does not function as DoW anticipates,” said Trump administration attorney Eric Hamilton during the hearing.

Lin remarked that it is the duty of Defense Secretary Pete Hegseth—not hers—to determine if Anthropic is a suitable vendor for the department. However, she stated it is her responsibility to assess whether Hegseth overstepped legal boundaries beyond merely terminating Anthropic’s government contracts. She expressed concern that the security designation and directives restricting the use of Anthropic’s AI tool, Claude, by government contractors “do not appear to align with articulated national security concerns.”

As tensions between Anthropic and the government escalated last month, Hegseth posted on X that “effective immediately, no contractor, supplier, or partner that does business with the United States military may engage in any commercial activities with Anthropic.”

However, on Tuesday, Hamilton conceded that Hegseth lacks the legal power to prevent military contractors from using Anthropic for projects unrelated to the Department of Defense. When Lin inquired about Hegseth’s reasoning for the post, Hamilton replied, “I don’t know.”

Lin pressed Hamilton on whether the Pentagon had contemplated less severe alternatives before designating Anthropic as a supply-chain risk. She characterized this designation as a potent authority typically reserved for foreign adversaries, terrorists, and other hostile entities.

Michael Mongan, an attorney from WilmerHale representing Anthropic, emphasized that it was unusual for the government to pursue a “stubborn” negotiation partner with such a designation.

The Pentagon has stated that it is working to phase out Anthropic technologies over the upcoming months in favor of alternatives from Google, OpenAI, and xAI. It also mentioned measures have been implemented to prevent Anthropic from making any unauthorized changes during the transition. Hamilton indicated uncertainty about whether Anthropic could modify its AI models without Pentagon approval; the company asserts it cannot.

A ruling in the other case, pending before the federal appeals court in Washington, DC, is expected soon and will be issued without a hearing.
