Judge Halts Anthropic Supply-Chain Risk Designation

Anthropic has secured a preliminary injunction preventing the US Department of Defense from classifying it as a supply-chain risk, a decision that could allow its customers to keep doing business with the company. The ruling by federal district judge Rita Lin in San Francisco on Thursday is a notable setback for the Pentagon and a significant win for the generative AI firm as it seeks to preserve its business and reputation.
Judge Lin justified the temporary relief by stating, "The defendants' classification of Anthropic as a 'supply chain risk' is likely both legally unfounded and arbitrary." She emphasized that the Department of War lacks a valid reason to assume that Anthropic's clear stance on usage restrictions implies a potential for sabotage.
Neither Anthropic nor the Pentagon immediately responded to the ruling.
For the past couple of years, the Department of Defense, which now refers to itself as the Department of War, has used Anthropic's Claude AI tools to draft sensitive documents and analyze classified information. This month, however, it began phasing out Claude after determining that Anthropic could no longer be trusted. Pentagon officials cited multiple occasions on which Anthropic purportedly imposed usage restrictions that the previous administration deemed unnecessary.
The administration subsequently issued several orders, including one designating the company a supply-chain risk. The orders have effectively wound down Claude's use across the federal government and hurt Anthropic's sales and public image. In response, the company filed two lawsuits contesting the sanctions as unconstitutional. During a hearing on Tuesday, Lin said the government appeared to unlawfully "cripple" and "punish" Anthropic.
Lin's ruling on Thursday "restores the status quo" to what existed on February 27, prior to the directives being issued. She clarified, "This does not prevent any defendant from pursuing lawful actions available to them" at that time. "For example, this order does not mandate the Department of War to utilize Anthropic's products or services and does not inhibit the Department from opting for other AI providers, provided those actions comply with relevant regulations, statutes, and constitutional provisions."
The ruling indicates that the Pentagon and other federal agencies may still terminate contracts with Anthropic and direct contractors that use Claude in their own tools to stop doing so, as long as they do not cite the supply-chain risk designation as the justification.
The immediate ramifications remain uncertain, as Lin's order will not take effect for a week. In addition, a federal appeals court in Washington, DC, has yet to rule on Anthropic's second lawsuit, which addresses different legal issues that also prevent the company from supplying software to the military.
Still, Anthropic may be able to use Lin's ruling to reassure customers wary of working with an industry outsider that the law could ultimately be on its side. Lin has not set a timetable for a final ruling.
