Conflicting Decisions Put Anthropic in a State of ‘Supply-Chain Risk’ Uncertainty

Anthropic “has not met the strict criteria” needed to temporarily lift the supply-chain-risk designation imposed by the Pentagon, a US appeals court in Washington, DC, determined on Wednesday. This ruling contrasts with a previous one from a lower court judge in San Francisco, leaving unclear how to reconcile the conflicting preliminary decisions.

The government imposed sanctions on Anthropic under two separate supply-chain laws with similar implications; the San Francisco and Washington, DC, courts each addressed only one of the two. Anthropic asserts it is the first US firm subjected to these laws, which are typically aimed at penalizing foreign entities posing national security threats.

“Allowing a stay would compel the United States military to continue engaging with an undesired vendor of essential AI services during a critical military conflict,” wrote the three-judge appellate panel on Wednesday, characterizing this situation as unprecedented. They acknowledged that Anthropic might face financial repercussions from the designation but prioritized avoiding “significant judicial interference with military operations” or “lightly disregarding” military assessments regarding national security.

The San Francisco judge previously determined that the Department of Defense likely acted in bad faith toward Anthropic, motivated by displeasure over the AI firm's proposed usage limits and its public opposition to those restrictions. Last week, that judge ordered the supply-chain-risk label removed, and the Trump administration complied by restoring access to Anthropic's AI tools within the Pentagon and other federal agencies.

Anthropic spokesperson Danielle Cohen expressed gratitude that the Washington, DC, court “acknowledged the urgency for resolution” and conveyed confidence that “the courts will ultimately conclude that these supply chain designations were unlawful.”

The Department of Defense has not yet responded to a request for comment, but acting attorney general Todd Blanche shared a statement on X. “Today’s DC Circuit stay permitting the government to designate Anthropic as a supply-chain risk is a decisive victory for military readiness,” he stated. “Our position has been clear from the outset—our military requires complete access to Anthropic’s models if its technology is integrated into our sensitive systems. Military authority and operational control belong to the Commander-in-Chief and the Department of War, not to a tech firm.”

These cases test the extent of executive branch power over tech companies. The conflict between Anthropic and the Trump administration is unfolding as the Pentagon leverages AI in its operations against Iran. The company argues it is being unlawfully penalized for asserting that its AI tool, Claude, lacks the accuracy necessary for sensitive tasks, including executing lethal drone strikes autonomously.

Several experts in government contracting and corporate rights have informed WIRED that Anthropic has a compelling case against the government, though courts occasionally refrain from overruling the White House in national security matters. Some AI researchers argue the Pentagon’s actions against Anthropic create a “chilling effect” on professional discussions regarding AI system performance.

Anthropic claims in court that it has lost business as a result of the designation, which government lawyers maintain prohibits the Pentagon and its contractors from incorporating the company's Claude AI into military projects. As long as Trump remains in office, Anthropic could struggle to regain its substantial presence within the federal government.

Final rulings in the company’s two lawsuits may take months to materialize. The Washington court is set to hear oral arguments on May 19.

The parties have disclosed few details regarding how the Department of Defense has utilized Claude or the extent of progress made in transitioning personnel to alternative AI tools from Google DeepMind, OpenAI, or others. The military, which under President Trump labels itself the Department of War, has stated that it is ensuring Anthropic cannot intentionally compromise its AI tools during the transition.

Update 4/8/26 7:27 EDT: This story has been revised to incorporate a statement from acting attorney general Todd Blanche.
