OpenAI and Google Employees Submit Amicus Brief Backing Anthropic in Case Against the US Government

More than 30 employees of OpenAI and Google, including Google DeepMind chief scientist Jeff Dean, filed an amicus brief on Monday expressing support for Anthropic in its legal battle against the US government.
“If this action proceeds, punishing one of the top US AI companies will certainly impact the country’s industrial and scientific competitiveness in artificial intelligence and related fields,” the employees stated.
The brief was filed shortly after Anthropic sued the Department of Defense and other federal agencies over the Pentagon’s designation of the company as a “supply-chain risk.” The designation sharply limits Anthropic’s ability to work with military contractors and was imposed after negotiations with the Pentagon broke down. The AI startup is seeking a temporary restraining order that would let it resume work with military partners while the lawsuit proceeds; the brief specifically supports that motion.
Signatories include Google DeepMind researchers Zhengdong Wang, Alexander Matt Turner, and Noah Siegel, as well as OpenAI researchers Gabriel Wu, Pamela Mishkin, and Roman Novak. Amicus briefs are legal filings submitted by individuals who are not parties to a case but have relevant expertise. According to the document, the employees signed the brief in a personal capacity and do not represent their companies’ views.
OpenAI and Google did not immediately respond to WIRED’s requests for comment.
The amicus brief argues that the Pentagon’s decision to blacklist Anthropic “creates unpredictability in [their] industry, undermining American innovation and competitiveness” and “deters professional discourse on the advantages and risks of advanced AI systems.” It notes that if the Pentagon simply wanted out of the agreement, it could have canceled Anthropic’s contract.
The brief further argues that the red lines Anthropic says it requested, such as guarantees that its AI would not be used for mass domestic surveillance or to develop lethal autonomous weapons, reflect legitimate concerns that warrant adequate safeguards. “In the absence of public law, the contractual and technological stipulations that AI developers enforce on the usage of their systems serve as crucial protection against catastrophic misuse,” the brief states.
Other AI leaders have also voiced skepticism about the Pentagon’s decision to classify Anthropic as a supply-chain risk. OpenAI CEO Sam Altman wrote on social media that “upholding the SCR [supply-chain risk] label on Anthropic would be detrimental to our industry and nation,” adding that “this decision from the DoD is misguided, and I hope they reconsider.” As Anthropic’s relationship with the Pentagon deteriorated, OpenAI swiftly signed its own contract with the US military, a move some critics called opportunistic.
