Trump Aims to Prohibit Anthropic’s Engagement with the U.S. Government

US President Donald Trump declared on Friday that he is directing all federal agencies to "immediately cease" the use of Anthropic's AI tools. This decision follows weeks of disagreements between Anthropic and high-ranking officials regarding military uses of artificial intelligence.
"The Leftwing extremists at Anthropic have made a DISASTROUS MISTAKE trying to STRONG-ARM the Department of War," Trump stated in a post on Truth Social.
Trump also mentioned that there would be a "six month phase-out period" for agencies utilizing Anthropic, potentially allowing time for further discussions between the government and the AI startup.
Neither the Pentagon nor Anthropic immediately replied to requests for comment.
Shortly after the President's announcement, Defense Secretary Pete Hegseth asserted that Anthropic would be labeled a "supply chain risk," a classification typically reserved for foreign entities deemed a threat to American national security. This designation will prevent the US military, along with its contractors and suppliers, from collaborating with the AI firm.
Hegseth also criticized Anthropic and its CEO, Dario Amodei, for the company's unwillingness to comply with the Pentagon's demands. "Cloaked in the self-righteous language of 'effective altruism,' they have tried to strong-arm the United States military into submission, a cowardly act of corporate virtue-signaling that prioritizes Silicon Valley ideology over American lives," Hegseth wrote on X.
The Department of Defense has sought to renegotiate the terms of an agreement signed with Anthropic and other companies last July, lifting restrictions on how AI can be utilized and allowing "all lawful use" of the technology. Anthropic opposed this modification, arguing it might enable AI to fully control lethal autonomous weapons or to conduct extensive surveillance on US citizens.
While the Pentagon does not currently deploy AI in these ways and has stated it has no intention of doing so, prominent Trump administration officials have criticized the notion of a civilian tech company dictating military applications of such a critical technology.
Anthropic was the first significant AI lab to collaborate with the US military, following a $200 million contract signed with the Pentagon last year. It developed several specialized models known as Claude Gov, which have fewer constraints than its standard offerings. Around the same time, Google, OpenAI, and xAI entered into similar contracts, but Anthropic remains the only AI company actively engaged with classified systems.
Anthropic's model is accessible through platforms provided by Palantir and Amazon's cloud service for classified military operations. Claude Gov is primarily employed for routine tasks, such as drafting reports and summarizing documents, but it is also utilized for intelligence analysis and military strategy, according to a source familiar with the matter who spoke to WIRED on the condition of anonymity due to restrictions on public discussion.
In recent years, Silicon Valley has shifted from largely steering clear of defense projects to increasingly engaging with them, ultimately becoming full-fledged military contractors. The conflict between Anthropic and the Pentagon is now testing the boundaries of this transition. This week, several hundred employees from OpenAI and Google signed an open letter supporting Anthropic and criticizing their own companies' moves to remove limitations on military applications of AI.
In a memo to OpenAI staff today, CEO Sam Altman stated that the company sympathizes with Anthropic and also views mass surveillance and fully autonomous weapons as a "red line." Altman noted that the company would seek to reach an agreement with the Pentagon that would allow it to continue its military collaborations, The Wall Street Journal reported.
