OpenAI Prohibited Military Applications, but the Pentagon Evaluated Its Models via Microsoft Regardless.

OpenAI CEO Sam Altman remains under scrutiny this week following the company’s deal with the US military. Employees at OpenAI have voiced their disapproval of the agreement, which followed the collapse of Anthropic’s roughly $200 million contract with the Pentagon, and have urged Altman to disclose more details about the deal. In a social media post, Altman conceded that it appeared “sloppy.”

While this incident has garnered significant media attention, it may merely be the most visible instance of OpenAI implementing ambiguous policies regarding military access to its AI technology.

In 2023, OpenAI’s usage policy explicitly prohibited military access to its AI models. However, some OpenAI employees learned that the Pentagon had begun testing Azure OpenAI, a version of OpenAI’s models made available through Microsoft, according to two sources acquainted with the situation. At that time, Microsoft had a long-standing contract with the Department of Defense and was OpenAI’s largest investor, possessing extensive rights to commercialize the startup’s technology.

That same year, OpenAI employees noticed Pentagon officials visiting the company’s San Francisco headquarters, according to the sources, who chose to remain anonymous because they are bound by non-disclosure agreements.

Some employees were uneasy about working with the Pentagon, while others were unclear about what OpenAI’s usage policies actually covered, including whether they extended to models offered through Microsoft. Sources told WIRED that the answer was not evident to most employees at the time, though representatives from OpenAI and Microsoft clarified that Azure OpenAI products were not subject to OpenAI’s policies.

“Microsoft has a product called the Azure OpenAI Service that became available to the US Government in 2023 and is governed by Microsoft terms of service,” stated spokesperson Frank Shaw in a message to WIRED. Microsoft did not provide specific details about when Azure OpenAI became available to the Pentagon, but mentioned the service wasn’t approved for “top secret” government tasks until 2025.

“AI is already significantly influencing national security, and we believe it is crucial to have a role in ensuring it is implemented safely and responsibly,” OpenAI spokesperson Liz Bourgeois remarked in a statement. “We have been open with our employees during this development, offering regular updates and dedicated channels for teams to ask questions and directly engage with our national security team.”

The Department of Defense did not respond to WIRED’s inquiry.

In January 2024, OpenAI revised its usage policies, lifting the outright ban on military use. Several employees became aware of the update only through an article in The Intercept, according to the sources. Company leaders later discussed the change at an all-hands meeting, explaining how they would proceed cautiously in this domain.

In December 2024, OpenAI announced a partnership with Anduril to create and implement AI systems for “national security missions.” Before the announcement, OpenAI informed employees that the partnership would be limited in scope, focusing solely on unclassified workloads, according to the same sources. This approach differed from a contract Anthropic signed with Palantir, which involved the use of Anthropic’s AI for classified military tasks.

Palantir had approached OpenAI in fall 2024 about joining its “FedStart” program, an OpenAI representative confirmed to WIRED. OpenAI ultimately declined, telling employees that the arrangement posed too high a risk, according to two sources familiar with the situation. However, OpenAI continues to collaborate with Palantir in other capacities.

Around the time of the Anduril announcement, several dozen OpenAI employees joined a public Slack channel to voice their concerns regarding the company’s military partnerships, as confirmed by sources and a spokesperson. Some felt that the company’s models were too unreliable for sensitive tasks, such as processing credit card information, let alone supporting military operations.
