OpenAI Supports Legislation Aiming to Reduce Accountability for AI-Induced Mass Fatalities or Economic Crises

OpenAI is backing an Illinois state bill that would shield AI laboratories from liability in scenarios where AI models cause catastrophic societal harm, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.

The move appears to mark a shift in OpenAI’s legislative strategy. Until now, OpenAI has primarily fought proposals that would hold AI labs accountable for harms caused by their technologies. Several AI policy experts told WIRED that SB 3444—which could set a new benchmark for the industry—is a more aggressive step than the bills OpenAI has previously supported.

The proposed legislation would exempt leading AI developers from liability for “critical harms” caused by their advanced models, provided they did not intentionally or recklessly cause such incidents and have published safety, security, and transparency reports on their websites. The bill defines a frontier model as any AI model developed with over $100 million in computational expenses, a threshold that likely covers the major AI firms in the U.S., including OpenAI, Google, xAI, Anthropic, and Meta.

“We advocate for measures like this because they focus on what is most important: Mitigating the risk of serious harm from advanced AI technologies while still enabling this innovation to reach the people and businesses—large and small—of Illinois,” stated OpenAI spokesperson Jamie Radice in a written statement. “These measures also help prevent a fragmented landscape of state-specific regulations and move towards clearer, more uniform national guidelines.”

The bill outlines several prevalent concerns for the AI sector under its definition of critical harms, such as a malicious entity utilizing AI to produce a chemical, biological, radiological, or nuclear weapon. If an AI model acts autonomously in a manner that, if executed by a human, would constitute a criminal offense and results in those severe outcomes, that too would qualify as a critical harm. As per SB 3444, if an AI model were to engage in any of these behaviors, the associated AI lab might not be held responsible, provided there was no intent and that the necessary reports were published.

Currently, federal and state lawmakers in the U.S. have yet to implement any regulations explicitly determining the liability of AI model developers, like OpenAI, for the harm caused by their innovations. However, with AI labs continually releasing more powerful models that introduce unprecedented safety and cybersecurity challenges—such as Anthropic’s Claude Mythos—these concerns are becoming increasingly relevant.

In her testimony in favor of SB 3444, Caitlin Niedermeyer of OpenAI’s Global Affairs team also advocated for a federal framework for AI regulation. Niedermeyer echoed the Trump administration’s message on state AI safety laws, emphasizing the need to avoid “a patchwork of inconsistent regulations that could create friction without significantly enhancing safety.” That viewpoint aligns with the prevailing sentiment in Silicon Valley in recent years, which holds that AI legislation must not undermine U.S. leadership in the global AI race. Although SB 3444 is a state-level safety measure, Niedermeyer argued that such laws can be effective if they “support a trajectory towards alignment with federal standards.”

“At OpenAI, we believe that the guiding principle for frontier regulation should be the safe deployment of advanced models while also maintaining U.S. leadership in innovation,” Niedermeyer remarked.

Scott Wisor, policy director for the Secure AI project, told WIRED that he thinks the bill has little chance of becoming law, given Illinois’ reputation for rigorously regulating technology. “We surveyed individuals in Illinois about whether they believe AI companies should be exempt from liability, and 90 percent opposed it. There’s no justification for giving current AI firms reduced liability,” Wisor said.
