The Ex-Employee Challenging OpenAI’s Allegations About Erotica

As the story of AI unfolds, Steven Adler might just emerge as its Paul Revere, or at least one of several, when it comes to sounding the alarm on safety.

Recently, Adler, who dedicated four years to safety roles at OpenAI, penned an article for The New York Times with a striking title: “I Led Product Safety at OpenAI. Don’t Trust Its Claims About ‘Erotica.’” In this piece, he highlighted the challenges OpenAI encountered in allowing users to engage in erotic conversations with chatbots while safeguarding their mental well-being. “Nobody wanted to assume the role of the morality police, but we didn’t have effective ways to measure and oversee erotic usage,” he stated. “We determined that AI-driven erotica would need to be postponed.”

Adler wrote his op-ed in reaction to an announcement from OpenAI CEO Sam Altman, who revealed that the company planned to permit “erotica for verified adults.” In his response, Adler wrote that he had “significant concerns” about whether OpenAI had actually managed to, in Altman’s words, “mitigate” the mental health risks linked to users’ interactions with its chatbots.

After reading Adler’s article, I was eager to engage with him. He kindly accepted an invitation to visit the WIRED offices in San Francisco, where on this episode of The Big Interview, he shares insights from his four years at OpenAI, his views on the future of AI safety, and the challenges he proposes to companies developing chatbots.

This interview has been edited for length and clarity.

KATIE DRUMMOND: Before we start, I want to clarify a couple of things. First, you are, unfortunately, not the Steven Adler who played drums for Guns N’ Roses, correct?

STEVEN ADLER: That’s absolutely correct.

Got it. And second, you’ve had an extensive career in technology, particularly in the realm of artificial intelligence. So, before we dive in, could you share a bit about your career and your background, as well as what you’ve focused on?

I have experience across the AI sector, with a specific focus on safety aspects. Most recently, I spent four years at OpenAI, dealing with virtually every facet of the safety challenges: How can we enhance products for users while eliminating current risks? Additionally, how can we anticipate if AI systems are becoming significantly dangerous?
