OpenAI’s Teen Safety Tools Will Tread a Fine Line

OpenAI announced new teen safety features for ChatGPT on Tuesday, part of its ongoing effort to address concerns about how minors interact with chatbots. The company is building an age-identification system that estimates whether a user is under 18 and routes them to an “age-appropriate” version of the service that blocks graphic sexual content. If the system detects that a user is contemplating suicide or self-harm, it will notify the user’s parents. In an emergency, when parents cannot be reached, it may alert the authorities.
In a blog post about the announcement, CEO Sam Altman wrote that the company is trying to balance freedom, privacy, and teen safety.
“We recognize that these principles can conflict, and not everyone will agree with our solutions,” Altman remarked. “These are tough choices, but after discussions with experts, this approach feels right and we want to be clear about our intentions.”
While OpenAI generally emphasizes privacy and freedom for adult users, the company says teen safety comes first. By the end of September, it will launch parental controls that let parents link their child’s account to their own, manage conversations, and disable certain features. Parents will also receive alerts when “the system detects their teen is experiencing a moment of acute distress,” according to the company’s blog post, and can set limits on the hours when their children can use ChatGPT.
The initiatives come amid alarming reports of people dying by suicide or committing violence against family members after extended conversations with AI chatbots. Lawmakers have taken notice, and both Meta and OpenAI are under scrutiny. Earlier this month, the Federal Trade Commission asked Meta, OpenAI, Google, and other AI companies to hand over information about how their technologies affect kids, Bloomberg reported.
At the same time, OpenAI is operating under a court order requiring it to retain consumer chats indefinitely, a fact the company is reportedly unhappy about. Tuesday’s announcement is both a meaningful step toward protecting minors and a savvy PR move to reinforce the idea that conversations with chatbots are so personal that user privacy should be breached only in the most extreme circumstances.
“A Sexbot Avatar in ChatGPT”
From the conversations I’ve had with sources at OpenAI, the weight of protecting users falls heavily on many researchers. They want to build a user experience that’s fun and engaging, but the model can quickly veer into excessive flattery. It’s encouraging that companies like OpenAI are taking steps to protect minors, but absent federal regulation, nothing requires these firms to do the right thing.
In a recent interview, Tucker Carlson pressed Altman on exactly who makes these consequential decisions. Altman pointed to the model behavior team, which tunes the model for specific traits. “The person you should hold accountable for those decisions is me,” Altman said. “I’m a public face. Ultimately, I can either support or overturn one of those choices or the board’s decisions.”