A Key Researcher Involved in ChatGPT’s Mental Health Initiatives Is Departing OpenAI

A leader in OpenAI’s safety research, who played a crucial role in shaping how ChatGPT responds to users facing mental health challenges, announced internally last month that she is leaving the company, according to WIRED. Andrea Vallone, who heads a safety research team known as model policy, is set to depart OpenAI by the end of the year.
OpenAI spokesperson Kayla Wood confirmed Vallone’s upcoming exit. Wood stated that OpenAI is actively searching for a successor, and in the meantime, Vallone’s team will report directly to Johannes Heidecke, the head of safety systems at the company.
Vallone’s departure comes amid increased scrutiny of how OpenAI’s flagship product handles users in distress. In recent months, several lawsuits have been filed against OpenAI alleging that users developed unhealthy attachments to ChatGPT; some claim the chatbot may have contributed to mental health crises or reinforced suicidal ideation.
In light of this scrutiny, OpenAI has been striving to understand the appropriate ways for ChatGPT to respond to distressed users and enhance the chatbot’s replies. The model policy team is at the forefront of this initiative, producing an October report detailing the company’s advancements and consultations with over 170 mental health professionals.
The report indicated that, in a given week, hundreds of thousands of ChatGPT users may show signs of a manic or psychotic crisis, and more than a million have conversations that include explicit indicators of potential suicidal planning or intent. Through an update to GPT-5, OpenAI reported reducing undesirable responses in these conversations by 65 to 80 percent.
“Over the past year, I led OpenAI’s research on a topic with minimal established precedents: how should models respond when facing signs of emotional over-reliance or early indicators of mental health distress?” Vallone wrote in a LinkedIn post.
Vallone did not reply to WIRED’s request for comment.
Making ChatGPT engaging without letting it lapse into excessive flattery is a fundamental challenge for OpenAI. The company is aggressively working to grow ChatGPT’s user base, which now exceeds 800 million people per week, as it competes with AI chatbots from Google, Anthropic, and Meta.
After the release of GPT-5 in August, users complained that the new model felt unexpectedly cold. With the recent update to ChatGPT, the company said it had significantly reduced sycophancy while preserving the chatbot’s “warmth.”
Vallone’s departure follows an August reorganization of another team focused on ChatGPT’s responses to distressed users, known as model behavior. Joanne Jang, that group’s former leader, left her position to start a new team exploring novel methods of human–AI interaction. The remaining model behavior staff were moved under post-training lead Max Schwarzer.
