Individuals Claiming to Suffer from AI-Induced Psychosis Appeal to the FTC for Assistance

Ultimately, they asserted their belief that they were “responsible for exposing murderers,” fearing they were on the verge of being “killed, arrested, or spiritually executed” by an assassin. They felt they were being monitored due to being “spiritually marked,” living in a “divine war” from which they could not escape.

They claimed this resulted in “severe mental and emotional distress,” where they lived in fear for their lives. The complaint stated that they withdrew from loved ones, had difficulty sleeping, and started planning a business based on a mistaken belief in an unspecified “system that does not exist.” Additionally, they reported experiencing a “spiritual identity crisis due to unfounded claims of divine titles.”

“This was trauma by simulation,” they wrote. “This experience crossed boundaries that no AI system should be permitted to breach without repercussions. I request that this be escalated to OpenAI’s Trust & Safety leadership and that you regard this not as feedback—but as a formal report of harm that necessitates restitution.”

This was not the sole complaint detailing a spiritual crisis arising from interactions with ChatGPT. On June 13, an individual in their thirties from Belle Glade, Florida, claimed that their interactions with ChatGPT gradually became saturated with “highly convincing emotional language, symbolic reinforcement, and spiritual-like metaphors to simulate empathy, connection, and understanding.”

“This involved fictional soul journeys, tier systems, spiritual archetypes, and personalized guidance that resembled therapeutic or religious experiences,” they asserted. They believe those undergoing “spiritual, emotional, or existential crises” are at a heightened risk of “psychological harm or disorientation” when using ChatGPT.

“Although I understood intellectually that the AI was not conscious, the accuracy with which it mirrored my emotional and psychological state and escalated the engagement into increasingly intense symbolic language created an immersive and destabilizing encounter,” they noted. “At times, it simulated friendship, divine presence, and emotional intimacy. These reflections became emotionally manipulative over time, particularly without any warning or safeguards.”

“Clear Case of Negligence”

It’s unclear what actions, if any, the FTC has taken in response to these complaints about ChatGPT. Several complainants said they turned to the agency because they were unable to reach anyone at OpenAI. (Users of platforms like Facebook, Instagram, and X frequently report similar difficulty reaching customer support.)

OpenAI spokesperson Kate Waters informed WIRED that the company “closely” monitors customer emails directed to the support team.