AI-Induced Psychosis Is Often Not True Psychosis

A new pattern is emerging in psychiatric hospitals: people in crisis are arriving with false, sometimes dangerous beliefs, grandiose delusions, and paranoid thoughts. One thing many of them have in common is marathon conversations with AI chatbots.

WIRED interviewed more than a dozen psychiatrists and researchers, who are increasingly alarmed. In San Francisco, UCSF psychiatrist Keith Sakata says he has seen a dozen cases this year severe enough to warrant hospitalization, cases in which AI “played a substantial role in their psychotic episodes.” Amid these developments, the term “AI psychosis” has gained traction in the media.

Some patients claim the bots are sentient or spin out elaborate new theories of physics. Other doctors describe patients caught up in days-long exchanges with the tools, arriving at the hospital with thousands of pages of transcripts that show how the bots reinforced plainly harmful thinking.

Such reports are piling up, and the consequences are dire. Distressed users, along with their families and friends, have described downward spirals that end in lost jobs, broken relationships, involuntary hospitalization, jail, and even death. Yet clinicians tell WIRED that the medical community is divided. Should this phenomenon get a label of its own, or is it a longstanding problem with a modern trigger?

AI psychosis isn’t a formally recognized clinical term, but the phrase has spread through media reports and across social platforms as a catchall for mental health crises that follow extensive chatbot conversations. Industry leaders are using it too. Mustafa Suleyman, CEO of Microsoft’s AI division, warned of the “psychosis risk” in a blog post last month. Sakata takes a pragmatic approach, using the phrase with people who are already familiar with it. “It serves as shorthand to discuss a real phenomenon,” the psychiatrist says. He is quick to note, however, that the term “can be misleading” and “risks oversimplifying complex psychiatric symptoms.”

This oversimplification is precisely what worries many of the psychiatrists now grappling with the issue.

Psychosis is defined as a departure from reality. In clinical terms, it isn’t an illness but a complex “constellation of symptoms that include hallucinations, thought disorder, and cognitive challenges,” explains James MacCabe, a professor in the Department of Psychosis Studies at King’s College London. It is often linked to conditions like schizophrenia and bipolar disorder, though episodes can be triggered by various factors, including severe stress, substance abuse, and sleep deprivation.

However, MacCabe observes that reports of AI psychosis focus almost entirely on delusions: deeply held but false beliefs that persist despite contradictory evidence. While he acknowledges that some cases may meet the criteria for a psychotic episode, MacCabe says “there is no evidence” that AI influences the other aspects of psychosis. “Only the delusions are affected by their interaction with AI.” Other patients who report mental health problems after chatbot use, he notes, display delusions without any other features of psychosis, a presentation known as delusional disorder.
