
Psychosis, mania and depression are hardly new issues, but experts fear A.I. chatbots may be making them worse. With new data suggesting that hundreds of thousands of chatbot users show signs of mental distress, companies like OpenAI, Anthropic, and Character.AI are starting to take risk-mitigation steps at what could prove to be a critical moment.
This week, OpenAI released data indicating that 0.07 percent of ChatGPT’s 800 million weekly users display signs of mental health emergencies related to psychosis or mania. While the company described these cases as “rare,” that percentage still translates to hundreds of thousands of people.
In addition, about 0.15 percent of users—or roughly 1.2 million people each week—express suicidal thoughts, while another 1.2 million appear to form emotional attachments to the anthropomorphized chatbot, according to OpenAI’s data.
Is A.I. worsening the modern mental health crisis or simply revealing one that was previously hard to measure? Studies estimate that between 15 and 100 out of every 100,000 people develop psychosis annually, a range that underscores how difficult the condition is to quantify. Meanwhile, the latest Pew Research Center data shows that about 5 percent of U.S. adults experience suicidal thoughts—a figure higher than in earlier estimates.
OpenAI’s findings may hold weight because chatbots can lower barriers to mental health disclosure, bypassing obstacles such as cost, stigma, and limited access to care. A recent survey of 1,000 U.S. adults found that one in three A.I. users has shared secrets or deeply personal information with their chatbot.
Still, chatbots lack the duty of care required of licensed mental health professionals. “If you’re already moving towards psychosis and delusion, feedback that you got from an A.I. chatbot could definitely exacerbate psychosis or paranoia,” Jeffrey Ditzell, a New York-based psychiatrist, told Observer. “A.I. is a closed system, so it invites being disconnected from other human beings, and we don’t do well when isolated.”
“I don’t think the machine understands anything about what’s going on in my head. It’s simulating a friendly, seemingly qualified specialist. But it isn’t,” Vasant Dhar, an A.I. researcher teaching at New York University’s Stern School of Business, told Observer.
“There’s got to be some sort of responsibility that these companies have, because they’re going into spaces that can be extremely dangerous for large numbers of people and for society in general,” Dhar added.
What A.I. companies are doing about the issue
Companies behind popular chatbots are scrambling to implement preventative and remedial measures.
OpenAI’s latest model, GPT-5, shows improvements in handling distressing conversations compared with previous versions. A small third-party community study found that GPT-5 demonstrated a marked, though still imperfect, improvement over its predecessor. The company has also expanded its crisis hotline recommendations and added “gentle reminders to take breaks during long sessions.”
In August, Anthropic announced that its Claude Opus 4 and 4.1 models can now end conversations that appear “persistently harmful or abusive.” However, users can still work around the feature by starting a new chat or editing previous messages “to create new branches of ended conversations,” the company noted.
After a series of wrongful death and negligence lawsuits, Character.AI announced this week that it will ban open-ended chats for minors. Users under 18 now face a two-hour limit on such chats with the platform’s A.I. characters, and a full ban takes effect on Nov. 25.
Meta AI recently tightened its internal guidelines that had previously allowed the chatbot to produce sexual roleplay content—even for minors.
Meanwhile, xAI’s Grok and Google’s Gemini continue to face criticism for their overly agreeable behavior. Users say Grok prioritizes agreement over accuracy, leading to problematic outputs. Gemini has drawn controversy after the disappearance of Jon Ganz, a Virginia man who went missing in Missouri on April 5 following what friends described as extreme reliance on the chatbot. (Ganz has not been found.)
Regulators and activists are also pushing for legal safeguards. On Oct. 28, Senators Josh Hawley (R-Mo.) and Richard Blumenthal (D-Conn.) introduced the Guidelines for User Age-verification and Responsible Dialogue (GUARD) Act, which would require A.I. companies to verify user ages and prohibit minors from using chatbots that simulate romantic or emotional attachment.
