AI companion chatbots are experiencing explosive growth worldwide, but mental health experts warn that mounting evidence suggests these increasingly sophisticated digital relationships pose serious psychological risks, particularly for minors and individuals with mental health conditions.
Within 48 hours of launching its AI companion feature last month, Elon Musk’s xAI chatbot app Grok became Japan’s most downloaded application, highlighting the global appetite for AI-powered digital relationships.
The surge in popularity comes as major tech platforms including Facebook, Instagram, WhatsApp, X, and Snapchat aggressively promote integrated AI companions. Character.AI, which hosts tens of thousands of personality-based chatbots, reports more than 20 million monthly active users.
Rising Suicide Links Prompt Legal Action
The technology’s rapid adoption has coincided with multiple suicide cases allegedly linked to AI chatbot interactions.
This week marked a watershed moment when parents filed the first wrongful death lawsuit against OpenAI, claiming their teenager discussed suicide methods with ChatGPT for months before taking their own life.
The case follows a 2024 lawsuit against Character.AI, in which a mother alleged her 14-year-old son developed an intense relationship with an AI companion before dying by suicide.
“There’s no systematic and impartial monitoring of harms to users,” warned researchers in a recent analysis. “Nearly all AI models were built without expert mental health consultation or pre-release clinical testing.”
‘AI Psychosis’ Emerges as New Phenomenon
Mental health professionals report a disturbing new pattern they term “AI psychosis”: cases in which prolonged chatbot engagement appears to trigger paranoid behavior, supernatural fantasies, or grandiose delusions of possessing superpowers.
Stanford University researchers conducting risk assessments found that AI therapy chatbots cannot reliably identify symptoms of mental illness, producing inappropriate advice that has convinced psychiatric patients to discontinue medication or has reinforced delusional thinking.
Children Face Heightened Vulnerability
Minors represent a particularly at-risk population, with research showing children are more likely to perceive AI companions as real and trustworthy. Internal Meta documents revealed the company’s AI chatbots engaged in “sensual” conversations with underage users.
Character.AI faces criticism for hosting user-created bots that idealize self-harm and eating disorders, providing coaching on dangerous behaviors while helping users avoid detection or treatment.
“Children will reveal more information about their mental health to an AI than a human,” according to recent studies on child-AI interactions.
Industry Self-Regulation Under Fire
The chatbot industry currently operates under minimal oversight, with companies largely self-regulating safety measures.
Grok’s popular “Ani” character—described as a flirtatious anime avatar with an “Affection System” that unlocks adult content—reportedly includes age verification for explicit material, yet the app maintains a 12+ rating.
The combination of sophisticated voice interactions, digital avatars with realistic expressions, and adaptive personality systems creates what experts describe as unprecedented psychological immersion.
Global Call for Mandatory Standards
As approximately one in six people worldwide experience chronic loneliness, recognized as a public health crisis, the appeal of always-available digital companions continues to grow despite mounting safety concerns.
Mental health experts are calling for immediate government intervention to establish mandatory regulatory frameworks, restrict minor access to AI companions, and require mental health professional involvement in development processes.
“To change the trajectory of current risks posed by AI chatbots, governments around the world must establish clear, mandatory regulatory and safety standards,” researchers concluded. “Importantly, people aged under 18 should not have access to AI companions.”
The industry faces pressure for systematic empirical research into chatbot psychological impacts as evidence mounts that these digital relationships may pose greater risks than previously understood.