Why are millions turning to general purpose AI for mental health? As Headspace’s chief clinical officer, I see the answer every day
Today, more than half (52%) of young adults in the U.S. say they would feel comfortable discussing their mental health with an AI chatbot. At the same time, concerns about AI-fueled psychosis are flooding the internet, paired with alarming headlines and heartbreaking accounts of people spiraling after emotionally charged conversations with general purpose chatbots like ChatGPT.
Clinically, psychosis isn’t one diagnosis. It’s a cluster of symptoms like delusions, hallucinations, or disorganized thinking that can show up across many conditions. Delusions, specifically, are fixed false beliefs. When AI responds with agreement instead of grounding, it can escalate these types of symptoms rather than ease them.
It’s tempting to dismiss these incidents as outliers. But zoom out and a larger question comes into focus: What happens when tools used by hundreds of millions of people for emotional support are designed to maximize engagement, not to protect wellbeing? What we’re seeing is a pattern: people in vulnerable states turning to AI for comfort and coming away confused, distressed, or unmoored from reality. We’ve seen this pattern before.
From Feeds to Conversations
Social media began with the promise of connection and belonging, but it didn’t take long before we saw the fallout: spikes in anxiety, depression, loneliness, and body image issues, especially among young people. Not because platforms like Instagram and Facebook were malicious, but because they were designed to be addictive and to keep users engaged. Now, AI is following that same trajectory with even greater intimacy. Social media gave us feeds. Generative AI gives us conversation.
General purpose chatbots don’t simply show us content. They mirror our thoughts, mimic empathy, and respond immediately. This responsiveness can feel affirming, but it can also validate distorted beliefs. Picture walking into a dark basement. Most of us get a brief chill and shake it off. For someone already on edge, that moment can spiral. Now imagine turning to a chatbot and hearing: “Maybe there is something down there. Want to look together?” That’s not support; that’s escalation. General purpose chatbots weren’t trained to be clinically sound when the stakes are high, and they don’t know when to stop.
The Engagement Trap
Both social media apps and general purpose chatbots are built on the same engine: engagement. The more time you spend in conversation, the better the metrics look. When engagement is the north star, safety and wellbeing take a backseat. With online newsfeeds, that meant algorithms prioritizing anger-provoking posts, or content that drives comparisons of beauty, wealth, or success. With chatbots, it means endless dialogue that can unintentionally reinforce paranoia, delusions, or despair.
Just as we saw with the rise of social media, creating industry-wide guardrails for AI is a complex process. Over the past 10 years, social media giants tried to manage young people’s use of apps like Instagram and Facebook by introducing parental controls, only to see the rise of “finstas,” fake secondary accounts used to bypass oversight. We’ll likely see a similar workaround with ChatGPT: many young people will simply create accounts disconnected from their parents, giving them private, unsupervised access to powerful tools. This underscores a key lesson from the social media era: controls alone aren’t enough if they don’t align with how young people actually engage with technology. As OpenAI introduces its proposed parental controls this month, we must acknowledge that privacy-seeking behaviors are developmentally typical and design systems that build trust and transparency with youth themselves, not just their guardians.
The open nature of the internet compounds the problem. Once an open-weight model is released, it circulates indefinitely, with safeguards stripped away in a few clicks. Meanwhile, adoption is outpacing oversight. Millions of people are already relying on these tools, while lawmakers and regulators are still debating basic standards and protections. This gap between innovation and accountability is where the greatest risks lie.
Why People Turn to AI Anyway
It’s important to recognize why millions are turning to AI in the first place: in part, because our current mental health system isn’t meeting their needs. Therapy remains the default, and it’s too often expensive, hard to access, or buried in stigma. AI, on the other hand, is instant. It’s nonjudgmental. It feels private, even when it’s not. That accessibility is part of the opportunity, but also part of the danger.
To meet this demand responsibly, we need widely available, purpose-built AI for mental health: tools designed by clinicians, grounded in evidence, and transparent about their limits. That means, for example, plain-language disclosures about what a tool is for and what it’s not. Is it for skill-building? For stress management? Or is it attempting to appear therapeutic?
Responsible AI for mental health has to be more than helpful; it needs to be safe, with clear usage boundaries, clinically informed scripting, and built-in protocols for escalation, not just endless empathy on demand.
Setting a Higher Standard
We’ve already lived through one digital experiment without clear standards. We know the cost of chasing attention over health. With AI, the standard has to be different.
AI holds real promise in supporting everyday mental health needs: helping people manage stress, ease anxiety, process emotions, and prepare for difficult conversations. But its potential will only be realized if industry leaders, policymakers, and clinicians work together to establish guardrails from the start. Untreated mental health issues cost the U.S. an estimated $282 billion annually, while burnout costs employers thousands of dollars per employee each year. By prioritizing accountability, transparency, and user wellbeing, we have the opportunity not just to avoid repeating the mistakes of social media, but to build AI tools that strengthen resilience, reduce economic strain, and allow people to live healthier, more connected lives.