I want to be careful with this post. Mental health is a space where tech enthusiasm can cause real harm, and where the usual "move fast and break things" mindset would be genuinely dangerous. People's wellbeing is not a feature flag.
With that said: there are places where AI is helping, meaningfully. And there are practitioners thinking hard about how to do this ethically. So let me share what I've seen, carefully.
What's helping, used well
Access. The most meaningful impact of AI in mental health so far is that it's lowered the barrier to first-line support. Someone with mild anxiety, who would never make a GP appointment and certainly wouldn't call a helpline, might talk to a well-designed AI companion app. Many of them then take the second step to professional support. The AI isn't replacing care; it's getting more people onto the ladder.
Between-session support. Therapy happens for an hour a week, but mental health is twenty-four hours a day. Modern CBT-informed chatbots can help someone practise skills between sessions — noticing thought patterns, running through a grounding exercise, journaling after a hard moment. Therapists I've talked to increasingly see these as useful complements to their work.
Screening and triage. AI models trained on speech patterns, linguistic cues, or sleep and activity data can spot early signs of depression, PTSD, or cognitive decline — often before the person themselves notices. The best deployments I've seen are in primary-care contexts, where the AI flags a patient for a gentle conversation with a human clinician.
Crisis-line support for staff. Human crisis-line volunteers burn out. AI tools that assist volunteers — suggesting resources, flagging escalation indicators in real time, supporting with language on the hardest calls — are keeping those services running. The volunteers remain in the chair; the AI helps them stay there.
Therapist training. Some training programmes now use AI-driven simulated clients for therapists-in-training to practise on. You can't practise on real people early in your career without supervision; the simulation provides a safe, varied, low-stakes alternative.
What I'm worried about
"AI therapist" products that aren't regulated as health tools. A chatbot that markets itself as a substitute for therapy, without any clinical supervision or regulatory oversight, is a harm waiting to happen. Some of these products have already caused measurable damage.
Crisis response going wrong. An AI system that handles a mention of suicide badly — either dismissing it, or, in one horrible case I know of, encouraging it — can be catastrophic. Any AI product that might be used in a crisis moment needs clinical involvement from design through deployment, not as an afterthought.
Data privacy. Mental health data is some of the most sensitive data there is. I've seen apps with privacy policies that allow selling aggregated data to advertisers. This is not acceptable, full stop. If you're a user, check. If you're a builder, don't.
Over-confident diagnosis. Models that claim to diagnose mental health conditions from voice recordings, selfies, or social media posts are deeply problematic. Even when the underlying signal is real, the deployment context rarely respects the complexity of a diagnostic conversation. This is a place where "AI-assisted" is the right framing, never "AI-driven."
The loneliness trap. There's a real, growing phenomenon of people forming attachments to AI companions in ways that substitute for human connection. For some lonely people, this seems to be net positive. For others, it's deepening isolation. We don't yet fully understand the long-term effects, and there are reasons to be cautious.
A test I apply when looking at mental-health AI products
A few questions I ask:
- Is a clinician involved in the product team? Not as an adviser. As a decision-maker.
- What happens when the user mentions suicide or self-harm? Watch the actual response. Is there a clear, human hand-off path?
- How is the training data handled? And how is the user's ongoing data handled?
- What's the evidence base? Has the product been evaluated in a clinical trial, or at least against an established measure?
- What's the regulatory posture? In the UK, is it registered with the MHRA? In the US, with the FDA? Or is it marketed as "wellness" to dodge the question?
If you can't get clear answers to these, that's a strong signal.
If you're building in this space
- Find a clinician and give them real power. Not an advisory board. Someone on the team whose "no" actually means no.
- Design for hand-off, not dependence. Your goal is to get users into appropriate human care, not to maximise engagement with your product.
- Publish your safety evaluations. The field benefits from transparency.
- Make crisis resources one tap away, always. Visible. Local. Human.
A quiet hope
I know a clinical psychologist who's been working with one of the better-designed AI companion apps as a complement to her own practice. She's told me that she's seeing patients work through things between sessions that used to wait a week. Not revolutionary things — the small work of noticing a pattern, practising a coping strategy, writing something down.
Her patients are getting better faster. Her practice is serving more people. That's a version of this technology I can wholeheartedly get behind.
If you're working on something in this space and would like a careful sounding board, we'd love to help you think. This is one we take seriously.
If you're struggling right now: please reach out to a mental health professional or a local crisis service. In the UK, the Samaritans (116 123) are available around the clock. In the US, 988 reaches the Suicide & Crisis Lifeline. You deserve human care.