Wed. Oct 8th, 2025

NHS Mental Health Demand Surges as NCPS Warns of Unregulated AI Therapy Risks


In 2023–24, 3,790,826 people were in contact with NHS mental health, learning disability and autism services: an increase of nearly 40% since before the pandemic. At the same time, NHS data suggests 16,522 people have now waited over 18 months for mental health treatment. This is more than eight times the number of people waiting that long for elective physical health interventions.

With pressure on services high, many people are turning to digital tools (an estimated 10,000 to 20,000 mental health apps are currently available globally), yet the evidence base for many of them is weak. Several recent research papers outline the risks of AI “therapy” bots, for example, and there is little evidence to support their efficacy, especially over the longer term.

Similarly, a review of large language models in mental health cautioned they should not be considered substitutes for professional support, citing concerns about reliability, ethics, ‘black box’ decision-making, and overreliance.

Many mental health apps also operate without full transparency or safety mechanisms. For example, a systematic review found that more than 55% of apps claiming to offer some kind of therapeutic service referenced an evidence-based framework, but only a small fraction had published efficacy evidence. In practice, people don’t tend to use these apps for long either: one study estimated that only 3% of users are still using a mental health app after 30 days.

Safeguarding the future

The NCPS blueprint sets out six clear principles for safe AI use in mental health: AI interventions must be time-bound, supportive rather than directive, adjunctive to therapy (not a replacement for it), transparent about their limits, respectful of user autonomy, and safeguarded with clear escalation routes to human help.

“People are already using AI to help with their mental health, and we’re seeing the negative effects of that starting to show. These safeguards are about making sure that these apps, chatbots, and services are created thoughtfully and ethically in a way that will genuinely help people,” said Meg Moss, Head of Public Affairs & Advocacy at NCPS. “AI can help with a great many things, but it can never replace the safety, trust, and human connection that only people can provide”.

AI can offer psychoeducation, journaling prompts, or general advice, but it lacks the relational depth (trust, reciprocity, attunement) that forms the basis of effective therapy.

Evidence from therapy

Counselling research consistently shows that the therapeutic relationship (the bond of trust and collaboration between client and therapist) is the strongest predictor of positive outcomes, more so than any specific method or technique.

Recent academic commentaries echo this concern. Stanford University researchers have warned that therapy chatbots may fall short of real therapy and in some cases risk reinforcing stigma or harmful outcomes. A new paper, Technological folie à deux, highlights worrying feedback loops where chatbots and users can enter unhealthy cycles that exacerbate delusional thinking or dependency.

Call to action

“If AI is going to play a role in mental health, it must be based on the robust therapeutic principles that have kept people safe for decades, and it mustn’t be at the cost of those important, human relationships that anchor us to both who we are and who we could be. There’s a lot of money and hype in the AI mental health space, and we should be careful to stay circumspect, especially where mental health is concerned,” Moss said.

There is currently no regulation specific to AI or digital mental health support tools, which means they are not required to adhere to any ethical framework, code of conduct, safeguarding guidelines or similar. The Online Safety Act, which some believe could hold tech companies building AI mental health tools to account, was never designed to deal with these issues: it regulates harmful content on platforms, not the safety of therapeutic conduct. Without a dedicated regulatory framework, people remain unprotected. The only way forward is bespoke regulation and an ethical framework, built on these principles alongside others such as data protection, that treats these tools with the same seriousness as any other form of healthcare intervention.

The NCPS safeguards have already been shared with major organisations and charities, including NHS Talking Therapies. The Society is calling on developers, policymakers, and funders to adopt these principles as a baseline for safe and ethical innovation in digital mental health.
