Mind announces year-long AI mental health review

The review will examine growing concerns about artificial intelligence (AI) in mental health support, citing cases of vulnerable people receiving harmful advice.

Mental health charity Mind has announced a year-long AI and Mental Health Commission aimed at addressing concerns the charity is seeing ‘on the frontline’ of mental health support.

Launched yesterday (23rd February), the review will explore how AI could improve access to information and care. It will also look at the risks when AI is used instead of therapy, crisis support or clinical guidance – a particular concern for people already vulnerable or struggling to get timely help.

News of the commission has come after Mind said it had seen a rising number of people seeking help after following inappropriate, misleading or even dangerous advice from AI platforms. 

Some have formed emotionally dependent or quasi-therapeutic relationships with AI tools not designed or regulated for mental health support. Others have acted on guidance that contradicts established best practice, sometimes with serious consequences. 

Separate research from 2025 shows one in four 13- to 17-year-olds have used an AI chatbot for mental health support in the past year. Teenagers were also more likely to confide in AI if they were on a waiting list for treatment or had been denied a diagnosis than if they were already receiving in-person support. 

The commission will be informed by people with lived experience of mental health problems, clinicians, technologists, ethicists and policymakers. It will hold workshops, review case studies and consult the latest research. 

Findings are set to be published in regular reports, alongside practical recommendations for how AI should be safely developed and deployed. 

Dr Sarah Hughes, chief executive of Mind, said: ‘We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks. 

‘We are already seeing examples of AI tools offering dangerously incorrect guidance on mental health, including advice that could prevent people from seeking treatment, reinforce stigma or discrimination and, in the worst cases, put lives at risk.’

In an Instagram post, Mind addressed the UK government directly, informing it of the charity's latest project. 

‘AI platforms are sometimes giving poor and dangerous advice,’ the post reads. ‘And we’re seeing more people seek help as a result. There is an urgent need for stronger safeguards and shared accountability.

‘The UK has an opportunity to lead here. We need to show that AI and care are not opposing forces, but can work safely together. We would welcome the chance to discuss this further with you.’ 


Image: Adrian Swancar/Unsplash 


Emily Whitehouse
Features Editor at New Start Magazine, Social Care Today and Air Quality News.