
AI and Mental Healthcare – ethical and regulatory considerations

This POSTnote summarises the ethical implications and regulatory considerations for deploying Artificial Intelligence (AI) in mental healthcare.


Overview

In recent years the number of AI tools available for mental healthcare and wellbeing purposes has increased. This builds on a burgeoning digital health sector, in which more than 20,000 wellbeing apps are reportedly available through app stores. These apps are distinct both from AI tools purpose-built for NHS use and from general companion chatbot apps, which were never intended for mental health purposes but are sometimes used in that way. All of the cases of severe harm identified through this research arose from unintended uses of general companion chatbot apps. However, there are ethical considerations around the use of all AI tools in mental healthcare.

Public sector responses are underway to improve data availability and to support better evidence generation and deployment. Multiple government agencies, in the UK and globally, are also collaborating to address the ethical challenges. This builds on considerable existing regulation and guidance (examples are outlined in the POSTnote).

For more on tools currently being trialled, the opportunities they offer, and considerations for delivery, see PN737.

Key points  

Trials and use of AI tools for mental healthcare and wellbeing purposes are widespread. Although they offer many opportunities, there are ethical and regulatory concerns about their use. Regulatory responses are underway in the UK and globally. 

Ethical concerns include the potential for harm to the public, the perpetuation of bias, data protection and privacy issues, and questions around transparency, accountability, and liability.

There are mixed views on inclusion and exclusion: AI and other digital tools have the potential to exclude some service users whilst increasing accessibility for others.

Contributors and the evidence reviewed suggested that the quality of the evidence base behind these tools needs to improve. Addressing the quality and availability of data is seen as necessary to support both this improvement and effective deployment.

Recently developed AI technologies (particularly Generative AI) function differently to previous technologies and therefore present novel ethical and regulatory challenges. 

Acknowledgements 

POSTnotes are based on literature reviews and interviews with a range of stakeholders and are externally peer reviewed. POST would like to thank interviewees and peer reviewers for kindly giving up their time during the preparation of this briefing, including: 

Members of the POST board*

Aynsley Bernard, Kooth 

Dr Graham Blackman, University of Oxford 

Professor Adriane Chapman, The Governance in AI Research Group (GAIRG)* 

Claudia Corradi, The Nuffield Council on Bioethics 

Dr David Crepaz-Keay, the Mental Health Foundation* 

Fiona Dawson, Mayden 

Zoe Devereux, University of Birmingham* 

Dr Piers Gooding, La Trobe University 

Dr Caroline Green, University of Oxford 

Lara Groves, Ada Lovelace Institute 

James Heard, The Governance in AI Research Group (GAIRG)* 

Dr Gareth Hopkin, Science Policy and Research Programme Team, National Institute for Health and Care Excellence (NICE)* 

Dr Becky Inkster, Cambridge University 

Dr Grace Jacobs, King's College London*

Lauren Jerome, Queen Mary University of London 

Dr Indra Joshi, Trustee for Lift Schools 

Dr Andrey Kormilitzin, University of Oxford 

Associate Professor Akshi Kumar, Goldsmiths, University of London 

Professor Agata Lapedriza, Northeastern University; Universitat Oberta de Catalunya 

Dr Paris Alexandros Lalousis, King's College London*

Dr Sophia McCully, The Nuffield Council on Bioethics 

Dr Rafael Mestre, Southampton University* 

Dr Thomas Mitchell, The Governance in AI Research Group (GAIRG)* 

Dr Max Rollwage, Limbic* 

Dr Annika Marie Schoene, Northeastern University 

Julia Smakman, Ada Lovelace Institute*

John Tench, Wysa 

Associate Professor Stuart Middleton, Southampton University 

Alli Smith, Office for Life Sciences 

Mona Stylianou, Everyturn Mental Health* 

Dr James Thornton, The Governance in AI Research Group (GAIRG)* 

Dr Pauline Whelan, CareLoop* 

Dr Gwydion Williams, Wellcome Trust* 

Dr James Woollard, Oxleas NHS Foundation Trust, NHS England* 

Andy Wright, Everyturn Mental Health 

Emeritus Professor Jeremy Wyatt, The Governance in AI Research Group (GAIRG)* 

Information Commissioner's Office*

Members of the Software Team, Healthcare Quality and Access Group at the Medicines and Healthcare products Regulatory Agency (MHRA)* 

MHRA AI Airlock programme team 

The Joint Digital Policy Unit (a joint unit between the Transformation Directorate in NHS England and the Department of Health and Social Care (DHSC))


Channel website: https://www.parliament.uk/post

Original article link: https://post.parliament.uk/research-briefings/post-pn-0738/


The Parliamentary Office of Science and Technology (POST) is Parliament’s in-house source of scientific advice.
