Lund University Publications

The Role of AI in Mental Health Applications and Liability

Müllerová, Petra (2023). In: YSEC Yearbook of Socio-Economic Constitutions 2023, pp. 275–310
Abstract
The COVID-19 pandemic has affected the entire area of health care, including care provided to patients with mental health problems. Because of the stress the pandemic caused, the number of patients experiencing mental health problems, especially depression or anxiety, has increased. Even well before the pandemic, Europe struggled with a shortage of mental health care, driven in particular by long waiting times. The problem appears to have been solved by the plethora of mental health applications freely available on the market. Given how easily users can access these applications, I decided to scrutinise the safety of using AI in these health apps, with a particular focus on chatbots. I examined whether existing European legislation can protect users from possible harm to their health and require these mental health applications to be certified as medical devices.

After analysing the Product Liability Directive and the upcoming legislation on liability associated with AI, I must conclude that there is insufficient transparency and protection for users of these applications. Based on experience from the user’s perspective, I have identified the lack of (1) an option to schedule an appointment with a healthcare professional, (2) human oversight, and (3) transparency regarding the type of AI used. Because of the ‘black box problem’, a user who has been harmed will likely be unable to obtain compensation, given the difficulty of proving causality between the defect and the damage.

author: Müllerová, Petra
organization:
publishing date: 2023-11
type: Chapter in Book/Report/Conference proceeding
publication status: published
subject:
host publication: YSEC Yearbook of Socio-Economic Constitutions 2023
series title: YSEC Yearbook of Socio-Economic Constitutions
volume: 2023
pages: 35 pages
external identifiers: scopus:86000504200
ISSN: 2662-7132, 2662-7124
DOI: 10.1007/16495_2023_60
language: English
LU publication?: yes
id: c09eee0f-57dd-413e-adfa-30db57673a2c
date added to LUP: 2024-03-18 11:42:28
date last changed: 2025-07-18 08:19:58

@inbook{c09eee0f-57dd-413e-adfa-30db57673a2c,
  abstract     = {{The COVID-19 pandemic has affected the entire area of health care, including care provided to patients with mental health problems. Because of the stress the pandemic caused, the number of patients experiencing mental health problems, especially depression or anxiety, has increased. Even well before the pandemic, Europe struggled with a shortage of mental health care, driven in particular by long waiting times. The problem appears to have been solved by the plethora of mental health applications freely available on the market. Given how easily users can access these applications, I decided to scrutinise the safety of using AI in these health apps, with a particular focus on chatbots. I examined whether existing European legislation can protect users from possible harm to their health and require these mental health applications to be certified as medical devices. After analysing the Product Liability Directive and the upcoming legislation on liability associated with AI, I must conclude that there is insufficient transparency and protection for users of these applications. Based on experience from the user’s perspective, I have identified the lack of (1) an option to schedule an appointment with a healthcare professional, (2) human oversight, and (3) transparency regarding the type of AI used. Because of the ‘black box problem’, a user who has been harmed will likely be unable to obtain compensation, given the difficulty of proving causality between the defect and the damage.}},
  author       = {{Müllerová, Petra}},
  booktitle    = {{YSEC Yearbook of Socio-Economic Constitutions 2023}},
  issn         = {{2662-7132}},
  language     = {{eng}},
  month        = {{11}},
  pages        = {{275--310}},
  series       = {{YSEC Yearbook of Socio-Economic Constitutions}},
  title        = {{The Role of AI in Mental Health Applications and Liability}},
  url          = {{http://dx.doi.org/10.1007/16495_2023_60}},
  doi          = {{10.1007/16495_2023_60}},
  volume       = {{2023}},
  year         = {{2023}},
}