LUP Student Papers

LUND UNIVERSITY LIBRARIES

Who is responsible if an AI system gives a wrong diagnosis? Analysis of the EU liability law framework of medical AI

Rietzler, Magdalena LU (2022) JAEM03 20221
Department of Law
Faculty of Law
Abstract
AI systems are part of our daily lives, not just science fiction. In the healthcare sector, medical AI systems are used to monitor patients, compare X-rays to detect diseases, or even make a diagnosis. These systems help healthcare providers make the work of doctors and nurses more efficient and ensure the best service for their patients. Alongside these benefits, however, such new technologies bring unprecedented challenges. The media report, for example, on cyberattacks and data leaks that can lead to data theft. But what happens when not only data is stolen, but a medical AI system gives a wrong diagnosis that leads to the wrong treatment? And who is liable if an AI system discriminates and prefers white over black patients? These questions have been discussed in the EU and its member states for years, and the first guidelines and legal frameworks have been presented to tackle the issues raised by AI. This thesis analyses the current EU legal framework, together with German legislation as an example of national law, to determine whether the existing liability framework is sufficient to address these new issues. Whereas fundamental rights and the GDPR provide effective safeguards against liability issues arising from AI systems, the Product Liability Directive does not cover these systems sufficiently. The European Commission is aware of this, however, and has already conducted a public consultation on a revision of the directive. This work further examines whether the AI Act and the European Parliament's resolution on civil liability for AI can close the gaps. Both proposals follow a risk-based approach; the AI Act does not contain liability rules, but it introduces obligations and requirements intended to make high-risk AI systems safe. This framework is a good starting point for tackling the challenges that arise from the use of AI systems.
Please use this URL to cite or link to this publication:
author: Rietzler, Magdalena LU
supervisor:
organization: Department of Law, Faculty of Law
course: JAEM03 20221
year: 2022
type: H2 - Master's Degree (Two Years)
subject:
language: English
id: 9096342
date added to LUP: 2022-08-25 10:11:17
date last changed: 2022-08-25 10:11:17
@misc{9096342,
  abstract     = {{AI systems are part of our daily lives, not just science fiction. In the healthcare sector, medical AI systems are used to monitor patients, compare X-rays to detect diseases, or even make a diagnosis. These systems help healthcare providers make the work of doctors and nurses more efficient and ensure the best service for their patients. Alongside these benefits, however, such new technologies bring unprecedented challenges. The media report, for example, on cyberattacks and data leaks that can lead to data theft. But what happens when not only data is stolen, but a medical AI system gives a wrong diagnosis that leads to the wrong treatment? And who is liable if an AI system discriminates and prefers white over black patients? These questions have been discussed in the EU and its member states for years, and the first guidelines and legal frameworks have been presented to tackle the issues raised by AI. This thesis analyses the current EU legal framework, together with German legislation as an example of national law, to determine whether the existing liability framework is sufficient to address these new issues. Whereas fundamental rights and the GDPR provide effective safeguards against liability issues arising from AI systems, the Product Liability Directive does not cover these systems sufficiently. The European Commission is aware of this, however, and has already conducted a public consultation on a revision of the directive. This work further examines whether the AI Act and the European Parliament's resolution on civil liability for AI can close the gaps. Both proposals follow a risk-based approach; the AI Act does not contain liability rules, but it introduces obligations and requirements intended to make high-risk AI systems safe. This framework is a good starting point for tackling the challenges that arise from the use of AI systems.}},
  author       = {{Rietzler, Magdalena}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Who is responsible if an AI system gives a wrong diagnosis? Analysis of the EU liability law framework of medical AI}},
  year         = {{2022}},
}