LUP Student Papers

LUND UNIVERSITY LIBRARIES

Skadeståndsrätt i den digitala eran - Ansvarsproblematiken för artificiell intelligens

Dib, Danny LU (2023) LAGF03 20231
Department of Law
Faculty of Law
Abstract (Swedish)
AI-systemens snabba utveckling har på senare år fört teknologin till nya höjder. Framväxten av autonoma fordon och avancerade språkmodeller som ChatGPT har ställt diskussionen kring AI-systemens rättsliga problematik på sin spets.

Bland annat har skadeståndsrättens reparativa syfte blivit svårare att uppnå till följd av att skador orsakade av AI-systemen komplicerar preciseringen av ansvarande part. För att bemöta detta har EU föreslagit två nya direktiv som behandlar skadeståndsansvar i relation till AI-system.

Syftet med denna uppsats är att i första hand undersöka hur skador orsakade av AI-system kan resultera i en ansvarsproblematik, och i andra hand utreda huruvida direktiven kan betraktas som adekvata lösningar på den rättsliga problematiken.

Slutsatsen är att komplexiteten i AI-systemens tekniska funktionssätt kan leda till utdata som inte enkelt kan härledas till en specifik intern process. Detta så kallade ”black box-problem” leder till att dessa system karaktäriseras av brist på transparens och oförutsägbarhet. Eftersom AI-system ofta bygger på beståndsdelar från olika leverantörer, kan detta tillsammans med systemens inneboende egenskaper komplicera preciseringen av ansvarig part när skador inträffat.

Direktivens föreslagna regleringar framstår som ett steg i rätt riktning mot att lösa ansvarsproblematiken, men det ifrågasätts huruvida regleringarna i praktiken uppnår en tillräckligt balanserad risk- och kostnadsfördelning.
Abstract
In recent years, the rapid development of AI systems has elevated the technology to new heights. The emergence of autonomous vehicles and advanced language models like ChatGPT has brought the discussion around the legal issues of AI systems to the forefront.

The primary aim of tort law is to compensate the injured parties and impose liability on the parties responsible. Due to the complex nature of AI systems, it has become increasingly difficult to identify the liable party. To address this, the European Union has proposed two new directives that take aim at the liability issues caused by AI systems.

Consequently, this paper first aims to examine why it is difficult to identify a liable party when damage is caused by AI systems. Secondly, the objective is to investigate whether the two directives adequately solve these liability issues.

The paper concludes that due to the complexity of how AI systems function, the resulting outputs cannot easily be traced back to a specific internal process. This so-called “black box problem” results in a lack of transparency and unpredictability in these systems. These inherent characteristics can, in combination with the fact that AI systems often incorporate components from different suppliers, complicate the identification of the liable party when damages occur.

The regulations proposed by the two directives appear to be a step in the right direction toward resolving these issues. However, there are concerns about whether these regulations, in practice, achieve a sufficiently balanced distribution of risks and costs.
author: Dib, Danny LU
supervisor:
organization:
course: LAGF03 20231
year: 2023
type: M2 - Bachelor Degree
subject:
keywords: EU-rätt, skadeståndsrätt, ai, artificiell intelligens, ansvarsproblematik
language: Swedish
id: 9116413
date added to LUP: 2023-06-29 09:47:02
date last changed: 2023-06-29 09:47:02
@misc{9116413,
  abstract     = {{In recent years, the rapid development of AI systems has elevated the technology to new heights. The emergence of autonomous vehicles and advanced language models like ChatGPT has brought the discussion around the legal issues of AI systems to the forefront.

The primary aim of tort law is to compensate the injured parties and impose liability on the parties responsible. Due to the complex nature of AI systems, it has become increasingly difficult to identify the liable party. To address this, the European Union has proposed two new directives that take aim at the liability issues caused by AI systems.

Consequently, this paper first aims to examine why it is difficult to identify a liable party when damage is caused by AI systems. Secondly, the objective is to investigate whether the two directives adequately solve these liability issues.

The paper concludes that due to the complexity of how AI systems function, the resulting outputs cannot easily be traced back to a specific internal process. This so-called “black box problem” results in a lack of transparency and unpredictability in these systems. These inherent characteristics can, in combination with the fact that AI systems often incorporate components from different suppliers, complicate the identification of the liable party when damages occur.

The regulations proposed by the two directives appear to be a step in the right direction toward resolving these issues. However, there are concerns about whether these regulations, in practice, achieve a sufficiently balanced distribution of risks and costs.}},
  author       = {{Dib, Danny}},
  language     = {{swe}},
  note         = {{Student Paper}},
  title        = {{Skadeståndsrätt i den digitala eran - Ansvarsproblematiken för artificiell intelligens}},
  year         = {{2023}},
}