
LUP Student Papers

LUND UNIVERSITY LIBRARIES

AI-based Automated Decision Making: An investigative study on how it impacts the rule of law and the case for regulatory safeguards

Stevens, Sean LU (2022) SOLM02 20221
Department of Sociology of Law
Abstract
The development and expansion of artificial intelligence have significant potential to benefit humanity; however, the risks posed by AI-related tools have also become a growing concern over the past decade. From a human rights standpoint, AI-related bias, discriminatory and data-protection practices, and actual or potential infringements of fundamental rights are among the core concerns surrounding this evolving technology.

This research inquiry primarily focuses on the ongoing discourse around AI-based digital surveillance and predictive policing, and assesses the prospective contributions of automated decision-making. The study critically reviews and discusses the impact AI-based technology has on policing, law enforcement and the rule of law in a democratic society, and how it could influence broader aspects of social justice. Moreover, it investigates and critiques the ‘biases’ alleged to exist within AI-based systems and the deployment practices that have affected certain communities more than others. The study focuses primarily on Europe and the U.S., with potential broader ramifications for other countries.

Accordingly, the research examines the need for enhanced legal safeguards, i.e., regulatory intervention, which has been a long-standing public demand. The investigation was carried out through a discourse analysis of European and American cases on this topic, supplemented by content analysis of EU regulatory and legislative provisions and supported by a qualitative mixed-methods approach, including interviews with industry practitioners and affected families. This paper complements current research on the consequences of AI practices involving automated decision-making and contributes to challenging current AI-industry policies and practices concerning transparency and accountability.

It is therefore of utmost importance to continually question the EU’s powerful position from an accountability standpoint. This includes the need for its attention to, and intervention towards, certain ‘private actors’ (including large-scale multinational tech giants) and their relationship with state agencies. This is particularly important in the current context, where many public services and functions are increasingly outsourced to and carried out by these same ‘private actors’ using AI tools that remain largely self-regulated.
author: Stevens, Sean LU
course: SOLM02 20221
year: 2022
type: H2 - Master's Degree (Two Years)
keywords: Artificial intelligence, automating governance, automated decision making, rule of law, transparency, accountability, profiling, predictive policing, surveillance capitalism
language: English
id: 9104598
date added to LUP: 2023-01-18 08:40:35
date last changed: 2023-01-18 08:40:35
@misc{9104598,
  abstract     = {{The development and expansion of artificial intelligence have significant potential to benefit humanity; however, the risks posed by AI-related tools have also become a growing concern over the past decade. From a human rights standpoint, AI-related bias, discriminatory and data-protection practices, and actual or potential infringements of fundamental rights are among the core concerns surrounding this evolving technology.

This research inquiry primarily focuses on the ongoing discourse around AI-based digital surveillance and predictive policing, and assesses the prospective contributions of automated decision-making. The study critically reviews and discusses the impact AI-based technology has on policing, law enforcement and the rule of law in a democratic society, and how it could influence broader aspects of social justice. Moreover, it investigates and critiques the ‘biases’ alleged to exist within AI-based systems and the deployment practices that have affected certain communities more than others. The study focuses primarily on Europe and the U.S., with potential broader ramifications for other countries.

Accordingly, the research examines the need for enhanced legal safeguards, i.e., regulatory intervention, which has been a long-standing public demand. The investigation was carried out through a discourse analysis of European and American cases on this topic, supplemented by content analysis of EU regulatory and legislative provisions and supported by a qualitative mixed-methods approach, including interviews with industry practitioners and affected families. This paper complements current research on the consequences of AI practices involving automated decision-making and contributes to challenging current AI-industry policies and practices concerning transparency and accountability.

It is therefore of utmost importance to continually question the EU’s powerful position from an accountability standpoint. This includes the need for its attention to, and intervention towards, certain ‘private actors’ (including large-scale multinational tech giants) and their relationship with state agencies. This is particularly important in the current context, where many public services and functions are increasingly outsourced to and carried out by these same ‘private actors’ using AI tools that remain largely self-regulated.}},
  author       = {{Stevens, Sean}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{AI-based Automated Decision Making: An investigative study on how it impacts the rule of law and the case for regulatory safeguards}},
  year         = {{2022}},
}