Lund University Publications

Artificial intelligence and its ‘slow violence’ to human rights

Teo, Sue Anne LU (2024) In AI and Ethics
Abstract
Human rights concerns about the impacts of artificial intelligence (‘AI’) have revolved around examining how it affects specific rights, such as the right to privacy, non-discrimination and freedom of expression. However, this article argues that the effects go deeper, potentially challenging the foundational assumptions of key concepts and normative justifications of the human rights framework. To unpack this, the article applies the lens of ‘slow violence’, a term borrowed from environmental justice literature, to frame the grinding, gradual, attritional harms of AI to the human rights framework.

The article examines the slow violence of AI towards human rights at three different levels. First, the individual, as the subject of interest and protection within the human rights framework, is increasingly unable to understand or seek accountability for harms arising from the deployment of AI systems. This undermines the key premise of the framework, which was meant to empower the individual to address large power disparities and to call for accountability for such abuses of power. Second, the ‘slow violence’ of AI is also seen in the unravelling of the normative justifications of discrete rights such as the right to privacy, freedom of expression and freedom of thought, upending the reasons and assumptions on which those rights were formulated and formalised in the first place. Finally, the article examines how even wide interpretations of the normative foundation of human rights, namely human dignity, are unable to address the putative new challenges AI poses to the concept. It then considers and offers an outline of critical perspectives that can inform a new model of human rights accountability in the age of AI.
Please use this URL to cite or link to this publication:
author: Teo, Sue Anne
organization:
publishing date: 2024-08
type: Contribution to journal
publication status: epub
subject:
keywords: Human rights, Mänskliga rättigheter
in: AI and Ethics
publisher: Springer
ISSN: 2730-5953
DOI: 10.1007/s43681-024-00547-x
language: English
LU publication?: yes
id: 9d0b810e-d68a-4be5-b0a9-4814aaf55de1
date added to LUP: 2024-08-22 14:41:34
date last changed: 2024-08-23 09:30:15
@article{9d0b810e-d68a-4be5-b0a9-4814aaf55de1,
  abstract     = {{Human rights concerns about the impacts of artificial intelligence (‘AI’) have revolved around examining how it affects specific rights, such as the right to privacy, non-discrimination and freedom of expression. However, this article argues that the effects go deeper, potentially challenging the foundational assumptions of key concepts and normative justifications of the human rights framework. To unpack this, the article applies the lens of ‘slow violence’, a term borrowed from environmental justice literature, to frame the grinding, gradual, attritional harms of AI to the human rights framework.<br/><br/>The article examines the slow violence of AI towards human rights at three different levels. First, the individual, as the subject of interest and protection within the human rights framework, is increasingly unable to understand or seek accountability for harms arising from the deployment of AI systems. This undermines the key premise of the framework, which was meant to empower the individual to address large power disparities and to call for accountability for such abuses of power. Second, the ‘slow violence’ of AI is also seen in the unravelling of the normative justifications of discrete rights such as the right to privacy, freedom of expression and freedom of thought, upending the reasons and assumptions on which those rights were formulated and formalised in the first place. Finally, the article examines how even wide interpretations of the normative foundation of human rights, namely human dignity, are unable to address the putative new challenges AI poses to the concept. It then considers and offers an outline of critical perspectives that can inform a new model of human rights accountability in the age of AI.}},
  author       = {{Teo, Sue Anne}},
  issn         = {{2730-5953}},
  keywords     = {{Human rights; Mänskliga rättigheter}},
  language     = {{eng}},
  month        = {{08}},
  publisher    = {{Springer}},
  journal      = {{AI and Ethics}},
  title        = {{Artificial intelligence and its ‘slow violence’ to human rights}},
  url          = {{http://dx.doi.org/10.1007/s43681-024-00547-x}},
  doi          = {{10.1007/s43681-024-00547-x}},
  year         = {{2024}},
}