Rough Around the Edges or a Fundamental Disconnect? : (Re-)examining the theory and utility of human rights through the six systemic distortions afforded by artificial intelligence systems
(2023)
- Abstract
- The ubiquitous deployment of artificial intelligence (AI) technologies affects an array of human rights, raising concerns around issues of discrimination, privacy, freedom of expression, information and data protection. However, just as regulators and policy-makers call for technological design to respect existing human rights, others debate whether human rights are robust enough to counter new challenges posed by AI. This contribution takes a legal-philosophical approach, engaging the intersections of human rights law and theory, philosophy of technology, and law and technology in order to examine whether the theory and practice of the human rights law framework can address the systemic distortions afforded by AI systems. It identifies six systemic distortions, namely intangibility, ephemerality, modulation, the comparison deficit, the utilitarian logic, and, finally, the objectification of dividualised identities. These map onto (and challenge), respectively, the implicit observability, categorical legal groups, control, causality and foreseeability, the deontological motivation of human rights law, and the subjective sociality of individuals. These systemic distortions pose both a procedural challenge for individuals seeking to mount human rights claims and a normative challenge to the formative aims of human rights law.
The contribution finds that these challenges are non-trivial for the human rights law framework, impacting its practical sustainability and relevance in the age of AI. However, a silver lining can be found within the normative foundation of the human rights framework itself, through the reinterpretation of human dignity as human vulnerability.
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/ee9f2efc-7696-4e5e-a711-9b6a341e35ae
- author
- Teo, Sue Anne (LU)
- organization
- publishing date
- 2023
- type
- Chapter in Book/Report/Conference proceeding
- publication status
- published
- subject
- keywords
- Human rights, Mänskliga rättigheter
- host publication
- European Yearbook on Human Rights 2023
- publisher
- Intersentia
- ISBN
- 978-1839704161
- language
- English
- LU publication?
- yes
- id
- ee9f2efc-7696-4e5e-a711-9b6a341e35ae
- date added to LUP
- 2023-09-07 10:10:47
- date last changed
- 2025-04-04 14:16:12
@inbook{ee9f2efc-7696-4e5e-a711-9b6a341e35ae,
  abstract  = {{The ubiquitous deployment of artificial intelligence (AI) technologies affects an array of human rights, raising concerns around issues of discrimination, privacy, freedom of expression, information and data protection. However, just as regulators and policy-makers call for technological design to respect existing human rights, others debate whether human rights are robust enough to counter new challenges posed by AI. This contribution takes a legal-philosophical approach, engaging the intersections of human rights law and theory, philosophy of technology, and law and technology in order to examine whether the theory and practice of the human rights law framework can address the systemic distortions afforded by AI systems. It identifies six systemic distortions, namely intangibility, ephemerality, modulation, the comparison deficit, the utilitarian logic, and, finally, the objectification of dividualised identities. These map onto (and challenge), respectively, the implicit observability, categorical legal groups, control, causality and foreseeability, the deontological motivation of human rights law, and the subjective sociality of individuals. These systemic distortions pose both a procedural challenge for individuals seeking to mount human rights claims and a normative challenge to the formative aims of human rights law.
The contribution finds that these challenges are non-trivial for the human rights law framework, impacting its practical sustainability and relevance in the age of AI. However, a silver lining can be found within the normative foundation of the human rights framework itself, through the reinterpretation of human dignity as human vulnerability.}},
  author    = {{Teo, Sue Anne}},
  booktitle = {{European Yearbook on Human Rights 2023}},
  isbn      = {{978-1839704161}},
  keywords  = {{Human rights; Mänskliga rättigheter}},
  language  = {{eng}},
  publisher = {{Intersentia}},
  title     = {{Rough Around the Edges or a Fundamental Disconnect? : (Re-)examining the theory and utility of human rights through the six systemic distortions afforded by artificial intelligence systems}},
  year      = {{2023}},
}