The Unbearable Likeness of Being: How Artificial Intelligence Challenges the Social Ontology of International Human Rights Law
(2025) In The Journal of Cross-Disciplinary Research in Computational Law
- abstract
- This paper examines how the social ontology that underpins the international human rights framework is being challenged by the affordances of AI/ML systems. To set the stage, the paper adopts a socially situated understanding of human rights – acknowledging the socially embedded nature of individuals within societies. Drawing upon Gould’s theory on the social ontology of human rights, the individual is not only socially embedded; it is this social situatedness that enables the exercise of positive agency (including moral and political agency). The role of human rights is then to preserve the conditions that enable the exercise of such capacities. While the ubiquity of computational technologies such as AI systems may prima facie seem to embrace and operationalise sociality, the paper highlights three pressure points that, it argues, lead towards the structural atomisation of individuals in ways that are in tension with the normative aims of international human rights law. Data points that group, infer and construct individuals through their likeness instrumentally atomise individuals as means to an end through AI/ML systems. Further, the efficiency-driven framing of AI/ML, reliant on computational tractability, means that individuals risk instrumentalisation through optimisation. Finally, the AI/ML-mediated shaping of epistemic and enabling conditions can lead to contextual atomisation – threatening the antecedent conditions for the socially situated exercise of moral agency and, with it, human rights.
In diagnosing these structural challenges, the paper provides a deeper mapping of the problem space to help AI/ML and human rights scholars and practitioners better account for the social ontology of human rights in our computational environments.
Please use this URL to cite or link to this publication:
https://lup.lub.lu.se/record/6b2c0fbc-6fa9-4c84-8fa8-d33532878ecc
- author
- Teo, Sue Anne
LU
- organization
- publishing date
- 2025-03-25
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- Social ontology, Human rights, International human rights law, Artificial intelligence, Machine learning, AI, Mänskliga rättigheter
- in
- The Journal of Cross-Disciplinary Research in Computational Law
- ISSN
- 2736-4321
- language
- English
- LU publication?
- yes
- additional info
- Journal web site: https://journalcrcl.org/crcl
- id
- 6b2c0fbc-6fa9-4c84-8fa8-d33532878ecc
- alternative location
- https://journalcrcl.org/crcl/article/view/35
- date added to LUP
- 2023-09-07 10:27:00
- date last changed
- 2025-04-04 14:48:15
@article{6b2c0fbc-6fa9-4c84-8fa8-d33532878ecc,
  abstract = {{This paper examines how the social ontology that underpins the international human rights framework is being challenged by the affordances of AI/ML systems. To set the stage, the paper adopts a socially situated understanding of human rights – acknowledging the socially embedded nature of individuals within societies. Drawing upon Gould’s theory on the social ontology of human rights, the individual is not only socially embedded; it is this social situatedness that enables the exercise of positive agency (including moral and political agency). The role of human rights is then to preserve the conditions that enable the exercise of such capacities. While the ubiquity of computational technologies such as AI systems may prima facie seem to embrace and operationalise sociality, the paper highlights three pressure points that, it argues, lead towards the structural atomisation of individuals in ways that are in tension with the normative aims of international human rights law. Data points that group, infer and construct individuals through their likeness instrumentally atomise individuals as means to an end through AI/ML systems. Further, the efficiency-driven framing of AI/ML, reliant on computational tractability, means that individuals risk instrumentalisation through optimisation. Finally, the AI/ML-mediated shaping of epistemic and enabling conditions can lead to contextual atomisation – threatening the antecedent conditions for the socially situated exercise of moral agency and, with it, human rights. In diagnosing these structural challenges, the paper provides a deeper mapping of the problem space to help AI/ML and human rights scholars and practitioners better account for the social ontology of human rights in our computational environments.}},
  author = {{Teo, Sue Anne}},
  issn = {{2736-4321}},
  keywords = {{Social ontology; Human rights; International human rights law; Artificial intelligence; Machine learning; AI; Mänskliga rättigheter}},
  language = {{eng}},
  month = {{03}},
  series = {{The Journal of Cross-Disciplinary Research in Computational Law}},
  title = {{The Unbearable Likeness of Being: How Artificial Intelligence Challenges the Social Ontology of International Human Rights Law}},
  url = {{https://lup.lub.lu.se/search/files/212269926/Unbearable_likeness_final_March_2025.pdf}},
  year = {{2025}},
}