
Lund University Publications


The Socio-Legal Relevance of Artificial Intelligence

Larsson, Stefan LU (2019) In Droit et Société 103(3). p.573-593
Abstract
This article draws on socio-legal theory in relation to growing concerns over fairness, accountability and transparency of societally applied artificial intelligence (AI) and machine learning. The purpose is to contribute to a broad socio-legal orientation by describing legal and normative challenges posed by applied AI. To do so, the article first analyzes a set of problematic cases, e.g., image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI (XAI) within AI-research undertaken by computer scientists. The article finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies; it concludes by arguing for the need for a multidisciplinary approach in AI research, development, and governance.
author
Larsson, Stefan
organization
publishing date
2019
type
Contribution to journal
publication status
published
subject
keywords
applied artificial intelligence, AI and normativity, algorithmic accountability and normative design, AI transparency, AI & society, FAT, Sociology of Law, Explainable AI and algorithmic transparency, Machine learning and law, Technology and Social change
in
Droit et Société
volume
103
issue
3
pages
573-593
publisher
Ed. juridiques associées
external identifiers
  • scopus:85088933967
ISSN
2550-9578
project
Lund University AI Research
Ramverk för Hållbar AI (Framework for Sustainable AI)
DATA/TRUST: Tillitsbaserad personuppgiftshantering i den digitala ekonomin (Trust-based personal data management in the digital economy)
AIR Lund - Artificially Intelligent use of Registers
language
English
LU publication?
yes
additional info
Stefan Larsson is a lawyer (LLM) and Associate Professor in Technology and Social Change at Lund University, Department of Technology and Society. He holds a PhD in Sociology of Law as well as a PhD in Spatial Planning. In addition, Dr. Larsson is a senior researcher and head of the Digital Society program at the Swedish think tank Fores, and a scientific advisor to the Swedish Consumer Agency and the AI Sustainability Center. His research focuses on issues of trust and transparency in digital, data-driven markets and on the socio-legal impact of autonomous and AI-driven technologies. Among his publications:
  • “Algorithmic Governance and the Need for Consumer Empowerment in Data-Driven Markets”, Internet Policy Review, 7 (2), 2018.
  • Conceptions in the Code: How Metaphors Explain Legal Challenges in Digital Times, Oxford: Oxford University Press, 2017.
id
4d168a73-f6cf-4c65-ab0c-26fb9dbd3bf0
date added to LUP
2019-08-23 10:44:10
date last changed
2024-04-16 17:04:42
@article{4d168a73-f6cf-4c65-ab0c-26fb9dbd3bf0,
  abstract     = {{This article draws on socio-legal theory in relation to growing concerns over fairness, accountability and transparency of societally applied artificial intelligence (AI) and machine learning. The purpose is to contribute to a broad socio-legal orientation by describing legal and normative challenges posed by applied AI. To do so, the article first analyzes a set of problematic cases, e.g., image recognition based on gender-biased databases. It then presents seven aspects of transparency that may complement notions of explainable AI (XAI) within AI-research undertaken by computer scientists. The article finally discusses the normative mirroring effect of using human values and societal structures as training data for learning technologies; it concludes by arguing for the need for a multidisciplinary approach in AI research, development, and governance.}},
  author       = {{Larsson, Stefan}},
  issn         = {{2550-9578}},
  keywords     = {{applied artificial intelligence; AI and normativity; algorithmic accountability and normative design; AI transparency; AI & society; FAT; Sociology of Law; Explainable AI and algorithmic transparency; Machine learning and law; Technology and Social change}},
  language     = {{eng}},
  month        = {{12}},
  number       = {{3}},
  pages        = {{573--593}},
  publisher    = {{Ed. juridiques associées}},
  series       = {{Droit et Société}},
  title        = {{The Socio-Legal Relevance of Artificial Intelligence}},
  url          = {{https://lup.lub.lu.se/search/files/73024202/Stefan_Larsson_2019_THE_SOCIO_LEGAL_RELEVANCE_OF_ARTIFICIAL_INTELLIGENCE.pdf}},
  volume       = {{103}},
  year         = {{2019}},
}