
Lund University Publications


Ignorance and the regulation of artificial intelligence

White, James and Lidskog, Rolf (2022) In Journal of Risk Research 25(4). p. 488-500
Abstract
Much has been written about the risks posed by artificial intelligence (AI). This article is interested not only in what is known about these risks, but what remains unknown and how that unknowing is and should be approached. By reviewing and expanding on the scientific literature, it explores how social knowledge contributes to the understanding of AI and its regulatory challenges. The analysis is conducted in three steps. First, the article investigates risks associated with AI and shows how social scientists have challenged technically-oriented approaches that treat the social instrumentally. It then identifies the invisible and visible characteristics of AI, and argues that not only is it hard for outsiders to comprehend risks attached to the technology, but also for developers and researchers. Finally, it asserts the need to better recognise ignorance of AI, and explores what this means for how their risks are handled. The article concludes by stressing that proper regulation demands not only independent social knowledge about the pervasiveness, economic embeddedness and fragmented regulation of AI, but a social non-knowledge that is attuned to its complexity, and inhuman and incomprehensible behaviour. In properly allowing for ignorance of its social implications, the regulation of AI can proceed in a more modest, situated, plural and ultimately robust manner.
author
White, James and Lidskog, Rolf
publishing date
2022
type
Contribution to journal
publication status
published
subject
in
Journal of Risk Research
volume
25
issue
4
pages
488 - 500
publisher
Routledge
external identifiers
  • scopus:85112599698
ISSN
1366-9877
DOI
10.1080/13669877.2021.1957985
language
English
LU publication?
no
id
129b71d5-0b78-4757-920d-1d4ae43b6717
date added to LUP
2023-03-03 11:28:36
date last changed
2023-03-06 09:00:04
@article{129b71d5-0b78-4757-920d-1d4ae43b6717,
  abstract     = {{Much has been written about the risks posed by artificial intelligence (AI). This article is interested not only in what is known about these risks, but what remains unknown and how that unknowing is and should be approached. By reviewing and expanding on the scientific literature, it explores how social knowledge contributes to the understanding of AI and its regulatory challenges. The analysis is conducted in three steps. First, the article investigates risks associated with AI and shows how social scientists have challenged technically-oriented approaches that treat the social instrumentally. It then identifies the invisible and visible characteristics of AI, and argues that not only is it hard for outsiders to comprehend risks attached to the technology, but also for developers and researchers. Finally, it asserts the need to better recognise ignorance of AI, and explores what this means for how their risks are handled. The article concludes by stressing that proper regulation demands not only independent social knowledge about the pervasiveness, economic embeddedness and fragmented regulation of AI, but a social non-knowledge that is attuned to its complexity, and inhuman and incomprehensible behaviour. In properly allowing for ignorance of its social implications, the regulation of AI can proceed in a more modest, situated, plural and ultimately robust manner.}},
  author       = {{White, James and Lidskog, Rolf}},
  issn         = {{1366-9877}},
  language     = {{eng}},
  number       = {{4}},
  pages        = {{488--500}},
  publisher    = {{Routledge}},
  series       = {{Journal of Risk Research}},
  title        = {{Ignorance and the regulation of artificial intelligence}},
  url          = {{http://dx.doi.org/10.1080/13669877.2021.1957985}},
  doi          = {{10.1080/13669877.2021.1957985}},
  volume       = {{25}},
  year         = {{2022}},
}