Lund University Publications

Stabilizing Translucencies: Governing AI transparency by standardization

Högberg, Charlotte (2024) In Big Data and Society 11(1).
Abstract
Standards are put forward as important means to turn the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency, by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible see-through quality. In addition, artificial intelligence-technologies are depicted as ‘black boxed’, complex and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window to artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. To rely heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence is let to be too shaped by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.
Please use this url to cite or link to this publication:
author: Högberg, Charlotte
organization
publishing date: 2024-02
type: Contribution to journal
publication status: published
subject
keywords: Artificial Intelligence, Algorithms, Transparency, Standards, Governance, Uncertainty, Standardization
in: Big Data and Society
volume: 11
issue: 1
publisher: SAGE Publications
external identifiers:
  • scopus:85185889533
ISSN: 2053-9517
DOI: 10.1177/20539517241234298
project:
  • AI in the Name of the Common Good - Relations of data, AI and humans in health and public sector
  • AIR Lund - Artificially Intelligent use of Registers
language: English
LU publication?: yes
id: bcd931ef-708e-449b-abd0-c96397c57583
date added to LUP: 2024-01-05 22:45:37
date last changed: 2024-03-19 12:15:37
@article{bcd931ef-708e-449b-abd0-c96397c57583,
  abstract     = {{Standards are put forward as important means to turn the ideals of ethical and responsible artificial intelligence into practice. One principle targeted for standardization is transparency. This article attends to the tension between standardization and transparency, by combining a theoretical exploration of these concepts with an empirical analysis of standardizations of artificial intelligence transparency. Conceptually, standards are underpinned by goals of stability and solidification, while transparency is considered a flexible see-through quality. In addition, artificial intelligence-technologies are depicted as ‘black boxed’, complex and in flux. Transparency as a solution for ethical artificial intelligence has, however, been problematized. In the empirical sample of standardizations, transparency is largely presented as a static, measurable, and straightforward information transfer, or as a window to artificial intelligence use. The standards are furthermore described as pioneering and able to shape technological futures, while their similarities suggest that artificial intelligence translucencies are already stabilizing into similar arrangements. To rely heavily upon standardization to govern artificial intelligence transparency still risks allocating rule-making to non-democratic processes, and while intended to bring clarity, the standardizations could also create new distributions of uncertainty and accountability. This article stresses the complexity of governing sociotechnical artificial intelligence principles by standardization. Overall, there is a risk that the governance of artificial intelligence is let to be too shaped by technological solutionism, allowing the standardization of social values (or even human rights) to be carried out in the same manner as that of any other technical product or procedure.}},
  author       = {{Högberg, Charlotte}},
  issn         = {{2053-9517}},
  keywords     = {{Artificial Intelligence; Algorithms; Transparency; Standards; Governance; Uncertainty; Standardization}},
  language     = {{eng}},
  month        = {{02}},
  number       = {{1}},
  publisher    = {{SAGE Publications}},
  series       = {{Big Data and Society}},
  title        = {{Stabilizing Translucencies: Governing AI transparency by standardization}},
  url          = {{http://dx.doi.org/10.1177/20539517241234298}},
  doi          = {{10.1177/20539517241234298}},
  volume       = {{11}},
  year         = {{2024}},
}