
Lund University Publications


Three Levels of AI Transparency

Haresamudram, Kashyap; Larsson, Stefan and Heintz, Fredrik (2023) In Computer 56(2). p. 93-100
Abstract
Transparency is generally cited as a key consideration towards building Trustworthy AI. However, the concept of transparency is fragmented in AI research, often limited to transparency of the algorithm alone. While considerable attempts have been made to expand the scope beyond the algorithm, there has yet to be a holistic approach that includes not only the AI system, but also the user, and society at large. We propose that AI transparency operates on three levels, (1) Algorithmic Transparency, (2) Interaction Transparency, and (3) Social Transparency, all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and identify research gaps resulting from the conceptual fragmentation of AI transparency highlighted within the context of the three levels.
author
Haresamudram, Kashyap; Larsson, Stefan and Heintz, Fredrik
organization
publishing date
2023
type
Contribution to journal
publication status
published
subject
keywords
Artificial Intelligence, Transparency, Algorithm, Interaction, Society, Governance
in
Computer
volume
56
issue
2
pages
93 - 100
publisher
IEEE Computer Society
external identifiers
  • scopus:85148944520
ISSN
1558-0814
DOI
10.1109/MC.2022.3213181
project
AI Transparency and Consumer Trust
Automated decision-making – Nordic perspectives
language
English
LU publication?
yes
id
9405ca17-0949-443e-8cad-977f38c2df4f
date added to LUP
2022-10-07 18:43:50
date last changed
2025-04-04 15:18:02
@article{9405ca17-0949-443e-8cad-977f38c2df4f,
  abstract     = {{Transparency is generally cited as a key consideration towards building Trustworthy AI. However, the concept of transparency is fragmented in AI research, often limited to transparency of the algorithm alone. While considerable attempts have been made to expand the scope beyond the algorithm, there has yet to be a holistic approach that includes not only the AI system, but also the user, and society at large. We propose that AI transparency operates on three levels, (1) Algorithmic Transparency, (2) Interaction Transparency, and (3) Social Transparency, all of which need to be considered to build trust in AI. We expand upon these levels using current research directions, and identify research gaps resulting from the conceptual fragmentation of AI transparency highlighted within the context of the three levels.}},
  author       = {{Haresamudram, Kashyap and Larsson, Stefan and Heintz, Fredrik}},
  issn         = {{1558-0814}},
  keywords     = {{Artificial Intelligence; Transparency; Algorithm; Interaction; Society; Governance}},
  language     = {{eng}},
  number       = {{2}},
  pages        = {{93--100}},
  publisher    = {{IEEE Computer Society}},
  journal      = {{Computer}},
  title        = {{Three Levels of AI Transparency}},
  url          = {{https://lup.lub.lu.se/search/files/126635664/Three_Levels_of_AI_Transparency_Accepted_Version.pdf}},
  doi          = {{10.1109/MC.2022.3213181}},
  volume       = {{56}},
  year         = {{2023}},
}