
Lund University Publications

High-risk AI transparency? : On qualified transparency mandates for oversight bodies under the EU AI Act

Söderlund, Kasia (2025). In Technology and Regulation, 2025.
Abstract
The legal opacity of AI technologies has long posed challenges in addressing algorithmic harms, as secrecy enables companies to retain competitive advantages while limiting public scrutiny. In response, ideas such as qualified transparency have been proposed to provide AI accountability within the confidentiality constraints. With the introduction of the EU AI Act, the foundations for human-centric and trustworthy AI have been established. The framework sets regulatory requirements for certain AI technologies and grants oversight bodies broad transparency mandates to enforce the new rules. This paper examines these transparency mandates under the AI Act and argues that it effectively implements qualified transparency, which may potentially mitigate the problem of AI opacity. Nevertheless, several challenges remain in achieving the Act’s policy objectives.
author: Söderlund, Kasia
organization:
publishing date: 2025-06
type: Contribution to journal
publication status: published
subject:
keywords: AI Act, AI Transparency, AI, qualified transparency, Artificial Intelligence
in: Technology and Regulation
volume: 2025
ISSN: 2666-139X
DOI: 10.71265/6bedar76
language: English
LU publication?: yes
id: 35c479e5-7439-41a8-9347-b6029e8e18f8
date added to LUP: 2025-06-16 00:24:51
date last changed: 2025-06-17 16:15:57
@article{35c479e5-7439-41a8-9347-b6029e8e18f8,
  abstract     = {{The legal opacity of AI technologies has long posed challenges in addressing algorithmic harms, as secrecy enables companies to retain competitive advantages while limiting public scrutiny. In response, ideas such as qualified transparency have been proposed to provide AI accountability within the confidentiality constraints. With the introduction of the EU AI Act, the foundations for human-centric and trustworthy AI have been established. The framework sets regulatory requirements for certain AI technologies and grants oversight bodies broad transparency mandates to enforce the new rules. This paper examines these transparency mandates under the AI Act and argues that it effectively implements qualified transparency, which may potentially mitigate the problem of AI opacity. Nevertheless, several challenges remain in achieving the Act’s policy objectives.}},
  author       = {{Söderlund, Kasia}},
  issn         = {{2666-139X}},
  keywords     = {{AI Act; AI Transparency; AI; qualified transparency; Artificial Intelligence}},
  language     = {{eng}},
  month        = {{06}},
  series       = {{Technology and Regulation}},
  title        = {{High-risk AI transparency? : On qualified transparency mandates for oversight bodies under the EU AI Act}},
  url          = {{http://dx.doi.org/10.71265/6bedar76}},
  doi          = {{10.71265/6bedar76}},
  volume       = {{2025}},
  year         = {{2025}},
}