
Lund University Publications


Flip or Flop? : The Pacing Problems of European Systemic Risk GPAI Regulation

Larsson, Stefan and Hildén, Jockum (2025) XXXIX NORDIC CONFERENCE ON LAW AND IT
Abstract
Drawing from socio-legal studies on technological governance and notions of the pacing problem between law and new technologies, this chapter addresses how generative AI is regulated in the EU AI Act. Specifically, it focuses on the implications of the two-tiered approach to General Purpose Artificial Intelligence (GPAI) models, where the top tier adds obligations for producers of so-called “systemic risk” GPAI models. According to the regulation, such models can have “negative effects on public health, safety, public security, fundamental rights, or the society as a whole”. While the targeted concerns are grand, the designation of “systemic risk” is primarily linked to how much cumulative computation, measured in floating point operations (FLOPs), has been used in the training phase, equating high resource usage with high impact. The assumptions underlying the leap from FLOP levels to societal risk are discussed here. Alternatively, the Commission may designate systemic risk GPAI based on certain criteria specified in an Annex to the AI Act, which are equally focused on measurable units such as the model’s parameters, tokens used, and number of users, but also include more vaguely defined criteria related to the capabilities of the model. The Commission may also change the thresholds used to determine high-impact capabilities. In parallel, a voluntary Code of Practice for GPAI has been developed, suggesting that the successful implementation of the AI Act will depend on co-regulatory efforts. The chapter analyses why this particular regulatory design — seemingly flexible, unpredictable, partly voluntary and somewhat random — was chosen, and speculates on its possible implications.
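The compute-based presumption the abstract describes can be illustrated with a short sketch. The 10^25 FLOP threshold is the presumption figure in Article 51 of the AI Act; the `6 × parameters × tokens` compute estimate is a common heuristic from the machine-learning literature, not part of the regulation, and the model figures below are hypothetical.

```python
# Illustrative sketch of the AI Act's compute-based presumption of
# "systemic risk" GPAI. The threshold (1e25 cumulative FLOPs) is from
# Art. 51; the 6*N*D estimate is an ML-community heuristic, used here
# only because training compute is rarely reported directly.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25  # AI Act, Art. 51(2) presumption


def estimated_training_flops(n_parameters: float, n_tokens: float) -> float:
    """Rough cumulative training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_parameters * n_tokens


def presumed_systemic_risk(n_parameters: float, n_tokens: float) -> bool:
    """True if the compute estimate crosses the presumption threshold."""
    return estimated_training_flops(n_parameters, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS


# A hypothetical 70B-parameter model trained on 15T tokens:
# 6 * 7e10 * 1.5e13 = 6.3e24 FLOPs, i.e. below the 1e25 presumption line.
print(presumed_systemic_risk(7e10, 1.5e13))  # False
```

Note how the check says nothing about capabilities or downstream harms — which is precisely the leap from FLOP levels to societal risk that the chapter interrogates.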
author: Larsson, Stefan and Hildén, Jockum
publishing date: 2025
type: Chapter in Book/Report/Conference proceeding
publication status: in press
keywords: GPAI, AI Act, General Purpose AI, Agentic AI, FLOPs, AI Governance, Pacing Problem, Systemic Risk, Systemic Risk GPAI, Code of Practice
host publication: Navigating the Frontier – Nordic Yearbook on Law and Informatics 2024-2025
editor: Carey, Samuel
pages: 16 pages
publisher: Jure
conference name: XXXIX NORDIC CONFERENCE ON LAW AND IT
conference location: Stockholm, Sweden
conference dates: 2024-11-04 - 2024-11-06
project: The Automated Administration: Governance of ADM in the public sector
language: English
LU publication?: yes
id: 98428617-b972-4e15-90e7-cc3de05f83dc
date added to LUP: 2025-08-14 13:45:58
date last changed: 2025-09-29 09:49:35
@inbook{98428617-b972-4e15-90e7-cc3de05f83dc,
  abstract     = {{Drawing from socio-legal studies on technological governance and notions of the pacing problem between law and new technologies, this chapter addresses how generative AI is regulated in the EU AI Act. Specifically, it focuses on the implications of the two-tiered approach to General Purpose Artificial Intelligence (GPAI) models, where the top tier adds obligations for producers of so-called “systemic risk” GPAI models. According to the regulation, such models can have “negative effects on public health, safety, public security, fundamental rights, or the society as a whole”. While the targeted concerns are grand, the designation of “systemic risk” is primarily linked to how much cumulative computation, measured in floating point operations (FLOPs), has been used in the training phase, equating high resource usage with high impact. The assumptions underlying the leap from FLOP levels to societal risk are discussed here. Alternatively, the Commission may designate systemic risk GPAI based on certain criteria specified in an Annex to the AI Act, which are equally focused on measurable units such as the model’s parameters, tokens used, and number of users, but also include more vaguely defined criteria related to the capabilities of the model. The Commission may also change the thresholds used to determine high-impact capabilities. In parallel, a voluntary Code of Practice for GPAI has been developed, suggesting that the successful implementation of the AI Act will depend on co-regulatory efforts. The chapter analyses why this particular regulatory design — seemingly flexible, unpredictable, partly voluntary and somewhat random — was chosen, and speculates on its possible implications.}},
  author       = {{Larsson, Stefan and Hildén, Jockum}},
  booktitle    = {{Navigating the Frontier – Nordic Yearbook on Law and Informatics 2024-2025}},
  editor       = {{Carey, Samuel}},
  keywords     = {{GPAI; AI Act; General Purpose AI; Agentic AI; FLOPs; AI Governance; Pacing Problem; Systemic Risk; Systemic Risk GPAI; Code of Practice}},
  language     = {{eng}},
  month        = {{11}},
  publisher    = {{Jure}},
  title        = {{Flip or Flop? : The Pacing Problems of European Systemic Risk GPAI Regulation}},
  url          = {{https://lup.lub.lu.se/search/files/228440819/Larsson_Hilden_2025_Flip_or_Flop_The_Pacing_Problems_of_European_Systemic_Risk_GPAI_Regulation.pdf}},
  year         = {{2025}},
}