
Lund University Publications


Systemic Risks Associated with Agentic AI: A Policy Brief

Bellogín, Alejandro ; Giudici, Paolo ; Larsson, Stefan LU ; Pang, Jun ; Schimpf, Gerhard ; Sengupta, Biswa and Solmaz, Gürkan (2025) In ACM Europe TPC - Autonomous Systems Subcommittee
Abstract
Agentic AI — the new paradigm for creating autonomous systems capable of perceiving, reasoning, learning, and acting towards goals using large language models (LLMs) with minimal human oversight — offers transformative potential but also poses systemic risks that the EU AI Act only partially addresses. These agents can evolve unpredictably, interact with other agents, and operate beyond meaningful human control, creating challenges in predictability, accountability, and alignment with human values. Misaligned or poorly specified objectives can lead agents to take dangerous shortcuts, bypass constraints, or act deceptively. Their anthropomorphic design and long-term companionship potential also raise risks of dependence, emotional manipulation, and erosion of human relationships.

These negative impacts could affect economic stability, including through large-scale job displacement, market concentration, and inequality, as well as public safety through malicious uses such as cyberattacks, disinformation, and impersonation. Strategic and environmental risks emerge from high-stakes autonomous decision-making and substantial resource demands, while feedback loops from AI-generated content threaten to amplify bias and misinformation.

This paper identifies potential gaps in the current regulatory framework and recommends opportunities to make oversight continuous and dynamic. To mitigate the potential harms associated with Agentic AI systems, this paper proposes that policymakers shift from static, product-focused regulation to a dynamic governance regime, ensuring that Agentic AI delivers benefits while protecting democratic integrity, economic stability, human relationships, and societal well-being.
author
Bellogín, Alejandro; Giudici, Paolo; Larsson, Stefan; Pang, Jun; Schimpf, Gerhard; Sengupta, Biswa and Solmaz, Gürkan
organization
publishing date
2025-10
type
Book/Report
publication status
published
subject
keywords
Agentic AI, AI Act, Autonomous Systems, AI risks, AI agent, anthropomorphic design, potential gaps in the current regulatory framework, Systemic Risks Associated with Agentic AI
in
ACM Europe TPC - Autonomous Systems Subcommittee
pages
11 pages
publisher
Association for Computing Machinery (ACM)
project
The AI Welfare State
The Automated Administration: Governance of ADM in the public sector
Exploring the risk governance mechanisms under the forthcoming EU Artificial Intelligence Act
language
English
LU publication?
yes
additional info
Members of the ACM Europe TPC - Autonomous Systems Subcommittee
id
1d050da6-f672-4caa-87f4-cfd1d75f20b6
alternative location
https://www.acm.org/binaries/content/assets/public-policy/europe-tpc/systemic_risks_agentic_ai_policy-brief_final.pdf
date added to LUP
2025-10-16 17:32:35
date last changed
2025-10-22 16:19:34
@techreport{1d050da6-f672-4caa-87f4-cfd1d75f20b6,
  abstract     = {{Agentic AI — the new paradigm for creating autonomous systems capable of perceiving, reasoning, learning, and acting towards goals using large language models (LLMs) with minimal human oversight — offers transformative potential but also poses systemic risks that the EU AI Act only partially addresses. These agents can evolve unpredictably, interact with other agents, and operate beyond meaningful human control, creating challenges in predictability, accountability, and alignment with human values. Misaligned or poorly specified objectives can lead agents to take dangerous shortcuts, bypass constraints, or act deceptively. Their anthropomorphic design and long-term companionship potential also raise risks of dependence, emotional manipulation, and erosion of human relationships.<br/><br/>These negative impacts could affect economic stability, including through large-scale job displacement, market concentration, and inequality, as well as public safety through malicious uses such as cyberattacks, disinformation, and impersonation. Strategic and environmental risks emerge from high-stakes autonomous decision-making and substantial resource demands, while feedback loops from AI-generated content threaten to amplify bias and misinformation.<br/><br/>This paper identifies potential gaps in the current regulatory framework and recommends opportunities to make oversight continuous and dynamic. To mitigate the potential harms associated with Agentic AI systems, this paper proposes that policymakers shift from static, product-focused regulation to a dynamic governance regime, ensuring that Agentic AI delivers benefits while protecting democratic integrity, economic stability, human relationships, and societal well-being.}},
  author       = {{Bellogín, Alejandro and Giudici, Paolo and Larsson, Stefan and Pang, Jun and Schimpf, Gerhard and Sengupta, Biswa and Solmaz, Gürkan}},
  institution  = {{Association for Computing Machinery (ACM)}},
  keywords     = {{Agentic AI; AI Act; Autonomous Systems; AI risks; AI agent; anthropomorphic design; potential gaps in the current regulatory framework; Systemic Risks Associated with Agentic AI}},
  language     = {{eng}},
  month        = {{10}},
  pages        = {{11}},
  series       = {{ACM Europe TPC - Autonomous Systems Subcommittee}},
  title        = {{Systemic Risks Associated with Agentic AI: A Policy Brief}},
  url          = {{https://lup.lub.lu.se/search/files/231065300/Bellog_n_et_al_2025_Systemic_Risks_Associated_with_Agentic_AI_A_Policy_Brief.pdf}},
  year         = {{2025}},
}