Can Law Keep Up? The AI Act and the Challenges of Evolving Generative AI and Large Language Models
(2025) JAEM03 20251
Department of Law
Faculty of Law
- Abstract
- As generative artificial intelligence (“AI”) rapidly transforms sectors, regulators are racing to keep pace with technologies that evolve faster than legal frameworks can adapt. These systems, powered by large language models or foundation models, are not only capable of generating human-like content but also demonstrate behaviors that shift depending on their data, deployment, and interaction context. Their unpredictability, scale, and adaptability challenge traditional regulatory assumptions. Against this backdrop, the European Union’s AI Act (the “AI Act”) emerges as the most comprehensive legislative effort to date to confront these emerging realities.
This thesis examines the complex regulatory challenges created by the adaptive nature of generative AI and critically assesses the AI Act as a response to them. The thesis shows that generative AI's distinctive characteristics, such as continuous self-learning and evolution, emergent capabilities, unpredictable outputs, and cross-sectoral impacts, create fundamental difficulties for traditional regulatory frameworks.
The AI Act represents a pioneering attempt to regulate these adaptive systems through a tiered, risk-based approach, including specific provisions for general-purpose AI models, the category intended to capture and govern generative AI. While the Act incorporates a range of innovative regulatory mechanisms, such as risk management systems, transparency requirements, post-market monitoring, regulatory sandboxes, and Codes of Practice, significant limitations remain. These include, among other things, the Act's relatively static structure, controversies over its standards-based framework, definitional ambiguities, tensions with open-source development, and implementation challenges.
The thesis concludes that while the AI Act represents an important step forward, considerable work remains before its enforcement and implementation can effectively address the unique challenges posed by generative AI. The findings point to future research directions, including implementation studies, adaptive governance models, and technical governance tools.
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9196999
- author
- Ngo, Phuong Anh LU
- supervisor
- organization
- course
- JAEM03 20251
- year
- 2025
- type
- H2 - Master's Degree (Two Years)
- subject
- language
- English
- id
- 9196999
- date added to LUP
- 2025-06-18 11:49:39
- date last changed
- 2025-06-18 11:49:39
@misc{9196999,
  abstract = {{As generative artificial intelligence (“AI”) rapidly transforms sectors, regulators are racing to keep pace with technologies that evolve faster than legal frameworks can adapt. These systems, powered by large language models or foundation models, are not only capable of generating human-like content but also demonstrate behaviors that shift depending on their data, deployment, and interaction context. Their unpredictability, scale, and adaptability challenge traditional regulatory assumptions. Against this backdrop, the European Union’s AI Act (the “AI Act”) emerges as the most comprehensive legislative effort to date to confront these emerging realities. This thesis examines the complex regulatory challenges created by the adaptive nature of generative AI and critically assesses the AI Act as a response to them. The thesis shows that generative AI's distinctive characteristics, such as continuous self-learning and evolution, emergent capabilities, unpredictable outputs, and cross-sectoral impacts, create fundamental difficulties for traditional regulatory frameworks. The AI Act represents a pioneering attempt to regulate these adaptive systems through a tiered, risk-based approach, including specific provisions for general-purpose AI models, the category intended to capture and govern generative AI. While the Act incorporates a range of innovative regulatory mechanisms, such as risk management systems, transparency requirements, post-market monitoring, regulatory sandboxes, and Codes of Practice, significant limitations remain. These include, among other things, the Act's relatively static structure, controversies over its standards-based framework, definitional ambiguities, tensions with open-source development, and implementation challenges. The thesis concludes that while the AI Act represents an important step forward, considerable work remains before its enforcement and implementation can effectively address the unique challenges posed by generative AI. The findings point to future research directions, including implementation studies, adaptive governance models, and technical governance tools.}},
  author = {{Ngo, Phuong Anh}},
  language = {{eng}},
  note = {{Student Paper}},
  title = {{Can Law Keep Up? The AI Act and the Challenges of Evolving Generative AI and Large Language Models}},
  year = {{2025}},
}