LUP Student Papers

LUND UNIVERSITY LIBRARIES

Generative AI & the Right to Erasure: Can GDPR’s ‘Right to Be Forgotten’ Delete AI Outputs?

Savela, Isabella Shuxin LU (2025) JAEM03 20251
Department of Law
Faculty of Law
Abstract
The rapid development of generative AI systems such as large language models (LLMs) and image generators has revealed critical tensions between technological innovation and existing data protection frameworks. This thesis examines the practical challenges surrounding the right to erasure established in Article 17 of the GDPR, commonly known as the "right to be forgotten", in the context of generative AI. In contrast to traditional databases, generative AI incorporates personal information into latent representations: distributed, abstract neural network parameters that are difficult to extract or remove.
By analyzing cross-sectoral legal issues, technical system architectures, and regulatory gaps, this thesis identifies a fundamental mismatch between the dynamic, probabilistic structure of AI models and the static deletion requirements of the GDPR. Landmark cases such as Google Spain v. AEPD, Google v. CNIL, and GC & Others v. CNIL demonstrate how current legal interpretations of data privacy do not sufficiently account for the technological realities of AI. Decentralized AI ecosystems, such as open-source models, continue to test the GDPR's accountability framework for controllers and processors, and hidden personal data often persists despite attempts to delete it. Although technical solutions such as federated learning and differential privacy offer some relief, they come with significant trade-offs in cost, model performance, and residual risk. Compliance is further complicated by AI's ability to infer identity from publicly available, seemingly anonymous data.
This thesis concludes that current GDPR procedures are insufficient to ensure that personal data is actually deleted from AI systems. It reviews reform proposals put forward by experts in the field, such as extending the obligations of joint controllers, developing practices for synthetic data, and using blockchain technology to verify the provenance of data. Overall, the thesis underscores the need for global cooperation and a flexible legal framework to strike a balance between data protection and the disruptive potential of AI. In the digital age, maintaining this balance is essential for both personal freedoms and technological progress.
author: Savela, Isabella Shuxin LU
supervisor:
organization:
course: JAEM03 20251
year: 2025
type: H2 - Master's Degree (Two Years)
subject:
keywords: AI, GDPR
language: English
id: 9204199
date added to LUP: 2025-06-23 11:03:11
date last changed: 2025-06-23 11:03:11
@misc{9204199,
  abstract     = {{The rapid development of generative AI systems such as large language models (LLMs) and image generators has revealed critical tensions between technological innovation and existing data protection frameworks. This thesis examines the practical challenges surrounding the right to erasure established in Article 17 of the GDPR, commonly known as the "right to be forgotten", in the context of generative AI. In contrast to traditional databases, generative AI incorporates personal information into latent representations: distributed, abstract neural network parameters that are difficult to extract or remove.
By analyzing cross-sectoral legal issues, technical system architectures, and regulatory gaps, this thesis identifies a fundamental mismatch between the dynamic, probabilistic structure of AI models and the static deletion requirements of the GDPR. Landmark cases such as Google Spain v. AEPD, Google v. CNIL, and GC & Others v. CNIL demonstrate how current legal interpretations of data privacy do not sufficiently account for the technological realities of AI. Decentralized AI ecosystems, such as open-source models, continue to test the GDPR's accountability framework for controllers and processors, and hidden personal data often persists despite attempts to delete it. Although technical solutions such as federated learning and differential privacy offer some relief, they come with significant trade-offs in cost, model performance, and residual risk. Compliance is further complicated by AI's ability to infer identity from publicly available, seemingly anonymous data.
This thesis concludes that current GDPR procedures are insufficient to ensure that personal data is actually deleted from AI systems. It reviews reform proposals put forward by experts in the field, such as extending the obligations of joint controllers, developing practices for synthetic data, and using blockchain technology to verify the provenance of data. Overall, the thesis underscores the need for global cooperation and a flexible legal framework to strike a balance between data protection and the disruptive potential of AI. In the digital age, maintaining this balance is essential for both personal freedoms and technological progress.}},
  author       = {{Savela, Isabella Shuxin}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Generative AI & the Right to Erasure: Can GDPR’s ‘Right to Be Forgotten’ Delete AI Outputs?}},
  year         = {{2025}},
}