Fundamental Rights Impact Assessments in the EU’s AI Act : A teleological and contextual analysis of the obligations of deployers
(2025) In European Journal of Law and Technology
- Abstract
- This article examines the obligation for public-sector deployers to conduct Fundamental Rights Impact Assessments (FRIAs) under Article 27 of the EU Artificial Intelligence Act (AI Act). The article argues that the FRIA obligation functions as a minimum harmonisation standard, granting Member State authorities discretion to go beyond the AI Act's baseline and conduct more rigorous fundamental rights scrutiny prior to deployment. In contrast, the AI Act's obligations on providers of high-risk AI systems are fully harmonised, primarily structured around internal market objectives and designed to facilitate the free circulation of AI technologies across the EU. By drawing a distinction between these two regulatory logics, the article demonstrates that decisions by public authorities not to deploy AI, or to conduct broader or deeper FRIA impact assessments than required by Article 27, fall outside the scope of EU law. Drawing a parallel with free movement of goods case law, the article argues that such decisions are akin to ‘selling arrangements’. Consequently, they are not subject to challenge under internal market or fundamental rights provisions of EU law by affected providers. The article concludes that the FRIA mechanism offers Member States a critical lever to secure fundamental rights and foster human-centric and trustworthy AI.
Please use this url to cite or link to this publication:
https://lup.lub.lu.se/record/e7e6be85-c198-4c8c-ba0a-6c6ae482a306
- author
- Gill-Pedro, Eduardo LU
- publishing date
- 2025-11-24
- type
- Contribution to journal
- publication status
- in press
- keywords
- Artificial intelligence, EU Law, AI Act, Fundamental Rights, Impact assessment, FRIA, Artificiell intelligens, EU-rätt
- in
- European Journal of Law and Technology
- language
- English
- LU publication?
- yes
- id
- e7e6be85-c198-4c8c-ba0a-6c6ae482a306
- date added to LUP
- 2025-12-16 21:45:09
- date last changed
- 2025-12-17 11:28:09
@article{e7e6be85-c198-4c8c-ba0a-6c6ae482a306,
abstract = {{This article examines the obligation for public-sector deployers to conduct Fundamental Rights Impact Assessments (FRIAs) under Article 27 of the EU Artificial Intelligence Act (AI Act). The article argues that the FRIA obligation functions as a minimum harmonisation standard, granting Member State authorities discretion to go beyond the AI Act's baseline and conduct more rigorous fundamental rights scrutiny prior to deployment. In contrast, the AI Act's obligations on providers of high-risk AI systems are fully harmonised, primarily structured around internal market objectives and designed to facilitate the free circulation of AI technologies across the EU. By drawing a distinction between these two regulatory logics, the article demonstrates that decisions by public authorities not to deploy AI, or to conduct broader or deeper FRIA impact assessments than required by Article 27, fall outside the scope of EU law. Drawing a parallel with free movement of goods case law, the article argues that such decisions are akin to ‘selling arrangements’. Consequently, they are not subject to challenge under internal market or fundamental rights provisions of EU law by affected providers. The article concludes that the FRIA mechanism offers Member States a critical lever to secure fundamental rights and foster human-centric and trustworthy AI.}},
author = {{Gill-Pedro, Eduardo}},
keywords = {{Artificial intelligence; EU Law; AI Act; Fundamental Rights; Impact assessment; FRIA; Artificiell intelligens; EU-rätt}},
language = {{eng}},
month = {{11}},
journal = {{European Journal of Law and Technology}},
title = {{Fundamental Rights Impact Assessments in the EU’s AI Act : A teleological and contextual analysis of the obligations of deployers}},
year = {{2025}},
}