Lund University Publications

Polycentrism, not polemics? Squaring the circle of non-discrimination law, accuracy metrics and public/private interests when addressing AI bias

Teo, Sue Anne (2025) In Frontiers in Political Science 7.
Abstract

Lon Fuller famously argued that polycentric issues are not readily amenable to binary and adversarial forms of adjudication. When it comes to resource allocations involving various interested parties, binary polemical forms of decision making may fail to capture the polycentric nature of the dispute, namely the fact that an advantage conferred to one party invariably involves (detrimentally) affecting the interests of others in an interconnected web. This article applies Fuller’s idea to artificial intelligence systems and examines how the human right to equality and non-discrimination takes on a polycentric form in AI-driven decision making and recommendations. This is where bias needs to be managed, including through the specification of impacted groups, error types, and acceptable error rates disaggregated by group. For example, while the typical human rights response to non-discrimination claims involves the adversarial assertion of the rights of protected groups, this response is inadequate and does not go far enough in addressing polycentric interests, where groups are differentially impacted by debiasing measures when designing for ‘fair AI’. Instead, the article frontloads the contention that a triangulation of polycentric interests has to be acknowledged, namely respecting the demands of the law, system accuracy, and the commercial or public interest pursued by the AI system. In connecting theory with practice, the article draws illustrative examples from the use of AI within migration and border management and within offensive and hate speech detection on online platforms to examine how these polycentric interests are considered when addressing AI bias. It demonstrates that the problem of bias in AI can be managed, though not eliminated, through social policy choices and ex-ante tools such as human rights impact assessments that assess the contesting interests impacted by algorithmic design and which enable the accounting of the statistical impacts of polycentrism. However, this has to be complemented with transparency and other backstop measures of accountability to close techno-legal gaps.
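The abstract's reference to error types and acceptable error rates disaggregated by group can be made concrete with a minimal sketch, not drawn from the article itself: the groups, labels, and predictions below are hypothetical, and the snippet simply shows how false positive and false negative rates for the same classifier can diverge across groups, which is the kind of statistical impact the polycentric framing asks designers to account for.

# Minimal sketch with hypothetical data: disaggregating a classifier's
# error rates by group, e.g. for a hate-speech detection model.
from collections import defaultdict

# Each record: (group, true_label, predicted_label). Illustrative values only.
records = [
    ("group_a", 1, 1), ("group_a", 0, 1), ("group_a", 0, 0), ("group_a", 1, 0),
    ("group_b", 1, 1), ("group_b", 0, 0), ("group_b", 0, 1), ("group_b", 0, 1),
]

counts = defaultdict(lambda: {"fp": 0, "fn": 0, "neg": 0, "pos": 0})
for group, truth, pred in records:
    c = counts[group]
    if truth == 1:
        c["pos"] += 1
        if pred == 0:
            c["fn"] += 1  # false negative: harmful content missed
    else:
        c["neg"] += 1
        if pred == 1:
            c["fp"] += 1  # false positive: legitimate speech removed

for group, c in counts.items():
    fpr = c["fp"] / c["neg"] if c["neg"] else float("nan")
    fnr = c["fn"] / c["pos"] if c["pos"] else float("nan")
    print(f"{group}: false positive rate = {fpr:.2f}, false negative rate = {fnr:.2f}")

Running the sketch shows the two error types distributed unevenly across the two hypothetical groups; deciding which rate to reduce, and for whom, is precisely the allocation problem the article frames as polycentric.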

Please use this url to cite or link to this publication:
author: Teo, Sue Anne
organization
publishing date: 2025
type: Contribution to journal
publication status: published
subject
keywords: AI bias, equality, human rights, non-discrimination law, polycentric
in: Frontiers in Political Science
volume: 7
article number: 1645160
publisher: Frontiers Media S. A.
external identifiers: scopus:105024239444
ISSN: 2673-3145
DOI: 10.3389/fpos.2025.1645160
language: English
LU publication?: yes
id: 9dde2843-2aef-4cd4-b958-c4b7ae36a3bd
date added to LUP: 2025-10-01 17:01:30
date last changed: 2026-01-07 08:33:00
@article{9dde2843-2aef-4cd4-b958-c4b7ae36a3bd,
  abstract     = {{<p>Lon Fuller famously argued that polycentric issues are not readily amenable to binary and adversarial forms of adjudication. When it comes to resource allocations involving various interested parties, binary polemical forms of decision making may fail to capture the polycentric nature of the dispute, namely the fact that an advantage conferred to one party invariably involves (detrimentally) affecting the interests of others in an interconnected web. This article applies Fuller’s idea in relation to artificial intelligence systems and examines how the human right to equality and non-discrimination takes on a polycentric form in AI-driven decision making and recommendations. This is where bias needs to be managed, including through the specification of impacted groups, error types, and acceptable error rates disaggregated by groups. For example, while the typical human rights response to non-discrimination claims involves the adversarial assertion of the rights of protected groups, this response is inadequate and does not go far enough in addressing polycentric interests- where groups are differentially impacted through debiasing measures when designing for ‘fair AI’. Instead, the article frontloads the contention that a triangulation of polycentric interests, namely: respecting demands of the law; system accuracy and the commercial or public interest pursued by the AI system, has to be acknowledged. In connecting theory with practice, the article draws illustrative examples from the use of AI within migration and border management and offensive and hate speech detection within online platforms to examine how these polycentric interests are considered when addressing AI bias. It demonstrates that the problem of bias in AI can be managed, though not eliminated, through social policy choices and ex-ante tools such as human rights impact assessments that assess the contesting interests impacted by algorithmic design and which enable the accounting of statistical impacts of polycentrism. However, this has to be complemented with transparency and other backstop measures of accountability to close techno-legal gaps.</p>}},
  author       = {{Teo, Sue Anne}},
  issn         = {{2673-3145}},
  keywords     = {{AI bias; equality; human rights; non-discrimination law; polycentric}},
  language     = {{eng}},
  publisher    = {{Frontiers Media S. A.}},
  series       = {{Frontiers in Political Science}},
  title        = {{Polycentrism, not polemics? Squaring the circle of non-discrimination law, accuracy metrics and public/private interests when addressing AI bias}},
  url          = {{http://dx.doi.org/10.3389/fpos.2025.1645160}},
  doi          = {{10.3389/fpos.2025.1645160}},
  volume       = {{7}},
  year         = {{2025}},
}