
Lund University Publications


Machine Discretion and Democratic Practice

Gill-Pedro, Eduardo (2025) In Retfærd: Nordisk juridisk tidsskrift
Abstract
The question of discretion in law is often framed as a technical question – how can we ensure that the decisions produced in administrative processes are accurate, predictable, non-discriminatory, free from bias, etc. If we conceive of discretion with this framing, then replacing human discretion with machine discretion seems eminently sensible.
In this article, I question that framing of discretion. Drawing on Habermas's discourse theory of democracy, I argue that, in a democracy, the exercise of discretion is not a technical question but a political one. It is a communicative practice that involves the giving and taking of reasons by persons oriented by communicative rationality.
With a specific focus on large language models (LLMs), I will show that machines are not, at least at the current stage of technological development, capable of engaging in such communicative action. They are not capable of action, because action requires an understanding of causal relations, and current AI systems do not display such understanding. Even if we ascribe intentionality to such machines, the intentional states that we can reasonably ascribe to current AI systems do not have the characteristics necessary for communicative action, given that current AIs are 'bullshitters' in the sense advanced by Harry Frankfurt. As such, they are necessarily insincere and cannot advance the kind of validity claims necessary for communicative action.
I will conclude by arguing that, if we replace human decision makers with machines, the possibility of communicative action may disappear, and with it the possibility for meaningful democratic self-rule.
author: Gill-Pedro, Eduardo
type: Contribution to journal
publication status: in press
keywords: Discretion, Artificial Intelligence, LLM, Machine Learning, Communicative action, Bullshitter, Jurisprudence, Artificiell intelligens, Allmän rättslära, Omdöme
in: Retfærd: Nordisk juridisk tidsskrift
publisher: DJØF Forlag
ISSN: 0105-1121
language: English
LU publication?: yes
id: e78fa705-f1fb-4110-bef7-0286151abdf3
date added to LUP: 2025-06-23 15:21:16
date last changed: 2025-06-23 15:37:01
@article{e78fa705-f1fb-4110-bef7-0286151abdf3,
  abstract     = {{The question of discretion in law is often framed as a technical question – how can we ensure that the decisions produced in administrative processes are accurate, predictable, non-discriminatory, free from bias, etc. If we conceive of discretion with this framing, then replacing human discretion with machine discretion seems eminently sensible.<br/>In this article, I question that framing of discretion. Drawing on Habermas's discourse theory of democracy, I argue that, in a democracy, the exercise of discretion is not a technical question but a political one. It is a communicative practice that involves the giving and taking of reasons by persons oriented by communicative rationality.<br/>With a specific focus on large language models (LLMs), I will show that machines are not, at least at the current stage of technological development, capable of engaging in such communicative action. They are not capable of action, because action requires an understanding of causal relations, and current AI systems do not display such understanding. Even if we ascribe intentionality to such machines, the intentional states that we can reasonably ascribe to current AI systems do not have the characteristics necessary for communicative action, given that current AIs are 'bullshitters' in the sense advanced by Harry Frankfurt. As such, they are necessarily insincere and cannot advance the kind of validity claims necessary for communicative action.<br/>I will conclude by arguing that, if we replace human decision makers with machines, the possibility of communicative action may disappear, and with it the possibility for meaningful democratic self-rule.<br/>}},
  author       = {{Gill-Pedro, Eduardo}},
  issn         = {{0105-1121}},
  keywords     = {{Discretion; Artificial Intelligence; LLM; Machine Learning; Communicative action; Bullshitter; Jurisprudence; Artificiell intelligens; Allmän rättslära; Omdöme}},
  language     = {{eng}},
  month        = {{06}},
  publisher    = {{DJØF Forlag}},
  series       = {{Retfærd: Nordisk juridisk tidsskrift}},
  title        = {{Machine Discretion and Democratic Practice}},
  year         = {{2025}},
}