
LUP Student Papers

LUND UNIVERSITY LIBRARIES

Why Better Information Does Not Ensure Better Decisions: Evidence from AI-Supported Clinical Diagnosis

Rosenbacke, Victor LU (2026) NEKH02 20252
Department of Economics
Abstract
Decision-support systems based on artificial intelligence (AI) are increasingly implemented in everyday societal contexts, including high-stakes domains such as finance, medicine, and law. While these systems play an expanding role in shaping important decisions under uncertainty, a fundamental question concerns how algorithmic advice affects the behavior of human decision-makers. Despite this, most AI systems are still primarily evaluated based on accuracy alone, rather than on how their outputs influence users’ understanding, reasoning, and responses in real-world decision-making settings. To understand these implications more precisely, it is necessary to examine how people actually revise their decisions when AI advice contradicts or confirms their initial judgment.
Human–AI diagnostic systems are typically evaluated by improvements in mean accuracy, yet little is known about how clinicians revise decisions when confronted with correct or incorrect model outputs. Using 4905 dermatologist decisions collected across unaided, AI-assisted, and explainable-AI (XAI) conditions, we identify a structural mechanism that governs human–AI collaboration. Decision revision was driven not by correctness but by the presence of conflict between clinician and model; this can be interpreted as conflict increasing information salience and triggering belief updating, whereas agreement suppresses posterior revision.
When the AI agreed with clinicians, whether correctly or incorrectly, revision was rare, producing strong cognitive confirmation inertia. When the AI disagreed, clinicians frequently changed their decisions, but with markedly different consequences: conflict with correct AI outputs produced substantial accuracy gains, whereas conflict with incorrect AI outputs produced large accuracy losses. Diagnostic expertise modulated these patterns only partially: high-performing clinicians benefited more from true conflict but remained vulnerable to misleading AI in false conflict, and none of the quartiles reliably detected errors in false confirmation. As a result, AI support increased performance variance, amplifying both corrections and high-consequence errors. These patterns reveal a mechanism that extends beyond clinical diagnosis, shaping how AI guidance alters human decision-making under uncertainty.
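To make the four interaction cells above concrete, the following minimal sketch (Python, with hypothetical column names and toy rows, not the study's code or data) classifies each AI-assisted decision as true/false confirmation or true/false conflict and tabulates revision rates and accuracy changes per cell.

import pandas as pd

# Hypothetical layout: one row per AI-assisted decision, with the clinician's
# initial diagnosis, the final diagnosis, the AI suggestion, and the ground truth.
# Column names and example rows are illustrative only.
df = pd.DataFrame({
    "initial": ["melanoma", "nevus",    "melanoma", "nevus"],
    "final":   ["melanoma", "melanoma", "nevus",    "nevus"],
    "ai":      ["melanoma", "melanoma", "nevus",    "nevus"],
    "truth":   ["melanoma", "melanoma", "melanoma", "melanoma"],
})

agree = df["ai"] == df["initial"]        # confirmation vs. conflict
ai_correct = df["ai"] == df["truth"]     # was the AI advice right?

# Four interaction cells described in the abstract.
df["cell"] = "false confirmation"                      # agree, AI wrong
df.loc[agree & ai_correct, "cell"] = "true confirmation"
df.loc[~agree & ai_correct, "cell"] = "true conflict"
df.loc[~agree & ~ai_correct, "cell"] = "false conflict"

df["revised"] = df["final"] != df["initial"]
df["gain"] = (df["final"] == df["truth"]).astype(int) - (df["initial"] == df["truth"]).astype(int)

# Revision frequency and mean accuracy change per interaction cell.
print(df.groupby("cell")[["revised", "gain"]].mean())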
More generally, this suggests a micro-level behavioral effect of decision-support systems that is not domain-specific. From an economic perspective, the primary impact may lie less in average accuracy than in how such systems reshape learning, incentives, and the distribution of errors under uncertainty. If advice systematically alters confidence or reliance, it can change variance and tail risk even when mean performance improves, a dynamic likely to matter across finance, law, and other high-stakes decision contexts.
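A stylized simulation can illustrate this variance point; the parameters below are made up for illustration and are not estimates from the thesis. Decision-makers switch only when the advice conflicts with their initial call, with a switching propensity that varies across individuals: mean accuracy rises, yet conflict with incorrect advice flips some previously correct decisions, and heterogeneous reliance widens the spread of individual performance.

import numpy as np

rng = np.random.default_rng(0)
n_users, n_cases = 200, 500
p_human, p_ai = 0.60, 0.80                 # illustrative accuracies only

# Heterogeneous propensity to switch when advice conflicts with the initial call.
switch_prob = rng.uniform(0.2, 1.0, size=n_users)

unaided = rng.random((n_users, n_cases)) < p_human   # initial decision correct?
ai_ok = rng.random((n_users, n_cases)) < p_ai        # advice correct?
conflict = unaided != ai_ok                          # stylized: agreement iff both right or both wrong
switch = conflict & (rng.random((n_users, n_cases)) < switch_prob[:, None])

aided = np.where(switch, ai_ok, unaided)             # switching adopts the advice's correctness

print("mean accuracy   unaided %.3f  aided %.3f" % (unaided.mean(), aided.mean()))
print("user-level std  unaided %.3f  aided %.3f" % (unaided.mean(1).std(), aided.mean(1).std()))
print("share of cases flipped from correct to wrong by incorrect advice: %.3f" % (unaided & ~aided).mean())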
These findings indicate that clinical safety depends less on the accuracy of the AI system itself and more on the interaction between human and algorithmic judgments, specifically whether they agree or conflict, and how such agreement or disagreement shapes clinicians’ decisions. Effective governance will therefore require interaction designs that shape how people think with decision-support tools, introducing cognitive friction that prevents automatic agreement and supports deliberate reasoning in critical decisions.
author: Rosenbacke, Victor LU
supervisor:
organization: Department of Economics
course: NEKH02 20252
year: 2026
type: M2 - Bachelor Degree
subject:
keywords: Explainable AI (XAI), Clinical diagnosis, Decision uncertainty, Information asymmetry
language: English
id: 9221420
date added to LUP: 2026-02-04 08:22:19
date last changed: 2026-02-04 08:22:19
@misc{9221420,
  abstract     = {{Decision-support systems based on artificial intelligence (AI) are increasingly implemented in everyday societal contexts, including high-stakes domains such as finance, medicine, and law. While these systems play an expanding role in shaping important decisions under uncertainty, a fundamental question concerns how algorithmic advice affects the behavior of human decision-makers. Despite this, most AI systems are still primarily evaluated based on accuracy alone, rather than on how their outputs influence users’ understanding, reasoning, and responses in real-world decision-making settings. To understand these implications more precisely, it is necessary to examine how people actually revise their decisions when AI advice contradicts or confirms their initial judgment.
Human–AI diagnostic systems are typically evaluated by improvements in mean accuracy, yet little is known about how clinicians revise decisions when confronted with correct or incorrect model outputs. Using 4905 dermatologist decisions collected across unaided, AI-assisted, and explainable-AI (XAI) conditions, we identify a structural mechanism that governs human–AI collaboration. Decision revision was driven not by correctness but by the presence of conflict between clinician and model; this can be interpreted as conflict increasing information salience and triggering belief updating, whereas agreement suppresses posterior revision.
When the AI agreed with clinicians, whether correctly or incorrectly, revision was rare, producing strong cognitive confirmation inertia. When the AI disagreed, clinicians frequently changed their decisions, but with markedly different consequences: conflict with correct AI outputs produced substantial accuracy gains, whereas conflict with incorrect AI outputs produced large accuracy losses. Diagnostic expertise modulated these patterns only partially: high-performing clinicians benefited more from true conflict but remained vulnerable to misleading AI in false conflict, and none of the quartiles reliably detected errors in false confirmation. As a result, AI support increased performance variance, amplifying both corrections and high-consequence errors. These patterns reveal a mechanism that extends beyond clinical diagnosis, shaping how AI guidance alters human decision-making under uncertainty.
More generally, this suggests a micro-level behavioral effect of decision-support systems that is not domain-specific. From an economic perspective, the primary impact may lie less in average accuracy than in how such systems reshape learning, incentives, and the distribution of errors under uncertainty. If advice systematically alters confidence or reliance, it can change variance and tail risk even when mean performance improves, a dynamic likely to matter across finance, law, and other high-stakes decision contexts.
These findings indicate that clinical safety depends less on the accuracy of the AI system itself and more on the interaction between human and algorithmic judgments, specifically whether they agree or conflict, and how such agreement or disagreement shapes clinicians’ decisions. Effective governance will therefore require interaction designs that shape how people think with decision-support tools, introducing cognitive friction that prevents automatic agreement and supports deliberate reasoning in critical decisions.}},
  author       = {{Rosenbacke, Victor}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{Why Better Information Does Not Ensure Better Decisions: Evidence from AI-Supported Clinical Diagnosis}},
  year         = {{2026}},
}