Lund University Publications

LUND UNIVERSITY LIBRARIES

Making Sense of Medical AI : AI transparency and the configuration of expertise

Högberg, Charlotte (2025)
Abstract
Artificial Intelligence (AI) technologies are increasingly researched and applied for medical knowledge discovery and to support or automate clinical decision-making. The aim of this thesis is to increase knowledge of (1) how AI experts, radiologists, and standardizers make sense of AI in the processes of medical AI development, clinical use and standardization, and (2) how this sensemaking contributes to configurations of AI transparencies and expertise in sociotechnical entanglements. Specifically, I study the research questions: How are AI experts involved in developing AI for medical purposes, and medical professionals, making sense of medical AI? How is AI transparency made sense of in standardization? And how are AI transparencies made and expertise (re)configured in these processes and sociotechnical entanglements? In studying these questions, I focus on different actors’ practices and reasoning about: ground truthing and transparency in the development of medical AI, integrating and critically engaging with AI in clinical work, and the standardization of AI transparency.

Theoretically, this thesis is situated in the fields of Science and Technology Studies (STS), sociology, information science, communication studies and organization studies. An epistemological underpinning of this thesis is the entanglement of social actions and technological and material artefacts. This entails an understanding of the research topic as involving knowledge-making phenomena where the social and the technical, the human and the non-human, are co-constituted in sociotechnical assemblages. Empirically, the research is conducted in three studies using different methods. In the studies, different actors are engaged through: interviews and observations with AI experts working with AI development for medicine and healthcare, a survey study of breast radiologists’ views regarding the integration of AI in breast cancer screening, and a practice-oriented document analysis focusing on the standard-making of AI transparency. Overall, this thesis shows how medical AI is as much a sociotechnical matter as a technical or clinical endeavor. It highlights the complexity of making sense of AI, across different actors’ reasonings and practices and through different processes. Both the role of opacity-mitigating practices and the challenges of making AI transparent are made visible. Moreover, this thesis shows the importance of empirical insights and of stakeholder- and context-sensitive approaches to better understand how medical AI is made sense of and how expertise is reconfigured in the process.
Please use this url to cite or link to this publication:
author
supervisor
opponent
  • Assoc. Prof. Hojer Bruun, Maja, Aarhus University, Denmark.
organization
publishing date
type
Thesis
publication status
published
subject
keywords
Artificial intelligence, Medicine, Healthcare, Transparency, Expertise, Science and Technology Studies, STS, Machine learning, Information Studies, Medical Sociology, AI, Artificiell intelligens, maskininlärning, medicin, Hälso- och sjukvård, transparens, expertis, Vetenskapssociologi, Teknik, STS
pages
265 pages
publisher
Department of Technology and Society, Lund University
defense location
Lecture Hall E:1406, building E, Klas Anshelms väg 10, Faculty of Engineering LTH, Lund University, Lund.
defense date
2025-06-04 09:00:00
ISBN
9789181044935
9789181044942
project
AI in the Name of the Common Good - Relations of data, AI and humans in health and public sector
Mammography Screening with Artificial Intelligence
AIR Lund - Artificially Intelligent use of Registers
language
English
LU publication?
yes
id
a6bcbd84-cd84-4cdb-b905-4f3b4251f879
date added to LUP
2025-04-30 20:55:12
date last changed
2025-05-09 14:13:43
@phdthesis{a6bcbd84-cd84-4cdb-b905-4f3b4251f879,
  abstract     = {{Artificial Intelligence (AI) technologies are increasingly researched and applied for medical knowledge discovery and to support or automate clinical decision-making. The aim of this thesis is to increase knowledge of (1) how AI experts, radiologists, and standardizers make sense of AI in the processes of medical AI development, clinical use and standardization, and (2) how this sensemaking contributes to configurations of AI transparencies and expertise in sociotechnical entanglements. Specifically, I study the research questions: How are AI experts involved in developing AI for medical purposes, and medical professionals, making sense of medical AI? How is AI transparency made sense of in standardization? And how are AI transparencies made and expertise (re)configured in these processes and sociotechnical entanglements? In studying these questions, I focus on different actors’ practices and reasoning about: ground truthing and transparency in the development of medical AI, integrating and critically engaging with AI in clinical work, and the standardization of AI transparency. <br/><br/>Theoretically, this thesis is situated in the fields of Science and Technology Studies (STS), sociology, information science, communication studies and organization studies. An epistemological underpinning of this thesis is the entanglement of social actions and technological and material artefacts. This entails an understanding of the research topic as involving knowledge-making phenomena where the social and the technical, the human and the non-human, are co-constituted in sociotechnical assemblages. Empirically, the research is conducted in three studies using different methods. In the studies, different actors are engaged through: interviews and observations with AI experts working with AI development for medicine and healthcare, a survey study of breast radiologists’ views regarding the integration of AI in breast cancer screening, and a practice-oriented document analysis focusing on the standard-making of AI transparency. Overall, this thesis shows how medical AI is as much a sociotechnical matter as a technical or clinical endeavor. It highlights the complexity of making sense of AI, across different actors’ reasonings and practices and through different processes. Both the role of opacity-mitigating practices and the challenges of making AI transparent are made visible. Moreover, this thesis shows the importance of empirical insights and of stakeholder- and context-sensitive approaches to better understand how medical AI is made sense of and how expertise is reconfigured in the process.<br/>}},
  author       = {{Högberg, Charlotte}},
  isbn         = {{9789181044935}},
  keywords     = {{Artificial intelligence; Medicine; Healthcare; Transparency; Expertise; Science and Technology Studies; STS; Machine learning; Information Studies; Medical Sociology; AI; Artificiell intelligens; maskininlärning; medicin; Hälso- och sjukvård; transparens; expertis; Vetenskapssociologi; Teknik; STS}},
  language     = {{eng}},
  month        = {{05}},
  publisher    = {{Department of Technology and Society, Lund University}},
  school       = {{Lund University}},
  title        = {{Making Sense of Medical AI : AI transparency and the configuration of expertise}},
  url          = {{https://lup.lub.lu.se/search/files/218818086/Hoegberg_2025_Making_Sense_of_Medical_AI_Kappa_without_papers_pdf.pdf}},
  year         = {{2025}},
}