
LUP Student Papers

LUND UNIVERSITY LIBRARIES

User Perceptions of Trust in Generative AI for Healthcare Advice

Eliasson, Gustav and Gullström, Teo (2025) INFM10 20251
Department of Informatics
Abstract
With ongoing strains and disparities in the healthcare system, Generative AI applications are increasingly viewed as promising tools to alleviate pressure in this sector. Trust is seen as a critical enabler for the integration of GenAI tools. However, this area remains largely unexplored, with limited research examining user-perceived trust. This study addresses this gap by adopting a qualitative approach to explore user perceptions of trust in GenAI tools, specifically text-based LLMs, for providing healthcare advice. Drawing on semi-structured interviews with participants aged 18–34, this study uncovers the complexities of trust. Findings suggest that user-perceived trust is shaped by system attributes such as transparency, explainability, and responsiveness, as well as individual aspects such as prior experience, perceived control, and the nature of the health concern. While participants generally viewed LLM-based tools as accessible and useful, their trust was dynamic, influenced by context, familiarity, and the perceived reliability of the tool. These findings highlight the importance of educating users, transparent AI behaviour, and responsible integration into healthcare. As the utilisation of LLMs and GenAI continues to increase in sensitive domains, it becomes increasingly important to continue researching trust, to ensure a safe and controlled integration of these applications.
author
Eliasson, Gustav and Gullström, Teo
supervisor
organization
alternative title
A Qualitative Study Examining How Users Perceive the Trustworthiness of Generative AI Tools, Specifically LLMs, for Providing Healthcare Advice
course
INFM10 20251
year
type
H1 - Master's Degree (One Year)
subject
keywords
Generative AI, GenAI, Artificial Intelligence, AI, Large Language Model, LLM, Trust, Trustworthiness, Healthcare, Medical Care, User, Advice.
language
English
id
9202761
date added to LUP
2025-06-19 21:47:08
date last changed
2025-06-19 21:47:08
@misc{9202761,
  abstract     = {{With ongoing strains and disparities in the healthcare system, Generative AI applications are increasingly viewed as promising tools to alleviate pressure in this sector. Trust is seen as a critical enabler for the integration of GenAI tools. However, this area remains largely unexplored, with limited research examining user-perceived trust. This study addresses this gap by adopting a qualitative approach to explore user perceptions of trust in GenAI tools, specifically text-based LLMs, for providing healthcare advice. Drawing on semi-structured interviews with participants aged 18–34, this study uncovers the complexities of trust. Findings suggest that user-perceived trust is shaped by system attributes such as transparency, explainability, and responsiveness, as well as individual aspects such as prior experience, perceived control, and the nature of the health concern. While participants generally viewed LLM-based tools as accessible and useful, their trust was dynamic, influenced by context, familiarity, and the perceived reliability of the tool. These findings highlight the importance of educating users, transparent AI behaviour, and responsible integration into healthcare. As the utilisation of LLMs and GenAI continues to increase in sensitive domains, it becomes increasingly important to continue researching trust, to ensure a safe and controlled integration of these applications.}},
  author       = {{Eliasson, Gustav and Gullström, Teo}},
  language     = {{eng}},
  note         = {{Student Paper}},
  title        = {{User Perceptions of Trust in Generative AI for Healthcare Advice}},
  year         = {{2025}},
}