In Generative AI We Trust: Measuring the Potential for Deception in LLM-Generated Health Information Using Computational Content Analysis
(2025) SIMZ51 20251, Graduate School
- Abstract
- Misleading health information remains a central concern in medical sociology and public health due to its harmful effects on individuals and society. As health information-seeking increasingly shifts to digital platforms, Large Language Models (LLMs)—now commonly used as search engines—have intensified concerns about the spread of health misinformation. This study examines the potential for LLMs to deceive users seeking health-related information and advice, arguing that conventional accuracy-based evaluations are insufficient to assess the misinformation risks these systems pose. I have developed and operationalized a conceptual model of Human–LLM Interaction that situates the Potential for Deception (PoD) in AI-generated health responses within a framework of manufactured trustworthiness. This study simulates 204 patient-like interactions with Meta AI (LLaMA-3.1 70B), using controlled variations in prompt style to test how user inputs influence PoD in responses. Meta AI’s outputs were analyzed using Computational Content Analysis (CCA) across a set of linguistic and semantic indicators reflecting two key components of PoD: Personalized Framing and Epistemic Opacity. Results show that LLaMA’s Potential for Deception is positively associated with hedging, technical jargon, poor readability, and, to a lesser extent, emotional tone and alignment expressions. Prompts requesting information elicit significantly more PoD in responses than advice-seeking ones, while positive health assertions in prompts have a marginal effect under certain conditions. Out of the 204 LLaMA responses, 110 (53.9%) show moderate PoD, 36 (17.6%) high, and 16 (7.8%) very high levels of Potential for Deception. These findings highlight that the risks of health misinformation from LLMs are not merely a function of factual inaccuracy, but arise from the ways in which language models simulate human-like expertise and thoughtfulness, which can mislead users. Meta AI’s integration into widely used platforms such as Facebook and WhatsApp—giving it the largest active user base among AI chatbots—makes the assessment of its misinformation risks urgent.
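To make the abstract's indicator-based approach concrete, the sketch below shows how linguistic markers such as hedging, technical jargon, and readability could be scored computationally for a single LLM response. It is an illustrative Python example only: the hedge and jargon word lists, the function names, and the Flesch-style readability heuristic are assumptions made for demonstration, not the operationalization or code used in the thesis.

```python
# Illustrative sketch (not the thesis's pipeline): score one LLM response on
# three of the linguistic indicators named in the abstract -- hedging,
# technical jargon, and readability -- using only the standard library.
import re

# Assumed, minimal lexicons; the study's own indicator dictionaries are not shown here.
HEDGE_TERMS = {"may", "might", "could", "possibly", "perhaps", "likely", "suggests", "appears"}
JARGON_TERMS = {"etiology", "prognosis", "contraindicated", "idiopathic", "comorbidity"}

def _words(text: str) -> list[str]:
    return re.findall(r"[A-Za-z']+", text.lower())

def _sentences(text: str) -> list[str]:
    return [s for s in re.split(r"[.!?]+", text) if s.strip()]

def _syllables(word: str) -> int:
    # Crude vowel-group heuristic; dedicated readability tools count syllables more carefully.
    return max(1, len(re.findall(r"[aeiouy]+", word)))

def flesch_reading_ease(text: str) -> float:
    words, sents = _words(text), _sentences(text)
    if not words or not sents:
        return 0.0
    asl = len(words) / len(sents)                          # average sentence length
    asw = sum(_syllables(w) for w in words) / len(words)   # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

def indicator_scores(response: str) -> dict[str, float]:
    words = _words(response)
    n = max(1, len(words))
    return {
        "hedging_rate": sum(w in HEDGE_TERMS for w in words) / n,
        "jargon_rate": sum(w in JARGON_TERMS for w in words) / n,
        "reading_ease": flesch_reading_ease(response),  # lower values = harder to read
    }

if __name__ == "__main__":
    sample = ("Your symptoms could possibly indicate an idiopathic condition; "
              "the prognosis may depend on comorbidity factors.")
    print(indicator_scores(sample))
```

In the study itself, indicator scores of this kind feed a composite Potential for Deception measure (reported as moderate, high, and very high levels); the sketch stops at the indicator level.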
Please use this url to cite or link to this publication:
http://lup.lub.lu.se/student-papers/record/9200826
- author
- Cardona, Melissa
- supervisor
- organization
- Graduate School
- course
- SIMZ51 20251
- year
- 2025
- type
- H2 - Master's Degree (Two Years)
- subject
- keywords
- Trust, AI Trustworthiness, Human-Machine Interaction, LLM, Generative AI, Deception, Health Misinformation, Content Analysis, NLP.
- language
- English
- id
- 9200826
- date added to LUP
- 2025-06-25 11:19:39
- date last changed
- 2025-06-25 11:19:39
@misc{9200826,
  abstract = {{Misleading health information remains a central concern in medical sociology and public health due to its harmful effects on individuals and society. As health information-seeking increasingly shifts to digital platforms, Large Language Models (LLMs)—now commonly used as search engines—have intensified concerns about the spread of health misinformation. This study examines the potential for LLMs to deceive users seeking health-related information and advice, arguing that conventional accuracy-based evaluations are insufficient to assess the misinformation risks these systems pose. I have developed and operationalized a conceptual model of Human–LLM Interaction that situates the Potential for Deception (PoD) in AI-generated health responses within a framework of manufactured trustworthiness. This study simulates 204 patient-like interactions with Meta AI (LLaMA-3.1 70B), using controlled variations in prompt style to test how user inputs influence PoD in responses. Meta AI’s outputs were analyzed using Computational Content Analysis (CCA) across a set of linguistic and semantic indicators reflecting two key components of PoD: Personalized Framing and Epistemic Opacity. Results show that LLaMA’s Potential for Deception is positively associated with hedging, technical jargon, poor readability, and, to a lesser extent, emotional tone and alignment expressions. Prompts requesting information elicit significantly more PoD in responses than advice-seeking ones, while positive health assertions in prompts have a marginal effect under certain conditions. Out of the 204 LLaMA responses, 110 (53.9%) show moderate PoD, 36 (17.6%) high, and 16 (7.8%) very high levels of Potential for Deception. These findings highlight that the risks of health misinformation from LLMs are not merely a function of factual inaccuracy, but arise from the ways in which language models simulate human-like expertise and thoughtfulness, which can mislead users. Meta AI’s integration into widely used platforms such as Facebook and WhatsApp—giving it the largest active user base among AI chatbots—makes the assessment of its misinformation risks urgent.}},
  author = {{Cardona, Melissa}},
  language = {{eng}},
  note = {{Student Paper}},
  title = {{In Generative AI We Trust: Measuring the Potential for Deception in LLM-Generated Health Information Using Computational Content Analysis}},
  year = {{2025}},
}