Skip to main content

Lund University Publications

LUND UNIVERSITY LIBRARIES

Generative Artificial Intelligence and Cybersecurity Risks : Implications for Healthcare Security Based on Real-life Incidents

Sallam, Malik LU ; Al-Mahzoum, Kholoud and Sallam, Mohammed (2024) In Mesopotamian Journal of Artificial Intelligence in Healthcare 2024. p.184-203
Abstract

Background: The potential of generative artificial intelligence (genAI) tools, such as ChatGPT, is being increasingly explored in healthcare settings. However, the same tools also introduce significant cybersecurity risks that could compromise patient safety, data integrity, and institutional trust. This study aimed to examine real-world security breaches involving genAI and extrapolate their potential implications for healthcare settings. Methods: Using a systematic Google News search and a consensus-based approach among the authors, five high-profile genAI breaches were identified and analyzed. These cases included: (1) Data exposure in ChatGPT (OpenAI) due to an open-source library bug (March 2023); (2) Unauthorized data disclosure... (More)

Background: The potential of generative artificial intelligence (genAI) tools, such as ChatGPT, is being increasingly explored in healthcare settings. However, the same tools also introduce significant cybersecurity risks that could compromise patient safety, data integrity, and institutional trust. This study aimed to examine real-world security breaches involving genAI and extrapolate their potential implications for healthcare settings. Methods: Using a systematic Google News search and a consensus-based approach among the authors, five high-profile genAI breaches were identified and analyzed. These cases included: (1) Data exposure in ChatGPT (OpenAI) due to an open-source library bug (March 2023); (2) Unauthorized data disclosure via Samsung’s (Samsung Group) use of ChatGPT (2023); (3) Logical vulnerabilities in Chevrolet (General Motors) AI-powered chatbot resulting in pricing errors (December 2023); (4) Prompt injection vulnerability in Vanna AI (Vanna AI, Inc.) which enabled remote code execution (2024); and (5) the deepfake technology used in a scam targeting the engineering firm Arup (Arup Group Limited), leading to fraudulent transactions (February 2024). Hypothetical healthcare scenarios were constructed based on the five cases, mapping their mechanisms to vulnerabilities in electronic health records (EHRs), clinical decision support systems (CDSS), and patient engagement platforms. Each case was analyzed using the Confidentiality, Integrity, and Availability (CIA) triad of information security to systematically identify vulnerabilities and propose actionable safeguards. Results: The analyzed cases of AI security breaches revealed significant risks to healthcare systems. Confidentiality violations included the potential exposure of sensitive patient records and billing information, extrapolated from incidents such as the ChatGPT data exposure and Samsung’s cases. These identified security breaches raised concerns about privacy violations, identity theft, and non-compliance with regulations such as Health Insurance Portability and Accountability Act (HIPAA). Integrity vulnerabilities were highlighted in Vanna AI's prompt injection flaw incident, with risks of altering patient records, compromising diagnostic algorithms, and misleading CDSS with erroneous recommendations. Similarly, logic errors identified in the Chevrolet case exposed potential risks of inaccurate billing, double-booked appointments, and flawed treatment plans within healthcare contexts. Availability disruptions, observed through system outages and operational suspensions following breaches like the ChatGPT and deepfake cases, can delay access to EHR systems or AI-driven CDSS. Such interruptions would directly impact patient care and create inefficiencies in administrative workflows. Conclusions: Generative AI presents a double-edged sword in healthcare, with transformative potential accompanied by substantial risks. Extrapolation of security breach cases in this study highlighted the urgent need for robust safeguards if genAI is implemented in healthcare settings. To address these vulnerabilities, healthcare institutions must implement strong security protocols, enforce strict data governance, and create AI-specific incident response plans. The balance between genAI-enabled innovation and protection of patient safety and data integrity trust requires proactive safety measures.

(Less)
Please use this url to cite or link to this publication:
author
; and
organization
publishing date
type
Contribution to journal
publication status
published
subject
keywords
Cybersecurity, Health care, Privacy, Risk, Threat
in
Mesopotamian Journal of Artificial Intelligence in Healthcare
volume
2024
pages
20 pages
publisher
Mesopotamian Academic Press
external identifiers
  • scopus:105020297769
ISSN
3005-365X
DOI
10.58496/MJAIH/2024/019
language
English
LU publication?
yes
additional info
Publisher Copyright: © 2024, Mesopotamian Academic Press. All rights reserved.
id
22eea632-307c-488b-9b48-e2467a59992f
date added to LUP
2026-01-27 13:30:51
date last changed
2026-01-27 13:32:04
@article{22eea632-307c-488b-9b48-e2467a59992f,
  abstract     = {{<p>Background: The potential of generative artificial intelligence (genAI) tools, such as ChatGPT, is being increasingly explored in healthcare settings. However, the same tools also introduce significant cybersecurity risks that could compromise patient safety, data integrity, and institutional trust. This study aimed to examine real-world security breaches involving genAI and extrapolate their potential implications for healthcare settings. Methods: Using a systematic Google News search and a consensus-based approach among the authors, five high-profile genAI breaches were identified and analyzed. These cases included: (1) Data exposure in ChatGPT (OpenAI) due to an open-source library bug (March 2023); (2) Unauthorized data disclosure via Samsung’s (Samsung Group) use of ChatGPT (2023); (3) Logical vulnerabilities in Chevrolet (General Motors) AI-powered chatbot resulting in pricing errors (December 2023); (4) Prompt injection vulnerability in Vanna AI (Vanna AI, Inc.) which enabled remote code execution (2024); and (5) the deepfake technology used in a scam targeting the engineering firm Arup (Arup Group Limited), leading to fraudulent transactions (February 2024). Hypothetical healthcare scenarios were constructed based on the five cases, mapping their mechanisms to vulnerabilities in electronic health records (EHRs), clinical decision support systems (CDSS), and patient engagement platforms. Each case was analyzed using the Confidentiality, Integrity, and Availability (CIA) triad of information security to systematically identify vulnerabilities and propose actionable safeguards. Results: The analyzed cases of AI security breaches revealed significant risks to healthcare systems. Confidentiality violations included the potential exposure of sensitive patient records and billing information, extrapolated from incidents such as the ChatGPT data exposure and Samsung’s cases. These identified security breaches raised concerns about privacy violations, identity theft, and non-compliance with regulations such as Health Insurance Portability and Accountability Act (HIPAA). Integrity vulnerabilities were highlighted in Vanna AI's prompt injection flaw incident, with risks of altering patient records, compromising diagnostic algorithms, and misleading CDSS with erroneous recommendations. Similarly, logic errors identified in the Chevrolet case exposed potential risks of inaccurate billing, double-booked appointments, and flawed treatment plans within healthcare contexts. Availability disruptions, observed through system outages and operational suspensions following breaches like the ChatGPT and deepfake cases, can delay access to EHR systems or AI-driven CDSS. Such interruptions would directly impact patient care and create inefficiencies in administrative workflows. Conclusions: Generative AI presents a double-edged sword in healthcare, with transformative potential accompanied by substantial risks. Extrapolation of security breach cases in this study highlighted the urgent need for robust safeguards if genAI is implemented in healthcare settings. To address these vulnerabilities, healthcare institutions must implement strong security protocols, enforce strict data governance, and create AI-specific incident response plans. The balance between genAI-enabled innovation and protection of patient safety and data integrity trust requires proactive safety measures.</p>}},
  author       = {{Sallam, Malik and Al-Mahzoum, Kholoud and Sallam, Mohammed}},
  issn         = {{3005-365X}},
  keywords     = {{Cybersecurity; Health care; Privacy; Risk; Threat}},
  language     = {{eng}},
  month        = {{12}},
  pages        = {{184--203}},
  publisher    = {{Mesopotamian Academic Press}},
  series       = {{Mesopotamian Journal of Artificial Intelligence in Healthcare}},
  title        = {{Generative Artificial Intelligence and Cybersecurity Risks : Implications for Healthcare Security Based on Real-life Incidents}},
  url          = {{http://dx.doi.org/10.58496/MJAIH/2024/019}},
  doi          = {{10.58496/MJAIH/2024/019}},
  volume       = {{2024}},
  year         = {{2024}},
}