LUP Student Papers

LUND UNIVERSITY LIBRARIES

AI-verktyg för journalföring - Klassificering och rättsliga konsekvenser enligt AI-förordningen

Strempel, Rebecca LU (2025) JURM02 20252
Department of Law
Faculty of Law
Abstract
In recent years, artificial intelligence has gained increasing importance within the healthcare sector. One emerging application is AI medical scribes, which automate parts of administrative work by converting speech to text, structuring clinical notes, and in some cases suggesting diagnoses or referrals. These tools have the potential to improve the efficiency of healthcare professionals, but at the same time raise complex legal questions. With the new EU Artificial Intelligence Act (2024/1689) (the AI Act), these systems will be classified according to their level of risk, and this assessment will have major implications for both providers and deployers of AI tools, including those used in healthcare.

The aim of this thesis is to examine how AI medical scribes should be classified under the AI Act and what legal consequences this entails for providers of these tools and healthcare providers that use them. To answer this question, an EU legal dogmatic method is applied, supplemented by elements of legal analysis.

The AI Act divides AI systems into four risk categories (unacceptable, high, limited, and low risk) with regulatory requirements increasing significantly with the risk level. The thesis shows that for AI systems in healthcare, the decisive boundary lies between high and low risk. AI medical scribes that qualify as medical devices under the MDR are automatically classified as high-risk. A high-risk classification entails extensive requirements regarding documentation, risk management, data governance, human oversight, and CE marking, whereas low-risk systems are essentially left unregulated. This substantial difference in regulation has major practical and economic implications.
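
To make the regulatory gap concrete, the sketch below contrasts the two obligation sets the abstract describes. It is an illustrative summary, not a legal test: the obligation labels follow the abstract's wording, and the article references to Regulation (EU) 2024/1689 are added here for orientation only and are not cited in the abstract itself.

# Illustrative summary of the difference in regulatory burden between
# the two tiers that matter for healthcare AI, per the abstract above.
# Article references (Regulation (EU) 2024/1689) are orientation only.
OBLIGATIONS = {
    "high": [
        "risk management system (cf. Art. 9)",
        "data and data governance (cf. Art. 10)",
        "technical documentation (cf. Art. 11)",
        "human oversight (cf. Art. 14)",
        "conformity assessment and CE marking (cf. Arts. 43 and 48)",
    ],
    # Low-risk systems are essentially left unregulated; at most,
    # voluntary codes of conduct (cf. Art. 95) may apply.
    "low": [],
}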

The classification is less clear, however, for AI medical scribes that do not qualify as medical devices and operate in the grey zone between administrative and clinical support. The ambiguity arises because the high-risk category extends beyond MDR products to AI systems intended to evaluate eligibility for healthcare services. At the same time, certain simpler systems, such as those performing only narrow and procedural tasks, may be exempted, which complicates the classification boundary. The analysis shows that simpler medical scribes that merely perform narrow procedural tasks, such as transcription, are likely to be exempt and classified as low risk, while more advanced systems that influence clinical decision-making are likely to be considered high risk. Where the boundary lies remains uncertain, which is problematic given the substantial differences in regulatory obligations. This uncertainty, in turn, risks discouraging innovation and technological development within healthcare.
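
The decision logic just described can be captured in a minimal sketch. It assumes three yes/no attributes of a scribe system; the attribute names, the Scribe type, and the classify function are hypothetical simplifications of Article 6(1), Annex III, and the Article 6(3) derogation, not an operative legal test.

from dataclasses import dataclass
from enum import Enum

class RiskTier(Enum):
    HIGH = "high"
    LOW = "low"

@dataclass
class Scribe:
    mdr_medical_device: bool          # qualifies as a medical device under the MDR
    evaluates_care_eligibility: bool  # intended to evaluate eligibility for healthcare services
    narrow_procedural_only: bool      # e.g. pure speech-to-text transcription

def classify(s: Scribe) -> RiskTier:
    # MDR medical devices subject to third-party conformity assessment
    # are automatically high-risk (cf. Art. 6(1)).
    if s.mdr_medical_device:
        return RiskTier.HIGH
    # Systems intended to evaluate eligibility for healthcare services
    # fall under the high-risk list (cf. Annex III) ...
    if s.evaluates_care_eligibility:
        # ... unless they only perform a narrow procedural task and do not
        # materially influence the outcome (cf. the Art. 6(3) derogation).
        if s.narrow_procedural_only:
            return RiskTier.LOW
        return RiskTier.HIGH
    return RiskTier.LOW

# Under this sketch, a pure transcription tool comes out low-risk, while a
# scribe whose output feeds eligibility decisions comes out high-risk:
print(classify(Scribe(False, False, True)))   # RiskTier.LOW
print(classify(Scribe(False, True, False)))   # RiskTier.HIGH

The grey zone the thesis identifies corresponds to systems where narrow_procedural_only is contestable, which is precisely where the boundary remains uncertain.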

In conclusion, the thesis finds that the AI Act establishes a necessary structure for safeguarding patient safety and strengthening trust in AI within healthcare, but that the current legal situation remains marked by significant uncertainty. A key future challenge will be to develop clearer guidance, harmonised standards, and a more predictable application of the AI Act within the healthcare sector, allowing patient safety and innovation to be promoted in parallel.
Abstract (Swedish)
In recent years, artificial intelligence has become increasingly significant within healthcare. One emerging application is AI medical scribes, which automate parts of the administrative work by converting speech to text, structuring notes, and in some cases suggesting diagnoses or referrals. These tools can make the work of healthcare staff more efficient, but at the same time raise complex legal questions. With the EU's new AI Act (2024/1689), these systems will be classified by risk level, and that assessment has major consequences for both providers and healthcare providers.

The aim of the thesis is to examine how AI medical scribes should be classified under the AI Act and what legal consequences this has for providers and healthcare providers. To answer the research question, an EU legal dogmatic method with elements of a legal analytical method is used.

The AI Act divides AI systems into four risk levels (unacceptable, high, limited, and low risk), with the requirements increasing with the risk level. The thesis shows that, for AI systems in healthcare, it is above all the boundary between high and low risk that is decisive. AI medical scribes that are medical devices under the MDR are automatically classified as high risk. A high-risk classification entails extensive requirements on documentation, risk management, data governance, human oversight, and CE marking, while low-risk systems are in principle left unregulated. The difference between these levels therefore has major practical and economic consequences.

The classification question is, however, unclear for AI medical scribes that are not medical devices and sit on the boundary between administrative and clinical support. This is because the high-risk classification also covers other AI systems intended to assess access to care. At the same time, certain simpler systems, for example those intended only to perform narrow and procedural tasks, can be exempted, which creates a complicated line-drawing exercise. The analysis shows that simpler scribe tools that only transcribe speech can therefore likely be exempted and classified as low risk, while more advanced systems that influence the medical basis for decision-making will likely be classified as high risk. Where the line runs is, however, unclear, which is problematic given the large differences in regulation. This uncertainty in turn risks hampering innovation and technological development in healthcare.

In summary, the thesis shows that the AI Act creates a necessary structure for protecting patient safety and strengthening trust in AI in healthcare, but that the legal situation is still marked by considerable uncertainty. A future challenge will therefore be to develop clearer guidance, harmonised standards, and a more predictable application of the regulation within the healthcare sector, so that patient safety and innovation can be promoted in parallel.
Please use this url to cite or link to this publication:
author: Strempel, Rebecca (LU)
alternative title: AI Medical Scribes - Classification and Legal Consequences under the EU AI Act
course: JURM02 20252
year: 2025
type: H3 - Professional qualifications (4 Years - )
keywords: the AI Act (AI-förordningen), EU law, healthcare, medical law, artificial intelligence, AI, technology, risk classification, high-risk AI, AI medical scribes (AI-verktyg för journalföring), medical devices, medical device software, MDR, Medical Device Regulation
language: Swedish
id: 9216193
date added to LUP: 2026-01-15 13:13:05
date last changed: 2026-01-15 13:13:05
@misc{9216193,
  author       = {{Strempel, Rebecca}},
  language     = {{swe}},
  note         = {{Student Paper}},
  title        = {{AI-verktyg för journalföring - Klassificering och rättsliga konsekvenser enligt AI-förordningen}},
  year         = {{2025}},
}