Lund University Publications

Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography: Comparison With 101 Radiologists

Rodriguez-Ruiz, Alejandro; Lång, Kristina; Gubern-Merida, Albert; Broeders, Mireille; Gennaro, Gisella; Clauser, Paola; Helbich, Thomas H; Chevalier, Margarita; Tan, Tao; Mertelmeier, Thomas; Wallis, Matthew G; Andersson, Ingvar; Zackrisson, Sophia; Mann, Ritse M and Sechopoulos, Ioannis (2019) In Journal of the National Cancer Institute 111(9). p.916-922
Abstract

BACKGROUND: Artificial intelligence (AI) systems performing at radiologist-like levels in the evaluation of digital mammography (DM) would improve breast cancer screening accuracy and efficiency. We aimed to compare the stand-alone performance of an AI system to that of radiologists in detecting breast cancer in DM.

METHODS: Nine multi-reader, multi-case study datasets previously used for different research purposes in seven countries were collected. Each dataset consisted of DM exams acquired with systems from four different vendors, multiple radiologists' assessments per exam, and ground truth verified by histopathological analysis or follow-up, yielding a total of 2652 exams (653 malignant) and interpretations by 101 radiologists (28 296 independent interpretations). An AI system analyzed these exams yielding a level of suspicion of cancer present between 1 and 10. The detection performance between the radiologists and the AI system was compared using a noninferiority null hypothesis at a margin of 0.05.
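The performance metric used throughout is the area under the ROC curve (AUC), which for a 1-10 suspicion score equals the probability that a randomly chosen malignant exam receives a higher score than a randomly chosen benign one (the Mann-Whitney formulation). A minimal sketch of that computation, on synthetic labels and scores rather than the study data:

```python
# AUC via the Mann-Whitney formulation: the fraction of (malignant, benign)
# pairs in which the malignant exam gets the higher suspicion score,
# counting ties as half. Synthetic toy data, not the study data.
def auc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

y = [0, 0, 0, 1, 1, 0, 1, 1]   # ground truth: 1 = malignant
s = [2, 1, 5, 8, 6, 3, 9, 4]   # hypothetical AI level of suspicion, 1-10
print(auc(y, s))               # 0.9375
```

With only ten discrete score levels, ties are common in practice, which is why the half-credit term matters for an unbiased estimate.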

RESULTS: The performance of the AI system was statistically noninferior to that of the average of the 101 radiologists. The AI system had a 0.840 (95% confidence interval [CI] = 0.820 to 0.860) area under the ROC curve and the average of the radiologists was 0.814 (95% CI = 0.787 to 0.841) (difference 95% CI = -0.003 to 0.055). The AI system had an AUC higher than 61.4% of the radiologists.
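The noninferiority conclusion follows directly from comparing the lower bound of the AUC-difference confidence interval with the prespecified margin; a minimal arithmetic sketch using only the summary values quoted in the abstract (not the underlying study data):

```python
# Noninferiority check on the AUC difference (AI minus average radiologist),
# using only the summary statistics reported in the abstract.
auc_ai = 0.840                              # AI system AUC
auc_readers = 0.814                         # average radiologist AUC
margin = 0.05                               # prespecified noninferiority margin
diff_ci_low, diff_ci_high = -0.003, 0.055   # 95% CI of the AUC difference

# Noninferior means the whole CI lies above -margin: even the worst
# plausible deficit of the AI relative to the readers stays below 0.05.
noninferior = diff_ci_low > -margin
print(round(auc_ai - auc_readers, 3), noninferior)   # 0.026 True
```

Note that because the interval also includes zero, the result supports noninferiority but not superiority of the AI system.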

CONCLUSIONS: The evaluated AI system achieved a cancer detection accuracy comparable to an average breast radiologist in this retrospective setting. Although promising, the performance and impact of such a system in a screening setting needs further investigation.

author
Rodriguez-Ruiz, Alejandro; Lång, Kristina; Gubern-Merida, Albert; Broeders, Mireille; Gennaro, Gisella; Clauser, Paola; Helbich, Thomas H; Chevalier, Margarita; Tan, Tao; Mertelmeier, Thomas; Wallis, Matthew G; Andersson, Ingvar; Zackrisson, Sophia; Mann, Ritse M and Sechopoulos, Ioannis
organization
publishing date
2019-03
type
Contribution to journal
publication status
published
subject
in
Journal of the National Cancer Institute
volume
111
issue
9
pages
916 - 922
publisher
Oxford University Press
external identifiers
  • scopus:85064590651
  • pmid:30834436
ISSN
1460-2105
DOI
10.1093/jnci/djy222
language
English
LU publication?
yes
additional info
© The Author(s) 2019. Published by Oxford University Press. All rights reserved. For permissions, please email: journals.permissions@oup.com.
id
e7d79172-a355-444c-9eae-36958b3648f3
date added to LUP
2019-04-06 19:11:56
date last changed
2024-04-16 03:03:12
@article{e7d79172-a355-444c-9eae-36958b3648f3,
  abstract     = {{<p>BACKGROUND: Artificial intelligence (AI) systems performing at radiologist-like levels in the evaluation of digital mammography (DM) would improve breast cancer screening accuracy and efficiency. We aimed to compare the stand-alone performance of an AI system to that of radiologists in detecting breast cancer in DM.</p><p>METHODS: Nine multi-reader, multi-case study datasets previously used for different research purposes in seven countries were collected. Each dataset consisted of DM exams acquired with systems from four different vendors, multiple radiologists' assessments per exam, and ground truth verified by histopathological analysis or follow-up, yielding a total of 2652 exams (653 malignant) and interpretations by 101 radiologists (28 296 independent interpretations). An AI system analyzed these exams yielding a level of suspicion of cancer present between 1 and 10. The detection performance between the radiologists and the AI system was compared using a noninferiority null hypothesis at a margin of 0.05.</p><p>RESULTS: The performance of the AI system was statistically noninferior to that of the average of the 101 radiologists. The AI system had a 0.840 (95% confidence interval [CI] = 0.820 to 0.860) area under the ROC curve and the average of the radiologists was 0.814 (95% CI = 0.787 to 0.841) (difference 95% CI = -0.003 to 0.055). The AI system had an AUC higher than 61.4% of the radiologists.</p><p>CONCLUSIONS: The evaluated AI system achieved a cancer detection accuracy comparable to an average breast radiologist in this retrospective setting. Although promising, the performance and impact of such a system in a screening setting needs further investigation.</p>}},
  author       = {{Rodriguez-Ruiz, Alejandro and Lång, Kristina and Gubern-Merida, Albert and Broeders, Mireille and Gennaro, Gisella and Clauser, Paola and Helbich, Thomas H and Chevalier, Margarita and Tan, Tao and Mertelmeier, Thomas and Wallis, Matthew G and Andersson, Ingvar and Zackrisson, Sophia and Mann, Ritse M and Sechopoulos, Ioannis}},
  issn         = {{1460-2105}},
  language     = {{eng}},
  month        = {{03}},
  number       = {{9}},
  pages        = {{916--922}},
  publisher    = {{Oxford University Press}},
  series       = {{Journal of the National Cancer Institute}},
  title        = {{Stand-Alone Artificial Intelligence for Breast Cancer Detection in Mammography : Comparison With 101 Radiologists}},
  url          = {{http://dx.doi.org/10.1093/jnci/djy222}},
  doi          = {{10.1093/jnci/djy222}},
  volume       = {{111}},
  year         = {{2019}},
}