
Lund University Publications

LUND UNIVERSITY LIBRARIES

Generalizable deep learning framework for 3D medical image segmentation using limited training data

Ekman, Tobias; Barakat, Arthur and Heiberg, Einar (2025) In 3D Printing in Medicine 11(1).
Abstract

Medical image segmentation is a critical component in a wide range of clinical applications, enabling the identification and delineation of anatomical structures. This study focuses on segmentation of anatomical structures for 3D printing, virtual surgery planning, and advanced visualization such as virtual or augmented reality. Manual segmentation methods are labor-intensive and can be subjective, leading to inter-observer variability. Machine learning algorithms, particularly deep learning models, have gained traction for automating the process and are now considered state-of-the-art. However, deep-learning methods typically demand large datasets for fine-tuning and powerful graphics cards, limiting their applicability in resource-constrained settings. In this paper we introduce a robust deep learning framework for 3D medical segmentation that achieves high performance across a range of medical segmentation tasks, even when trained on a small number of subjects. This approach overcomes the need for extensive data and heavy GPU resources, facilitating adoption within healthcare systems. The potential is exemplified through six different clinical applications involving orthopedics, orbital segmentation, mandible CT, cardiac CT, fetal MRI and lung CT. Notably, a small set of hyper-parameters and augmentation settings produced segmentations with an average Dice score of 92% (SD = ±0.06) across a diverse range of organs and tissues.
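For reference, the Dice score reported in the abstract measures voxel-wise overlap between a predicted and a reference segmentation: Dice = 2·|A ∩ B| / (|A| + |B|). A minimal NumPy sketch of this metric follows; the `dice_score` helper and the toy volumes are illustrative only, not the authors' implementation.

```python
import numpy as np

def dice_score(pred, target):
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |A intersect B| / (|A| + |B|).
    By convention, two empty masks score 1.0.
    """
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    total = pred.sum() + target.sum()
    if total == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, target).sum() / total

# Two toy 3D volumes standing in for voxel masks of a segmented organ.
a = np.zeros((4, 4, 4), dtype=bool)
b = np.zeros((4, 4, 4), dtype=bool)
a[1:3, 1:3, 1:3] = True   # 8 voxels
b[1:3, 1:3, 1:4] = True   # 12 voxels, 8 of which overlap with a
print(dice_score(a, b))   # 2*8 / (8 + 12) = 0.8
```

A perfect segmentation scores 1.0, no overlap scores 0.0; the paper's reported average of 0.92 across six applications sits near the high end of this scale.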

author
Ekman, Tobias; Barakat, Arthur and Heiberg, Einar
organization
publishing date
2025
type
Contribution to journal
publication status
published
subject
keywords
3D printing, Artificial intelligence, Deep learning, Machine learning, Segmentation
in
3D Printing in Medicine
volume
11
issue
1
article number
9
publisher
BioMed Central (BMC)
external identifiers
  • pmid:40045095
  • scopus:86000060803
DOI
10.1186/s41205-025-00254-1
language
English
LU publication?
yes
id
232d3d9f-cc62-48e9-bdc1-b1004260a974
date added to LUP
2025-06-09 11:21:24
date last changed
2025-07-07 14:15:42
@article{232d3d9f-cc62-48e9-bdc1-b1004260a974,
  abstract     = {{<p>Medical image segmentation is a critical component in a wide range of clinical applications, enabling the identification and delineation of anatomical structures. This study focuses on segmentation of anatomical structures for 3D printing, virtual surgery planning, and advanced visualization such as virtual or augmented reality. Manual segmentation methods are labor-intensive and can be subjective, leading to inter-observer variability. Machine learning algorithms, particularly deep learning models, have gained traction for automating the process and are now considered state-of-the-art. However, deep-learning methods typically demand large datasets for fine-tuning and powerful graphics cards, limiting their applicability in resource-constrained settings. In this paper we introduce a robust deep learning framework for 3D medical segmentation that achieves high performance across a range of medical segmentation tasks, even when trained on a small number of subjects. This approach overcomes the need for extensive data and heavy GPU resources, facilitating adoption within healthcare systems. The potential is exemplified through six different clinical applications involving orthopedics, orbital segmentation, mandible CT, cardiac CT, fetal MRI and lung CT. Notably, a small set of hyper-parameters and augmentation settings produced segmentations with an average Dice score of 92% (SD = ±0.06) across a diverse range of organs and tissues.</p>}},
  author       = {{Ekman, Tobias and Barakat, Arthur and Heiberg, Einar}},
  keywords     = {{3D printing; Artificial intelligence; Deep learning; Machine learning; Segmentation}},
  language     = {{eng}},
  number       = {{1}},
  publisher    = {{BioMed Central (BMC)}},
  series       = {{3D Printing in Medicine}},
  title        = {{Generalizable deep learning framework for 3D medical image segmentation using limited training data}},
  url          = {{http://dx.doi.org/10.1186/s41205-025-00254-1}},
  doi          = {{10.1186/s41205-025-00254-1}},
  volume       = {{11}},
  year         = {{2025}},
}