Lund University Publications

Transductive Image Segmentation: Self-training and Effect of Uncertainty Estimation

Kamnitsas, Konstantinos ; Winzeck, Stefan ; Kornaropoulos, Evgenios N. LU ; Whitehouse, Daniel ; Englman, Cameron ; Phyu, Poe ; Pao, Norman ; Menon, David K. ; Rueckert, Daniel and Das, Tilak , et al. (2021) 3rd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2021, and the 1st MICCAI Workshop on Affordable Healthcare and AI for Resource Diverse Global Health, FAIR 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021 In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) 12968 LNCS. p.79-89
Abstract

Semi-supervised learning (SSL) uses unlabeled data during training to learn better models. Previous studies on SSL for medical image segmentation focused mostly on improving model generalization to unseen data. In some applications, however, our primary interest is not generalization but to obtain optimal predictions on a specific unlabeled database that is fully available during model development. Examples include population studies for extracting imaging phenotypes. This work investigates an often overlooked aspect of SSL, transduction. It focuses on the quality of predictions made on the unlabeled data of interest when they are included for optimization during training, rather than improving generalization. We focus on the self-training framework and explore its potential for transduction. We analyze it through the lens of Information Gain and reveal that learning benefits from the use of calibrated or under-confident models. Our extensive experiments on a large MRI database for multi-class segmentation of traumatic brain lesions show promising results when comparing transductive with inductive predictions. We believe this study will inspire further research on transductive learning, a well-suited paradigm for medical image analysis.

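The abstract describes the approach only at a high level: train a segmentation model on the labelled data, pseudo-label the specific unlabelled target database that is fully available during development, and retrain on both, with the paper's Information Gain analysis suggesting that calibrated or under-confident pseudo-label distributions are preferable to over-confident ones. The sketch below is a minimal, illustrative PyTorch rendering of such a transductive self-training loop under those assumptions; it is not the authors' implementation, and the names (TinyUNet, make_toy_batch, TEMPERATURE, NUM_CLASSES) as well as the temperature-softening used to emulate under-confidence are placeholders chosen for this example.

# Minimal sketch of transductive self-training for multi-class segmentation.
# All names below (TinyUNet, make_toy_batch, TEMPERATURE) are illustrative
# placeholders, not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 4     # e.g. background plus a few lesion classes (illustrative)
TEMPERATURE = 2.0   # T > 1 softens pseudo-labels, emulating under-confidence

class TinyUNet(nn.Module):
    """Stand-in dense predictor; any segmentation network fits this role."""
    def __init__(self, in_ch=1, n_classes=NUM_CLASSES):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, n_classes, 1),
        )

    def forward(self, x):
        return self.body(x)  # raw logits, shape (B, C, H, W)

def make_toy_batch(batch=2, size=32, labelled=True):
    """Synthetic stand-in for labelled / unlabelled image slices."""
    x = torch.randn(batch, 1, size, size)
    y = torch.randint(0, NUM_CLASSES, (batch, size, size)) if labelled else None
    return x, y

model = TinyUNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Step 1: supervised training on the labelled set (abridged to one step).
x_l, y_l = make_toy_batch(labelled=True)
loss_sup = F.cross_entropy(model(x_l), y_l)
opt.zero_grad()
loss_sup.backward()
opt.step()

# Step 2: pseudo-label the unlabelled *target* data we ultimately care about
# (transduction), keeping soft, temperature-scaled distributions instead of
# hard argmax labels.
x_u, _ = make_toy_batch(labelled=False)
with torch.no_grad():
    soft_pseudo = F.softmax(model(x_u) / TEMPERATURE, dim=1)  # (B, C, H, W)

# Step 3: retrain jointly on labelled data and pseudo-labelled target data.
for _ in range(5):  # a few illustrative iterations
    log_p_u = F.log_softmax(model(x_u), dim=1)
    loss_unsup = -(soft_pseudo * log_p_u).sum(dim=1).mean()  # soft cross-entropy
    loss = F.cross_entropy(model(x_l), y_l) + loss_unsup
    opt.zero_grad()
    loss.backward()
    opt.step()

# The transductive predictions of interest are the final outputs on x_u itself.
final_pred = model(x_u).argmax(dim=1)

The single knob TEMPERATURE stands in for whatever calibration or confidence-control mechanism is used; the paper's finding that under-confident models help would correspond here to T > 1, while T < 1 would sharpen the pseudo-labels toward over-confidence.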
author
Kamnitsas, Konstantinos ; Winzeck, Stefan ; Kornaropoulos, Evgenios N. ; Whitehouse, Daniel ; Englman, Cameron ; Phyu, Poe ; Pao, Norman ; Menon, David K. ; Rueckert, Daniel ; Das, Tilak ; Newcombe, Virginia F.J. and Glocker, Ben
organization
publishing date
2021
type
Chapter in Book/Report/Conference proceeding
publication status
published
subject
host publication
Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health - 3rd MICCAI Workshop, DART 2021, and 1st MICCAI Workshop, FAIR 2021, Held in Conjunction with MICCAI 2021, Proceedings
series title
Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)
editor
Albarqouni, Shadi ; Cardoso, M. Jorge ; Dou, Qi ; Kamnitsas, Konstantinos ; Khanal, Bishesh ; Rekik, Islem ; Rieke, Nicola ; Sheet, Debdoot ; Tsaftaris, Sotirios ; Xu, Daguang and Xu, Ziyue
volume
12968 LNCS
pages
11 pages
publisher
Springer Science and Business Media B.V.
conference name
3rd MICCAI Workshop on Domain Adaptation and Representation Transfer, DART 2021, and the 1st MICCAI Workshop on Affordable Healthcare and AI for Resource Diverse Global Health, FAIR 2021, held in conjunction with 24th International Conference on Medical Image Computing and Computer Assisted Intervention, MICCAI 2021
conference location
Virtual, Online
conference dates
2021-09-27 - 2021-10-01
external identifiers
  • scopus:85116422173
ISSN
0302-9743
1611-3349
ISBN
9783030877217
DOI
10.1007/978-3-030-87722-4_8
language
English
LU publication?
yes
additional info
Publisher Copyright: © 2021, Springer Nature Switzerland AG.
id
a30cb5e1-7eea-4c3a-80aa-e53c5f0ab701
date added to LUP
2021-10-25 14:39:04
date last changed
2024-03-23 12:20:35
@inproceedings{a30cb5e1-7eea-4c3a-80aa-e53c5f0ab701,
  abstract     = {{Semi-supervised learning (SSL) uses unlabeled data during training to learn better models. Previous studies on SSL for medical image segmentation focused mostly on improving model generalization to unseen data. In some applications, however, our primary interest is not generalization but to obtain optimal predictions on a specific unlabeled database that is fully available during model development. Examples include population studies for extracting imaging phenotypes. This work investigates an often overlooked aspect of SSL, transduction. It focuses on the quality of predictions made on the unlabeled data of interest when they are included for optimization during training, rather than improving generalization. We focus on the self-training framework and explore its potential for transduction. We analyze it through the lens of Information Gain and reveal that learning benefits from the use of calibrated or under-confident models. Our extensive experiments on a large MRI database for multi-class segmentation of traumatic brain lesions show promising results when comparing transductive with inductive predictions. We believe this study will inspire further research on transductive learning, a well-suited paradigm for medical image analysis.}},
  author       = {{Kamnitsas, Konstantinos and Winzeck, Stefan and Kornaropoulos, Evgenios N. and Whitehouse, Daniel and Englman, Cameron and Phyu, Poe and Pao, Norman and Menon, David K. and Rueckert, Daniel and Das, Tilak and Newcombe, Virginia F.J. and Glocker, Ben}},
  booktitle    = {{Domain Adaptation and Representation Transfer, and Affordable Healthcare and AI for Resource Diverse Global Health - 3rd MICCAI Workshop, DART 2021, and 1st MICCAI Workshop, FAIR 2021, Held in Conjunction with MICCAI 2021, Proceedings}},
  editor       = {{Albarqouni, Shadi and Cardoso, M. Jorge and Dou, Qi and Kamnitsas, Konstantinos and Khanal, Bishesh and Rekik, Islem and Rieke, Nicola and Sheet, Debdoot and Tsaftaris, Sotirios and Xu, Daguang and Xu, Ziyue}},
  isbn         = {{9783030877217}},
  issn         = {{0302-9743}},
  language     = {{eng}},
  pages        = {{79--89}},
  publisher    = {{Springer Science and Business Media B.V.}},
  series       = {{Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)}},
  title        = {{Transductive Image Segmentation: Self-training and Effect of Uncertainty Estimation}},
  url          = {{http://dx.doi.org/10.1007/978-3-030-87722-4_8}},
  doi          = {{10.1007/978-3-030-87722-4_8}},
  volume       = {{12968 LNCS}},
  year         = {{2021}},
}