Automatic Segmentation of Medial Temporal Lobe Subregions in Multi-Scanner, Multi-Modality Magnetic Resonance Imaging of Variable Quality
(2025) In Hippocampus 35(6).
- abstract
Volumetry of subregions in the medial temporal lobe (MTL) computed from automatic segmentation in MRI can track neurodegeneration in Alzheimer's disease. However, poor-quality MR images can lead to unreliable segmentation of MTL subregions. Considering that different MRI contrast mechanisms and field strengths (jointly referred to as “modalities” here) offer distinct advantages in imaging different parts of the MTL, we developed a multi-modality segmentation model using both 7T and 3T structural MRI to obtain robust segmentation in poor-quality images. MRI modalities including 3T T1-weighted, 3T T2-weighted, 7T T1-weighted, and 7T T2-weighted (7T-T2w) images of 197 participants were collected from a longitudinal aging study at the Penn Alzheimer's Disease Research Center. Among them, 7T-T2w was used as the primary modality, and all other modalities were rigidly registered to the 7T-T2w. A model derived from nnU-Net took these registered modalities as input and produced subregion segmentations in 7T-T2w space. 7T-T2w images from 25 selected training participants, most of which had high quality, were manually segmented to train the multi-modality model. Modality augmentation, which randomly replaced certain modalities with Gaussian noise, was applied during training to guide the model to extract information from all modalities. The multi-modality model delivered good performance regardless of 7T-T2w quality, while the single-modality model under-segmented subregions in poor-quality images. The multi-modality model generally demonstrated stronger discrimination of A+ MCI versus A− CU. Intra-class correlation and Bland–Altman plots demonstrated that the multi-modality model had higher longitudinal segmentation consistency in all subregions, while the single-modality model had low consistency in poor-quality images. The multi-modality MRI segmentation model provides an improved biomarker for neurodegeneration in the MTL that is robust to image quality. It also provides a framework for other studies that may benefit from multimodal imaging.
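As an illustration of the modality-augmentation step described in the abstract (randomly replacing certain input modalities with Gaussian noise during training), here is a minimal sketch in Python/NumPy. The function name, channel ordering, drop probability, and noise statistics are assumptions made for illustration and are not taken from the paper.

```python
import numpy as np

def modality_augmentation(image_stack, drop_prob=0.5, rng=None):
    """Randomly replace whole modality channels with Gaussian noise.

    image_stack: array of shape (n_modalities, D, H, W) holding the
    co-registered inputs (e.g., 7T-T2w, 7T-T1w, 3T-T2w, 3T-T1w).
    drop_prob: assumed per-modality probability of replacement.
    """
    rng = np.random.default_rng() if rng is None else rng
    n_mod = image_stack.shape[0]
    augmented = image_stack.copy()

    # Pick modalities to replace, but always keep at least one real channel
    # so the network still sees some anatomical signal.
    replace = rng.random(n_mod) < drop_prob
    if replace.all():
        replace[rng.integers(n_mod)] = False

    for m in np.flatnonzero(replace):
        # Noise matched to the channel's intensity statistics (an assumption;
        # the abstract only states that modalities are replaced by Gaussian noise).
        mu, sigma = float(augmented[m].mean()), float(augmented[m].std())
        augmented[m] = rng.normal(mu, sigma, size=augmented[m].shape)

    return augmented

# Example: one training patch with four co-registered modalities.
patch = np.random.rand(4, 32, 64, 64).astype(np.float32)
augmented_patch = modality_augmentation(patch, rng=np.random.default_rng(0))
print(augmented_patch.shape)  # (4, 32, 64, 64)
```

Replacing entire channels forces the network to rely on whichever modalities remain informative, which is how the abstract motivates robustness when the 7T-T2w input is of poor quality.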
- author
- Li, Yue; Xie, Long; Khandelwal, Pulkit; Wisse, Laura E.M.; Brown, Christopher A.; Prabhakaran, Karthik; Tisdall, M. Dylan; Mechanic-Hamilton, Dawn; Detre, John A.; Das, Sandhitsu R.; Wolk, David A.; Yushkevich, Paul A.
- organization
- publishing date
- 2025-11
- type
- Contribution to journal
- publication status
- published
- subject
- keywords
- medial temporal lobe, multi-modality, subregion segmentation
- in
- Hippocampus
- volume
- 35
- issue
- 6
- article number
- e70036
- publisher
- Wiley-Liss Inc.
- external identifiers
- pmid:41055255
- scopus:105017931395
- ISSN
- 1050-9631
- DOI
- 10.1002/hipo.70036
- language
- English
- LU publication?
- yes
- id
- 926578f2-1dfa-4f39-90ff-0be7b4cedee6
- date added to LUP
- 2025-11-24 14:02:17
- date last changed
- 2025-12-08 15:23:40
@article{926578f2-1dfa-4f39-90ff-0be7b4cedee6,
abstract = {{Volumetry of subregions in the medial temporal lobe (MTL) computed from automatic segmentation in MRI can track neurodegeneration in Alzheimer's disease. However, poor-quality MR images can lead to unreliable segmentation of MTL subregions. Considering that different MRI contrast mechanisms and field strengths (jointly referred to as “modalities” here) offer distinct advantages in imaging different parts of the MTL, we developed a multi-modality segmentation model using both 7T and 3T structural MRI to obtain robust segmentation in poor-quality images. MRI modalities including 3T T1-weighted, 3T T2-weighted, 7T T1-weighted, and 7T T2-weighted (7T-T2w) images of 197 participants were collected from a longitudinal aging study at the Penn Alzheimer's Disease Research Center. Among them, 7T-T2w was used as the primary modality, and all other modalities were rigidly registered to the 7T-T2w. A model derived from nnU-Net took these registered modalities as input and produced subregion segmentations in 7T-T2w space. 7T-T2w images from 25 selected training participants, most of which had high quality, were manually segmented to train the multi-modality model. Modality augmentation, which randomly replaced certain modalities with Gaussian noise, was applied during training to guide the model to extract information from all modalities. The multi-modality model delivered good performance regardless of 7T-T2w quality, while the single-modality model under-segmented subregions in poor-quality images. The multi-modality model generally demonstrated stronger discrimination of A+ MCI versus A− CU. Intra-class correlation and Bland–Altman plots demonstrated that the multi-modality model had higher longitudinal segmentation consistency in all subregions, while the single-modality model had low consistency in poor-quality images. The multi-modality MRI segmentation model provides an improved biomarker for neurodegeneration in the MTL that is robust to image quality. It also provides a framework for other studies that may benefit from multimodal imaging.}},
author = {{Li, Yue and Xie, Long and Khandelwal, Pulkit and Wisse, Laura E.M. and Brown, Christopher A. and Prabhakaran, Karthik and Tisdall, M. Dylan and Mechanic-Hamilton, Dawn and Detre, John A. and Das, Sandhitsu R. and Wolk, David A. and Yushkevich, Paul A.}},
issn = {{1050-9631}},
keywords = {{medial temporal lobe; multi-modality; subregion segmentation}},
language = {{eng}},
number = {{6}},
publisher = {{Wiley-Liss Inc.}},
series = {{Hippocampus}},
title = {{Automatic Segmentation of Medial Temporal Lobe Subregions in Multi-Scanner, Multi-Modality Magnetic Resonance Imaging of Variable Quality}},
url = {{http://dx.doi.org/10.1002/hipo.70036}},
doi = {{10.1002/hipo.70036}},
volume = {{35}},
year = {{2025}},
}
