DC-Seg: Disentangled Contrastive Learning for Brain Tumor Segmentation with Missing Modalities
- URL: http://arxiv.org/abs/2505.11921v1
- Date: Sat, 17 May 2025 09:12:08 GMT
- Title: DC-Seg: Disentangled Contrastive Learning for Brain Tumor Segmentation with Missing Modalities
- Authors: Haitao Li, Ziyu Li, Yiheng Mao, Zhengyao Ding, Zhengxing Huang
- Abstract summary: We propose DC-Seg, a new method that explicitly disentangles images into modality-invariant anatomical and modality-specific representations. This improves the separation of anatomical and modality-specific features by accounting for modality gaps, yielding more robust representations. Experiments on BraTS 2020 and a private white matter hyperintensity (WMH) segmentation dataset demonstrate that DC-Seg outperforms state-of-the-art methods.
- Score: 9.878680177357454
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Accurate segmentation of brain images typically requires the integration of complementary information from multiple image modalities. However, clinical data for all modalities may not be available for every patient, creating a significant challenge. To address this, previous studies encode multiple modalities into a shared latent space. While somewhat effective, this remains suboptimal, as each modality contains distinct and valuable information. In this study, we propose DC-Seg (Disentangled Contrastive Learning for Segmentation), a new method that explicitly disentangles images into a modality-invariant anatomical representation and a modality-specific representation, using anatomical contrastive learning and modality contrastive learning respectively. This improves the separation of anatomical and modality-specific features by accounting for the modality gaps, leading to more robust representations. Furthermore, we introduce a segmentation-based regularizer that enhances the model's robustness to missing modalities. Extensive experiments on BraTS 2020 and a private white matter hyperintensity (WMH) segmentation dataset demonstrate that DC-Seg outperforms state-of-the-art methods in handling incomplete multimodal brain tumor segmentation tasks with varying missing modalities, while also demonstrating strong generalizability in WMH segmentation. The code is available at https://github.com/CuCl-2/DC-Seg.
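As a rough illustration of the two contrastive objectives described in the abstract, the following is a minimal PyTorch sketch; it is not the authors' implementation (the reference code is in the repository linked above), and the embedding shapes, batch construction, modality names, and InfoNCE formulation are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def anatomical_cl(anat, temperature=0.1):
    """Anatomical contrastive learning (sketch): the same patient's anatomy
    embedding from two different modalities forms a positive pair; the other
    patients in the batch serve as negatives.
    anat: dict modality name -> (B, D) embeddings, rows aligned by patient."""
    names = list(anat)
    loss, pairs = 0.0, 0
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            za = F.normalize(anat[a], dim=-1)
            zb = F.normalize(anat[b], dim=-1)
            logits = za @ zb.t() / temperature          # (B, B) similarities
            # Matching patient indices are the positives (the diagonal).
            loss = loss + F.cross_entropy(logits, torch.arange(za.size(0)))
            pairs += 1
    return loss / max(pairs, 1)

def modality_cl(mod, temperature=0.1):
    """Modality contrastive learning (sketch): an embedding is pulled toward
    another patient's embedding of the SAME modality and pushed away from
    embeddings of the other modalities, so modality gaps are made explicit."""
    names = list(mod)
    loss = 0.0
    for m in names:
        anchor = F.normalize(mod[m], dim=-1)
        positive = F.normalize(mod[m].roll(1, dims=0), dim=-1)  # another patient
        negs = F.normalize(torch.cat([mod[o] for o in names if o != m]), dim=-1)
        pos = (anchor * positive).sum(-1, keepdim=True) / temperature
        neg = anchor @ negs.t() / temperature
        logits = torch.cat([pos, neg], dim=1)           # positive sits at index 0
        loss = loss + F.cross_entropy(
            logits, torch.zeros(anchor.size(0), dtype=torch.long))
    return loss / len(names)

# Toy usage: four MRI sequences, batch of 4 patients, 128-d embeddings.
mods = ("t1", "t1ce", "t2", "flair")
anat = {m: torch.randn(4, 128) for m in mods}
mod = {m: torch.randn(4, 128) for m in mods}
loss = anatomical_cl(anat) + modality_cl(mod)
```

The segmentation-based regularizer mentioned in the abstract would be layered on top, e.g. by requiring each single-modality anatomical representation to support the same segmentation output; its exact form is not reproduced here.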
Related papers
- Hypergraph Tversky-Aware Domain Incremental Learning for Brain Tumor Segmentation with Missing Modalities [9.429176881328274]
In clinical practice, some MRI modalities may be missing due to the sequential nature of MRI acquisition.
We propose Replay-based Hypergraph Domain Incremental Learning (ReHyDIL) for brain tumor segmentation with missing modalities.
arXiv Detail & Related papers (2025-05-22T15:49:25Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Enhancing Modality-Agnostic Representations via Meta-Learning for Brain Tumor Segmentation [16.747365311040863]
We propose a novel approach to learn enhanced modality-agnostic representations by employing a meta-learning strategy in training.
Our framework significantly outperforms state-of-the-art brain tumor segmentation techniques in missing modality scenarios.
arXiv Detail & Related papers (2023-02-08T19:53:07Z)
- Exploiting Partial Common Information Microstructure for Multi-Modal Brain Tumor Segmentation [11.583406152227637]
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
arXiv Detail & Related papers (2023-02-06T01:28:52Z)
- M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With Annotation-Efficient Supervision [4.023899199756184]
M-GenSeg is a new semi-supervised generative training strategy for cross-modality tumor segmentation.
We evaluate the performance on a brain tumor segmentation dataset composed of four different contrast sequences.
Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
arXiv Detail & Related papers (2022-12-14T15:19:06Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding so that features of the same class cluster together (one illustrative formulation is sketched below).
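As a rough illustration of that embedding-clustering idea, a prototype-based cosine classifier is one common way to encourage same-class features to cluster; the sketch below is an assumption-laden stand-in, not the paper's actual method.

```python
import torch
import torch.nn.functional as F

def prototype_logits(query_feats, support_feats, support_mask, tau=20.0):
    """Classify query pixels by cosine similarity to per-class prototypes
    computed from the support pixels, which pulls same-class features together.
    query_feats:   (N_q, D) pixel embeddings from the query image
    support_feats: (N_s, D) pixel embeddings from the support image
    support_mask:  (N_s,)   integer class labels for the support pixels"""
    protos = [support_feats[support_mask == c].mean(0)
              for c in support_mask.unique()]
    protos = F.normalize(torch.stack(protos), dim=-1)   # (C, D)
    query = F.normalize(query_feats, dim=-1)            # (N_q, D)
    return tau * query @ protos.t()                     # scaled cosine logits

# Toy usage: background + 1 foreground class, 64-d pixel embeddings.
q = torch.randn(100, 64)
s = torch.randn(200, 64)
m = torch.randint(0, 2, (200,))
logits = prototype_logits(q, s, m)                      # (100, 2)
```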
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge (a minimal sketch of one such bidirectional distillation loss follows this summary).
Experimental results on public multi-class cardiac segmentation data (MMWHS 2017) show that our method achieves large improvements on CT segmentation.
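One common way to realize bidirectional distillation between two single-modality segmentation networks is a symmetric KL divergence between their softened per-voxel predictions; the sketch below is an illustrative assumption, not necessarily the paper's exact scheme.

```python
import torch
import torch.nn.functional as F

def mutual_kd_loss(logits_ct, logits_mr, T=2.0):
    """Symmetric KL between softened per-voxel class distributions.
    logits_ct, logits_mr: (B, C, H, W) segmentation logits from the
    CT-side and MRI-side networks; each acts as teacher for the other."""
    log_p_ct = F.log_softmax(logits_ct / T, dim=1)
    log_p_mr = F.log_softmax(logits_mr / T, dim=1)
    # Detach the teacher side so gradients flow only into the student.
    kd_ct = F.kl_div(log_p_ct, F.softmax(logits_mr.detach() / T, dim=1),
                     reduction="batchmean")
    kd_mr = F.kl_div(log_p_mr, F.softmax(logits_ct.detach() / T, dim=1),
                     reduction="batchmean")
    return (T * T) * (kd_ct + kd_mr)   # T^2 rescales the softened gradients

# Toy usage: 5-class logits from each network on paired slices.
ct = torch.randn(2, 5, 64, 64)
mr = torch.randn(2, 5, 64, 64)
loss = mutual_kd_loss(ct, mr)
```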
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Cross-Domain Segmentation with Adversarial Loss and Covariate Shift for Biomedical Imaging [2.1204495827342438]
This manuscript presents a novel model that learns robust representations from cross-domain data by encapsulating distinct and shared patterns from different modalities.
Tests on CT and MRI liver data acquired in routine clinical trials show that the proposed model outperforms all baselines by a large margin.
arXiv Detail & Related papers (2020-06-08T07:35:55Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset (a minimal gated-fusion sketch follows this entry).
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
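As a rough illustration of the gated-fusion idea referenced in the last entry, the sketch below reweights each available modality's feature map with a learned sigmoid gate before averaging, so a missing modality simply drops out of the sum; the gate design and tensor shapes are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse a variable-size set of per-modality feature maps: each map is
    scaled by a learned per-voxel, per-channel gate, then the gated maps
    are averaged, making the fusion robust to absent modalities."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv3d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feats):          # feats: list of (B, C, D, H, W) maps
        gated = [f * self.gate(f) for f in feats]
        return torch.stack(gated).sum(0) / len(feats)

# Toy usage: only three of four MRI sequences are available.
fuse = GatedFusion(32)
feats = [torch.randn(1, 32, 8, 16, 16) for _ in range(3)]
fused = fuse(feats)                    # (1, 32, 8, 16, 16)
```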