Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation
- URL: http://arxiv.org/abs/2010.01532v1
- Date: Sun, 4 Oct 2020 10:25:13 GMT
- Title: Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation
- Authors: Kang Li, Lequan Yu, Shujun Wang and Pheng-Ann Heng
- Abstract summary: In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
- Score: 71.89867233426597
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The success of deep convolutional neural networks is partially
attributed to the massive amount of annotated training data. However, in
practice, medical data annotations are usually expensive and time-consuming
to obtain. Considering that multi-modality data with the same anatomical
structures are widely available in clinical routine, in this paper we aim to
exploit the prior knowledge (e.g., shape priors) learned from one modality
(a.k.a. the assistant modality) to improve the segmentation performance on
another modality (a.k.a. the target modality) and thereby make up for the
scarcity of annotations. To alleviate the learning difficulties caused by
modality-specific appearance discrepancies, we first present an Image
Alignment Module (IAM) to narrow the appearance gap between assistant- and
target-modality data. We then propose a novel Mutual Knowledge Distillation
(MKD) scheme to thoroughly exploit the modality-shared knowledge and
facilitate target-modality segmentation. Specifically, we formulate our
framework as an integration of two individual segmentors. Each segmentor not
only explicitly extracts the knowledge of one modality from the corresponding
annotations, but also implicitly explores the knowledge of the other modality
from its counterpart in a mutual-guided manner. The ensemble of the two
segmentors further integrates the knowledge from both modalities and
generates reliable segmentation results on the target modality. Experimental
results on the public multi-class cardiac segmentation dataset MMWHS 2017
show that our method achieves large improvements on CT segmentation by
utilizing additional MRI data and outperforms other state-of-the-art
multi-modality learning methods.
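A minimal sketch of how such a mutual distillation objective could look,
assuming a PyTorch implementation in which both segmentors predict on the same
IAM-aligned, annotated batch; the temperature `tau`, loss weight `lam`, and
batching details are illustrative assumptions, not the authors' exact recipe:

```python
# Sketch of the Mutual Knowledge Distillation objective (assumptions noted above).
import torch
import torch.nn.functional as F

def mutual_distillation_loss(logits_a, logits_b, labels, tau=2.0, lam=0.5):
    """Supervised CE for both segmentors plus a bidirectional soft KL term."""
    # Each segmentor explicitly learns from the available annotations.
    sup = F.cross_entropy(logits_a, labels) + F.cross_entropy(logits_b, labels)
    # Each segmentor also mimics its counterpart's softened prediction;
    # detaching makes the counterpart act as a fixed teacher within a step.
    log_pa = F.log_softmax(logits_a / tau, dim=1)
    log_pb = F.log_softmax(logits_b / tau, dim=1)
    kd = F.kl_div(log_pa, log_pb.detach(), reduction="batchmean", log_target=True) \
       + F.kl_div(log_pb, log_pa.detach(), reduction="batchmean", log_target=True)
    return sup + lam * (tau ** 2) * kd

def ensemble_prediction(logits_a, logits_b):
    """Average the two segmentors' probabilities for the final segmentation."""
    return (torch.softmax(logits_a, dim=1) + torch.softmax(logits_b, dim=1)) / 2
```

Although each distillation term sees a detached teacher, the two segmentors
still co-evolve over training, which is what makes the guidance mutual.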
Related papers
- DRIM: Learning Disentangled Representations from Incomplete Multimodal Healthcare Data [0.0]
Real-life medical data is often multimodal and incomplete, fueling the need for advanced deep learning models.
We introduce DRIM, a new method for capturing shared and unique representations, despite data sparsity.
Our method outperforms state-of-the-art algorithms on glioma patient survival prediction while remaining robust to missing modalities.
arXiv Detail & Related papers (2024-09-25T16:13:57Z)
- Robust Semi-supervised Multimodal Medical Image Segmentation via Cross Modality Collaboration [21.97457095780378]
We propose a novel semi-supervised multimodal segmentation framework that is robust to scarce labeled data and misaligned modalities.
Our framework employs a novel cross-modality collaboration strategy to distill modality-independent knowledge, which is inherently associated with each modality.
It also integrates contrastive consistent learning to regularize anatomical structures, facilitating anatomy-wise prediction alignment on unlabeled data.
arXiv Detail & Related papers (2024-08-14T07:34:12Z)
- Learnable Weight Initialization for Volumetric Medical Image Segmentation [66.3030435676252]
We propose a hybrid medical image segmentation approach built on learnable weight initialization.
Our approach is easy to integrate into any hybrid model and requires no external training data.
Experiments on multi-organ and lung cancer segmentation tasks demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-06-15T17:55:05Z)
- Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency [24.78258331561847]
This paper presents a novel scheme to learn the mutual benefits of different modalities to achieve better segmentation results for unpaired medical images.
We leverage a carefully designed External Attention Module (EAM) to align the semantic class representations of different modalities and their correlations.
We have demonstrated the effectiveness of the proposed method on two medical image segmentation scenarios.
arXiv Detail & Related papers (2022-06-21T17:50:29Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Dual-Task Mutual Learning for Semi-Supervised Medical Image Segmentation [12.940103904327655]
We propose a novel dual-task mutual learning framework for semi-supervised medical image segmentation.
Our framework can be formulated as an integration of two individual segmentation networks, one per task.
By jointly learning the segmentation probability maps and signed distance maps of the targets, our framework enforces a geometric shape constraint and learns more reliable information (a minimal sketch of this dual-task coupling appears after this list).
arXiv Detail & Related papers (2021-03-08T12:38:23Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of deep embeddings to encourage clustering of the features of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task of the BraTS challenge dataset (see the gated-fusion sketch after this list).
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems (see the shared-kernel sketch after this list).
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
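For the Dual-Task Mutual Learning entry above, a minimal sketch (PyTorch
assumed) of how the segmentation-probability and signed-distance-map (SDM)
heads can be coupled so they supervise each other on unlabeled images; the
steep-sigmoid transform and its sharpness `k` are common choices in the SDM
literature, used here as assumptions rather than the paper's exact formulation:

```python
# Sketch of dual-task coupling between a probability head and an SDM head.
import torch

def sdm_to_prob(sdm, k=1500.0):
    # The SDM is negative inside the object, so a steep sigmoid of -k * sdm
    # approximates the foreground probability map.
    return torch.sigmoid(-k * sdm)

def dual_task_consistency(prob_pred, sdm_pred):
    """MSE between the probability head and the SDM head's implied probability."""
    return torch.mean((prob_pred - sdm_to_prob(sdm_pred)) ** 2)
```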
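For the Robust Multimodal Brain Tumor Segmentation entry, a hedged sketch of
gated feature fusion over a variable set of modalities: each available
modality's feature map is weighted by a learned gate, so a missing modality
can simply be dropped from the sum. The shared 1x1-convolution gate and the
normalization are illustrative assumptions, not the paper's exact design:

```python
# Sketch of gated fusion that tolerates missing modalities (assumptions above).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Sequential(nn.Conv2d(channels, channels, 1), nn.Sigmoid())

    def forward(self, feats):
        # feats: list of (N, C, H, W) tensors, one per *available* modality.
        gates = [self.gate(f) for f in feats]
        total = torch.stack(gates).sum(dim=0).clamp(min=1e-6)
        # Gate-weighted average over whichever modalities are present.
        return sum(g * f for g, f in zip(gates, feats)) / total
```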
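For the Unpaired Multi-modal Segmentation entry, a minimal sketch of the
parameter-sharing idea: convolutional kernels are reused across CT and MRI
while normalization statistics stay modality-specific, one common way to
realize such sharing; layer names and sizes are illustrative, not the paper's
architecture:

```python
# Sketch of a conv block with kernels shared across modalities.
import torch.nn as nn

class SharedConvBlock(nn.Module):
    def __init__(self, c_in, c_out):
        super().__init__()
        # One convolution reused for both modalities.
        self.conv = nn.Conv2d(c_in, c_out, kernel_size=3, padding=1, bias=False)
        # Separate batch-norm layers keep per-modality feature statistics.
        self.norm = nn.ModuleDict({
            "ct": nn.BatchNorm2d(c_out),
            "mri": nn.BatchNorm2d(c_out),
        })
        self.act = nn.ReLU(inplace=True)

    def forward(self, x, modality):  # modality in {"ct", "mri"}
        return self.act(self.norm[modality](self.conv(x)))
```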