Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency
- URL: http://arxiv.org/abs/2206.10571v3
- Date: Sun, 30 Apr 2023 07:18:44 GMT
- Title: Toward Unpaired Multi-modal Medical Image Segmentation via Learning Structured Semantic Consistency
- Authors: Jie Yang, Ye Zhu, Chaoqun Wang, Zhen Li, Ruimao Zhang
- Abstract summary: This paper presents a novel scheme to learn the mutual benefits of different modalities to achieve better segmentation results for unpaired medical images.
We leverage a carefully designed External Attention Module (EAM) to align semantic class representations and their correlations of different modalities.
We have demonstrated the effectiveness of the proposed method on two medical image segmentation scenarios.
- Score: 24.78258331561847
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Integrating multi-modal data to promote medical image analysis has recently
gained great attention. This paper presents a novel scheme to learn the mutual
benefits of different modalities to achieve better segmentation results for
unpaired multi-modal medical images. Our approach tackles two critical issues
of this task from a practical perspective: (1) how to effectively learn the
semantic consistencies of various modalities (e.g., CT and MRI), and (2) how to
leverage the above consistencies to regularize the network learning while
preserving its simplicity. To address (1), we leverage a carefully designed
External Attention Module (EAM) to align semantic class representations and
their correlations of different modalities. To solve (2), the proposed EAM is
designed as an external plug-and-play one, which can be discarded once the
model is optimized. We have demonstrated the effectiveness of the proposed
method on two medical image segmentation scenarios: (1) cardiac structure
segmentation, and (2) abdominal multi-organ segmentation. Extensive results
show that the proposed method outperforms its counterparts by a wide margin.
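The paper's code is not reproduced here, but the core idea of the abstract — aligning per-modality class representations and their pairwise correlations through a shared, discardable module — can be illustrated with a minimal NumPy sketch. Everything below (`ExternalAttentionModule`, the prototype memory, `consistency_loss`) is a hypothetical placeholder for the authors' actual design, not their implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

class ExternalAttentionModule:
    """Hypothetical sketch of an external, plug-and-play alignment module.

    A shared memory of class prototypes attends over per-modality features
    to produce class representations; a consistency loss pulls the two
    modalities' representations and class-correlation matrices together.
    Because the module only contributes a training loss, it can be dropped
    at inference time.
    """

    def __init__(self, num_classes, dim, seed=0):
        rng = np.random.default_rng(seed)
        # One prototype per semantic class, shared across modalities.
        self.prototypes = rng.standard_normal((num_classes, dim))

    def class_representations(self, features):
        # features: (N, dim) pixel/patch embeddings from one modality.
        attn = softmax(self.prototypes @ features.T / np.sqrt(features.shape[1]), axis=1)
        return attn @ features  # (num_classes, dim)

    @staticmethod
    def correlation(reps):
        # Cosine-similarity matrix between class representations.
        normed = reps / np.linalg.norm(reps, axis=1, keepdims=True)
        return normed @ normed.T  # (num_classes, num_classes)

    def consistency_loss(self, feats_ct, feats_mri):
        r_ct = self.class_representations(feats_ct)
        r_mri = self.class_representations(feats_mri)
        # Align both the class representations and their correlations.
        rep_term = np.mean((r_ct - r_mri) ** 2)
        corr_term = np.mean((self.correlation(r_ct) - self.correlation(r_mri)) ** 2)
        return rep_term + corr_term
```

In this toy form the loss is zero when both modalities yield identical features and grows as their class structure diverges, which is the regularizing signal the abstract describes.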
Related papers
- PMT: Progressive Mean Teacher via Exploring Temporal Consistency for Semi-Supervised Medical Image Segmentation [51.509573838103854]
We propose a semi-supervised learning framework, termed Progressive Mean Teachers (PMT), for medical image segmentation.
Our PMT generates high-fidelity pseudo labels by learning robust and diverse features in the training process.
Experimental results on two datasets with different modalities, i.e., CT and MRI, demonstrate that our method outperforms the state-of-the-art medical image segmentation approaches.
arXiv Detail & Related papers (2024-09-08T15:02:25Z)
- Robust Semi-supervised Multimodal Medical Image Segmentation via Cross Modality Collaboration [21.97457095780378]
We propose a novel semi-supervised multimodal segmentation framework that is robust to scarce labeled data and misaligned modalities.
Our framework employs a novel cross modality collaboration strategy to distill modality-independent knowledge, which is inherently associated with each modality.
It also integrates contrastive consistent learning to regulate anatomical structures, facilitating anatomical-wise prediction alignment on unlabeled data.
arXiv Detail & Related papers (2024-08-14T07:34:12Z)
- Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation (CMEMS).
arXiv Detail & Related papers (2024-04-18T00:18:07Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation (DEC-Seg).
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- Self-Sampling Meta SAM: Enhancing Few-shot Medical Image Segmentation with Meta-Learning [17.386754270460273]
We present a Self-Sampling Meta SAM framework for few-shot medical image segmentation.
The proposed method achieves significant improvements over state-of-the-art methods in few-shot segmentation.
In conclusion, we present a novel approach for rapid online adaptation in interactive image segmentation, adapting to a new organ in just 0.83 minutes.
arXiv Detail & Related papers (2023-08-31T05:20:48Z)
- A Multi-View Dynamic Fusion Framework: How to Improve the Multimodal Brain Tumor Segmentation from Multi-Views? [5.793853101758628]
This paper proposes a multi-view dynamic fusion framework to improve the performance of brain tumor segmentation.
Evaluations on BRATS 2015 and BRATS 2018 show that fusing results from multiple views achieves better performance than segmentation from any single view.
arXiv Detail & Related papers (2020-12-21T09:45:23Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance discriminability of deep embedding to encourage clustering of the feature domains of the same class.
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Unsupervised Bidirectional Cross-Modality Adaptation via Deeply Synergistic Image and Feature Alignment for Medical Image Segmentation [73.84166499988443]
We present a novel unsupervised domain adaptation framework, named as Synergistic Image and Feature Alignment (SIFA)
Our proposed SIFA conducts synergistic alignment of domains from both image and feature perspectives.
Experimental results on two different tasks demonstrate that our SIFA method is effective in improving segmentation performance on unlabeled target images.
arXiv Detail & Related papers (2020-02-06T13:49:47Z)
- Unpaired Multi-modal Segmentation via Knowledge Distillation [77.39798870702174]
We propose a novel learning scheme for unpaired cross-modality image segmentation.
In our method, we heavily reuse network parameters, by sharing all convolutional kernels across CT and MRI.
We have extensively validated our approach on two multi-class segmentation problems.
arXiv Detail & Related papers (2020-01-06T20:03:17Z)
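The parameter-sharing idea in the last entry above — reusing all convolutional kernels across CT and MRI — can be sketched abstractly. The snippet below stands in a dense layer for a convolution and pairs the shared weight with modality-specific affine normalization; the per-modality normalization is an assumption (one common realization of such sharing), and all names are hypothetical.

```python
import numpy as np

class SharedKernelLayer:
    """Hedged sketch: one weight matrix shared by CT and MRI inputs,
    with per-modality scale/shift parameters (analogous to keeping
    separate normalization statistics per modality)."""

    def __init__(self, in_dim, out_dim, seed=0):
        rng = np.random.default_rng(seed)
        # Shared across modalities: the analogue of shared conv kernels.
        self.weight = rng.standard_normal((in_dim, out_dim)) * 0.1
        # Modality-specific: a (scale, shift) pair per modality.
        self.affine = {m: (np.ones(out_dim), np.zeros(out_dim)) for m in ("ct", "mri")}

    def forward(self, x, modality):
        h = x @ self.weight                      # shared parameters
        h = (h - h.mean(0)) / (h.std(0) + 1e-5)  # normalize over the batch
        gamma, beta = self.affine[modality]      # modality-specific affine
        return gamma * h + beta
```

Before the modality-specific affines are trained apart, both modalities pass through identical computation, which is what makes the parameter reuse "heavy": only the small per-modality affines differ.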
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.