Modality-Agnostic Learning for Medical Image Segmentation Using
Multi-modality Self-distillation
- URL: http://arxiv.org/abs/2306.03730v1
- Date: Tue, 6 Jun 2023 14:48:50 GMT
- Title: Modality-Agnostic Learning for Medical Image Segmentation Using
Multi-modality Self-distillation
- Authors: Qisheng He, Nicholas Summerfield, Ming Dong, Carri Glide-Hurst
- Abstract summary: We propose a novel framework, Modality-Agnostic learning through Multi-modality Self-distillation (MAG-MS).
MAG-MS distills knowledge from the fusion of multiple modalities and applies it to enhance representation learning for individual modalities.
Our experiments on benchmark datasets demonstrate the high efficiency of MAG-MS and its superior segmentation performance.
- Score: 1.815047691981538
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Medical image segmentation of tumors and organs at risk is a time-consuming
yet critical process in the clinic that utilizes multi-modality imaging (e.g.,
different acquisitions, data types, and sequences) to increase segmentation
precision. In this paper, we propose a novel framework, Modality-Agnostic
learning through Multi-modality Self-distillation (MAG-MS), to investigate the
impact of input modalities on medical image segmentation. MAG-MS distills
knowledge from the fusion of multiple modalities and applies it to enhance
representation learning for individual modalities. Thus, it provides a
versatile and efficient approach to handle limited modalities during testing.
Our extensive experiments on benchmark datasets demonstrate the high efficiency
of MAG-MS and its superior segmentation performance over current
state-of-the-art methods. Furthermore, using MAG-MS, we provide valuable
insight and guidance on selecting input modalities for medical image
segmentation tasks.
Related papers
- Multi-sensor Learning Enables Information Transfer across Different Sensory Data and Augments Multi-modality Imaging [21.769547352111957]
We investigate a data-driven multi-modality imaging (DMI) strategy for synergetic imaging of CT and MRI.
We reveal two distinct types of features in multi-modality imaging, namely intra- and inter-modality features, and present a multi-sensor learning (MSL) framework.
We showcase the effectiveness of our DMI strategy through synergetic CT-MRI brain imaging.
arXiv Detail & Related papers (2024-09-28T17:40:54Z) - MIST: A Simple and Scalable End-To-End 3D Medical Imaging Segmentation Framework [1.4043931310479378]
The Medical Imaging Toolkit (MIST) is designed to facilitate consistent training, testing, and evaluation of deep learning-based medical imaging segmentation methods.
MIST standardizes data analysis, preprocessing, and evaluation pipelines, accommodating multiple architectures and loss functions.
arXiv Detail & Related papers (2024-07-31T05:17:31Z) - Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation [12.094890186803958]
We present a novel Modality Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, a Modality-Aware module, informed by neuroimaging studies, models specific modality-pair relationships at low levels, and a Modality-Shift module with specific mosaic patterns explores the complex relationships across modalities at high levels via self-attention.
arXiv Detail & Related papers (2024-03-04T14:21:51Z) - Self-Supervised Neuron Segmentation with Multi-Agent Reinforcement
Learning [53.00683059396803]
Masked image modeling (MIM) has been widely used due to its simplicity and effectiveness in recovering original information from masked images.
We propose a decision-based MIM that utilizes reinforcement learning (RL) to automatically search for optimal image masking ratio and masking strategy.
Our approach has a significant advantage over alternative self-supervised methods on the task of neuron segmentation.
arXiv Detail & Related papers (2023-10-06T10:40:46Z) - Multi-Modal Evaluation Approach for Medical Image Segmentation [4.989480853499916]
We propose a novel multi-modal evaluation (MME) approach to measure the effectiveness of different segmentation methods.
We introduce new relevant and interpretable characteristics, including detection property, boundary alignment, uniformity, total volume, and relative volume.
Our proposed approach is open-source and publicly available for use.
arXiv Detail & Related papers (2023-02-08T15:31:33Z) - Uncertainty-Aware Multi-Parametric Magnetic Resonance Image Information
Fusion for 3D Object Segmentation [12.361668672097753]
We propose an uncertainty-aware multi-parametric MR image feature fusion method to fully exploit the information for enhanced 3D image segmentation.
Our proposed method achieves better segmentation performance when compared to existing models.
arXiv Detail & Related papers (2022-11-16T09:16:52Z) - Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z) - Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid-domain learning framework, which allows it to recover the frequency signal in the $k$-space domain while simultaneously restoring image content in the spatial domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z) - Cross-Modal Information Maximization for Medical Imaging: CMIM [62.28852442561818]
In hospitals, data are siloed in specific information systems that make the same information available under different modalities.
This offers a unique opportunity to obtain and use, at training time, multiple views of the same information that might not always be available at test time.
We propose an innovative framework that makes the most of available data by learning good representations of a multi-modal input that are resilient to modality dropping at test-time.
arXiv Detail & Related papers (2020-10-20T20:05:35Z) - Towards Cross-modality Medical Image Segmentation with Online Mutual
Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z) - Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement
and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)