Modality-aware Mutual Learning for Multi-modal Medical Image
Segmentation
- URL: http://arxiv.org/abs/2107.09842v1
- Date: Wed, 21 Jul 2021 02:24:31 GMT
- Title: Modality-aware Mutual Learning for Multi-modal Medical Image
Segmentation
- Authors: Yao Zhang, Jiawei Yang, Jiang Tian, Zhongchao Shi, Cheng Zhong, Yang
Zhang, and Zhiqiang He
- Abstract summary: Liver cancer is one of the most common cancers worldwide.
In this paper, we focus on improving automated liver tumor segmentation by integrating multi-modal CT images.
We propose a novel mutual learning (ML) strategy for effective and robust liver tumor segmentation.
- Score: 12.308579499188921
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Liver cancer is one of the most common cancers worldwide. Due to
the inconspicuous texture changes of liver tumors, contrast-enhanced computed
tomography (CT) imaging is effective for the diagnosis of liver cancer. In this
paper, we focus on improving automated liver tumor segmentation by integrating
multi-modal CT images. To this end, we propose a novel mutual learning (ML)
strategy for effective and robust multi-modal liver tumor segmentation.
Different from existing multi-modal methods that fuse information from
different modalities with a single model, in ML an ensemble of
modality-specific models learns collaboratively, each teaching the others to
distill both the characteristics of and the commonality between the high-level
representations of different modalities. The proposed ML not only improves
multi-modal learning but can also handle missing modalities by transferring
knowledge from available modalities to missing ones. Additionally, we present a
modality-aware (MA) module, where the modality-specific models are
interconnected and calibrated with attention weights for adaptive information
exchange. The proposed modality-aware mutual learning (MAML) method achieves
promising results for liver tumor segmentation on a large-scale clinical
dataset. Moreover, we show the efficacy and robustness of MAML for handling
missing modalities on both the liver tumor and public brain tumor (BRATS 2018)
datasets. Our code is available at https://github.com/YaoZhang93/MAML.
Related papers
- multiPI-TransBTS: A Multi-Path Learning Framework for Brain Tumor Image Segmentation Based on Multi-Physical Information [1.7359724605901228]
Brain tumor segmentation (BraTS) plays a critical role in clinical diagnosis, treatment planning, and monitoring the progression of brain tumors.
Due to the variability in tumor appearance, size, and intensity across different MRI modalities, automated segmentation remains a challenging task.
We propose a novel Transformer-based framework, multiPI-TransBTS, which integrates multi-physical information to enhance segmentation accuracy.
arXiv Detail & Related papers (2024-09-18T17:35:19Z)
- Mask-Enhanced Segment Anything Model for Tumor Lesion Semantic Segmentation [48.107348956719775]
We introduce Mask-Enhanced SAM (M-SAM), an innovative architecture tailored for 3D tumor lesion segmentation.
We propose a novel Mask-Enhanced Adapter (MEA) within M-SAM that enriches the semantic information of medical images with positional data from coarse segmentation masks.
Our M-SAM achieves high segmentation accuracy and also exhibits robust generalization.
arXiv Detail & Related papers (2024-03-09T13:37:02Z)
- Modality-Aware and Shift Mixer for Multi-modal Brain Tumor Segmentation [12.094890186803958]
We present a novel Modality-Aware and Shift Mixer that integrates intra-modality and inter-modality dependencies of multi-modal images for effective and robust brain tumor segmentation.
Specifically, we introduce a Modality-Aware module, informed by neuroimaging studies, to model specific modality-pair relationships at low levels, and develop a Modality-Shift module with specific mosaic patterns to explore complex cross-modality relationships at high levels via self-attention.
arXiv Detail & Related papers (2024-03-04T14:21:51Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- United adversarial learning for liver tumor segmentation and detection of multi-modality non-contrast MRI [5.857654010519764]
We propose a united adversarial learning framework (UAL) for simultaneous liver tumors segmentation and detection using multi-modality NCMRI.
The UAL first utilizes a multi-view aware encoder to extract multi-modality NCMRI information for liver tumor segmentation and detection.
The proposed coordinate-sharing-with-padding mechanism integrates the segmentation and detection tasks, enabling both to perform united adversarial learning within a single discriminator.
arXiv Detail & Related papers (2022-01-07T18:54:07Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- A Multi-Stage Attentive Transfer Learning Framework for Improving COVID-19 Diagnosis [49.3704402041314]
We propose a multi-stage attentive transfer learning framework for improving COVID-19 diagnosis.
Our proposed framework consists of three stages to train accurate diagnosis models through learning knowledge from multiple source tasks and data of different domains.
Importantly, we propose a novel self-supervised learning method to learn multi-scale representations for lung CT images.
arXiv Detail & Related papers (2021-01-14T01:39:19Z)
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge.
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
- Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework that is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes, which are then combined by gated fusion (a hedged sketch of such gating appears after this list).
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
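The last entry above names a concrete fusion mechanism, so a short illustration is possible. The sketch below shows one plausible form of gating modality-specific appearance codes; the 1x1x1 convolutional sigmoid gates and the mean over available modalities are assumptions for illustration, not that paper's actual architecture.

```python
# Hedged sketch of gated fusion over modality-specific appearance codes,
# per the last related-paper summary above. Gate design and missing-modality
# handling are illustrative assumptions.
from typing import Optional

import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, channels: int, num_modalities: int = 4):
        super().__init__()
        # One learned gate per modality; the sigmoid rescales each code voxel-wise.
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Conv3d(channels, channels, kernel_size=1), nn.Sigmoid())
            for _ in range(num_modalities)
        )

    def forward(self, codes: list[Optional[torch.Tensor]]) -> torch.Tensor:
        # Absent modalities arrive as None; averaging whatever is present is
        # one simple way to keep the output well-defined under missing inputs.
        gated = [gate(c) * c for gate, c in zip(self.gates, codes) if c is not None]
        return torch.stack(gated, dim=0).mean(dim=0)
```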