ACN: Adversarial Co-training Network for Brain Tumor Segmentation with
Missing Modalities
- URL: http://arxiv.org/abs/2106.14591v2
- Date: Tue, 29 Jun 2021 08:08:11 GMT
- Title: ACN: Adversarial Co-training Network for Brain Tumor Segmentation with
Missing Modalities
- Authors: Yixin Wang, Yang Zhang, Yang Liu, Zihao Lin, Jiang Tian, Cheng Zhong,
Zhongchao Shi, Jianping Fan, Zhiqiang He
- Abstract summary: We propose a novel Adversarial Co-training Network (ACN) to solve this issue.
ACN enables a coupled learning process in which the full-modality and missing-modality paths supplement each other's domain and feature representations.
Our proposed method significantly outperforms all state-of-the-art methods under any missing situation.
- Score: 26.394130795896704
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate segmentation of brain tumors from magnetic resonance imaging (MRI)
is clinically relevant for diagnosis, prognosis, and surgical treatment, and
requires multiple modalities to provide complementary morphological and
physiopathologic information. However, missing modalities commonly occur in
clinical practice due to image corruption, artifacts, differing acquisition
protocols, or allergies to certain contrast agents. Though existing efforts
demonstrate the possibility of a unified model for all missing situations, most
of them perform poorly when more than one modality is missing. In this paper,
we propose a novel Adversarial Co-training Network (ACN) to solve this issue:
a series of independent yet related models is trained, each dedicated to one
missing situation, with significantly better results. Specifically, ACN adopts
a novel co-training network, which enables a coupled learning process for the
full-modality and missing-modality paths to supplement each other's domain and
feature representations and, more importantly, to recover the 'missing'
information of absent modalities. Two unsupervised modules, the entropy and
knowledge adversarial learning modules, are then proposed to minimize the
domain gap while enhancing prediction reliability and encouraging the alignment
of latent representations, respectively. We also adapt modality-mutual
information knowledge transfer learning to ACN to retain the rich mutual
information among modalities. Extensive experiments on the BraTS 2018 dataset
show that our proposed method significantly outperforms all state-of-the-art
methods under any missing situation.
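The co-training idea above can be made concrete with a small sketch. The following is a minimal PyTorch-style toy, assuming placeholder networks and a simplified soft-label KL term in place of the paper's full entropy/knowledge adversarial modules and modality-mutual information transfer; it is not the authors' implementation.

```python
# Minimal PyTorch-style sketch of the co-training idea (assumed toy
# components, NOT the authors' implementation): a full-modality path and
# a missing-modality path are trained jointly, and the missing path's
# soft predictions are pulled toward the full path via a KL term, a
# simplified stand-in for the paper's knowledge-transfer machinery.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySegNet(nn.Module):
    """Placeholder 3D segmentation network standing in for each path."""
    def __init__(self, in_ch, n_classes=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv3d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, n_classes, 1),
        )
    def forward(self, x):
        return self.body(x)  # per-voxel class logits

def cotraining_loss(full_logits, miss_logits, target, T=4.0, lam=1.0):
    # Supervised segmentation terms for both paths.
    sup = (F.cross_entropy(full_logits, target)
           + F.cross_entropy(miss_logits, target))
    # Soft-label transfer: the missing-modality path mimics the
    # (detached) full-modality path at temperature T.
    kl = F.kl_div(
        F.log_softmax(miss_logits / T, dim=1),
        F.softmax(full_logits / T, dim=1).detach(),
        reduction="batchmean",
    ) * (T * T)
    return sup + lam * kl

full_path = TinySegNet(in_ch=4)   # all four MRI modalities available
miss_path = TinySegNet(in_ch=1)   # e.g. only one modality available
x_full = torch.randn(2, 4, 16, 16, 16)
x_miss = x_full[:, :1]            # simulate the missing-modality input
y = torch.randint(0, 4, (2, 16, 16, 16))
cotraining_loss(full_path(x_full), miss_path(x_miss), y).backward()
```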
Related papers
- Robust Divergence Learning for Missing-Modality Segmentation [6.144772447916824]
Multimodal Magnetic Resonance Imaging (MRI) provides essential complementary information for analyzing brain tumor subregions.
While methods using four common MRI modalities for automatic segmentation have shown success, they often face challenges with missing modalities due to image quality issues, inconsistent protocols, allergic reactions, or cost factors.
A novel single-modality parallel processing network framework based on Hölder divergence and mutual information is introduced (a toy version of the divergence is sketched below).
arXiv Detail & Related papers (2024-11-13T03:03:30Z)
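As a hedged illustration of the divergence used in the entry above: the sketch below computes a discrete Hölder-type divergence in the form that follows directly from Hölder's inequality, so it is non-negative by construction. The exact variant and exponents used in the paper may differ.

```python
# Discrete Hoelder-type divergence between two distributions, in the
# form implied by Hoelder's inequality (conjugate exponents 1/a + 1/b
# = 1, gamma > 0). An illustrative assumption, not the paper's exact
# definition.
import torch

def hoelder_divergence(p, q, a=2.0, gamma=2.0, eps=1e-12):
    """log[(sum p^g)^(1/a) * (sum q^g)^(1/b) / sum p^(g/a) q^(g/b)]."""
    b = a / (a - 1.0)                      # conjugate exponent of a
    num = (p.pow(gamma).sum().pow(1.0 / a)
           * q.pow(gamma).sum().pow(1.0 / b))
    den = (p.pow(gamma / a) * q.pow(gamma / b)).sum()
    return torch.log((num + eps) / (den + eps))

p = torch.rand(1000); p = p / p.sum()      # two discrete distributions
q = torch.rand(1000); q = q / q.sum()
print(hoelder_divergence(p, q))            # >= 0 by Hoelder's inequality
print(hoelder_divergence(p, p))            # ~0 when the inputs coincide
```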
- UNICORN: A Deep Learning Model for Integrating Multi-Stain Data in Histopathology [2.9389205138207277]
UNICORN is a multi-modal transformer capable of processing multi-stain histopathology for atherosclerosis severity class prediction.
The architecture comprises a two-stage, end-to-end trainable model with specialized modules utilizing transformer self-attention blocks (a generic sketch follows below).
UNICORN achieved a classification accuracy of 0.67, outperforming other state-of-the-art models.
arXiv Detail & Related papers (2024-09-26T12:13:52Z)
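For flavor, here is a generic sketch of the self-attention fusion step; the shapes, stain count, and classification head are placeholders, not the actual UNICORN architecture.

```python
# Generic sketch (assumed shapes and heads, NOT the actual UNICORN
# model) of fusing stain-specific token sets with a transformer
# self-attention block, then pooling for severity classification.
import torch
import torch.nn as nn

enc = nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True)
head = nn.Linear(64, 3)                 # hypothetical severity classes

he_tokens = torch.randn(1, 50, 64)      # tokens from one stain
evg_tokens = torch.randn(1, 50, 64)     # tokens from another stain
tokens = torch.cat([he_tokens, evg_tokens], dim=1)
logits = head(enc(tokens).mean(dim=1))  # attend across stains, pool
print(logits.shape)                     # torch.Size([1, 3])
```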
- MedMAP: Promoting Incomplete Multi-modal Brain Tumor Segmentation with Alignment [20.358300924109162]
In clinical practice, certain modalities of MRI may be missing, which presents a more difficult scenario.
Knowledge Distillation, Domain Adaptation, and Shared Latent Space have emerged as promising strategies.
We propose a novel paradigm that aligns latent features of the involved modalities to a well-defined distribution anchor as a substitute for the pre-trained model (a toy alignment loss is sketched below).
arXiv Detail & Related papers (2024-08-18T13:16:30Z)
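A toy reading of the alignment idea above: pull each modality encoder's latents toward a common, well-defined anchor distribution. The standard-normal anchor and moment-matching loss below are assumptions for illustration, not the paper's exact formulation.

```python
# Toy anchor-alignment loss (assumptions: a standard-normal anchor and
# simple moment matching; not the paper's formulation).
import torch

def anchor_alignment_loss(z):
    """Pull latent batch z toward the N(0, I) anchor via its moments."""
    mu = z.mean(dim=0)
    var = z.var(dim=0, unbiased=False)
    return mu.pow(2).mean() + (var - 1.0).pow(2).mean()

z_t1 = torch.randn(32, 128) * 2.0 + 0.5   # latents from one encoder
z_t2 = torch.randn(32, 128) * 0.3 - 1.0   # latents from another encoder
loss = anchor_alignment_loss(z_t1) + anchor_alignment_loss(z_t2)
print(loss)  # every encoder is pulled toward the same shared anchor
```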
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Incomplete Multimodal Learning for Complex Brain Disorders Prediction [65.95783479249745]
We propose a new incomplete multimodal data integration approach that employs transformers and generative adversarial networks.
We apply our new method to predict cognitive degeneration and disease outcomes using multimodal imaging and genetic data from the Alzheimer's Disease Neuroimaging Initiative cohort.
arXiv Detail & Related papers (2023-05-25T16:29:16Z)
- Cross-Modality Brain Tumor Segmentation via Bidirectional Global-to-Local Unsupervised Domain Adaptation [61.01704175938995]
In this paper, we propose a novel Bidirectional Global-to-Local (BiGL) adaptation framework under a UDA scheme.
Specifically, a bidirectional image synthesis and segmentation module is proposed to segment the brain tumor.
The proposed method outperforms several state-of-the-art unsupervised domain adaptation methods by a large margin.
arXiv Detail & Related papers (2021-05-17T10:11:45Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class (a toy episode is sketched below).
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
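As a hedged illustration of one few-shot episode: the toy below pools a class prototype from masked support features and scores query pixels by cosine similarity. The backbone and the paper's global correlation module are omitted; names and shapes are placeholders.

```python
# Hedged sketch of one few-shot segmentation episode with a
# prototype-style head (the paper's global correlation module and
# backbone are omitted; shapes are placeholders).
import torch
import torch.nn.functional as F

def episode_logits(support_feat, support_mask, query_feat, tau=20.0):
    """support_feat/query_feat: (C,H,W); support_mask: (H,W) in {0,1}."""
    m = support_mask.unsqueeze(0)                                  # (1,H,W)
    proto = (support_feat * m).sum((1, 2)) / m.sum().clamp(min=1)  # (C,)
    sim = F.cosine_similarity(query_feat, proto.view(-1, 1, 1), dim=0)
    return tau * sim  # (H,W) foreground logits for the query image

sf, qf = torch.randn(64, 32, 32), torch.randn(64, 32, 32)
mask = (torch.rand(32, 32) > 0.5).float()
print(episode_logits(sf, mask, qf).shape)  # torch.Size([32, 32])
```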
- Towards Cross-modality Medical Image Segmentation with Online Mutual Knowledge Distillation [71.89867233426597]
In this paper, we aim to exploit the prior knowledge learned from one modality to improve the segmentation performance on another modality.
We propose a novel Mutual Knowledge Distillation scheme to thoroughly exploit the modality-shared knowledge (a minimal version is sketched below).
Experimental results on the public multi-class cardiac segmentation data, i.e., MMWHS 2017, show that our method achieves large improvements on CT segmentation.
arXiv Detail & Related papers (2020-10-04T10:25:13Z)
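A minimal sketch of the mutual distillation idea in the entry above, assuming two modality-specific segmenters that exchange softened predictions; detaching each target is one common choice, not necessarily the paper's exact scheme.

```python
# Online mutual distillation between two modality-specific segmenters
# (e.g. CT and MR): each network's softened prediction supervises the
# other. A simplified reading of the summary above.
import torch
import torch.nn.functional as F

def mutual_kd_loss(logits_a, logits_b, T=2.0):
    kl_ab = F.kl_div(F.log_softmax(logits_b / T, dim=1),
                     F.softmax(logits_a / T, dim=1).detach(),
                     reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_a / T, dim=1),
                     F.softmax(logits_b / T, dim=1).detach(),
                     reduction="batchmean")
    return (kl_ab + kl_ba) * T * T

la = torch.randn(2, 8, 32, 32, requires_grad=True)  # CT-branch logits
lb = torch.randn(2, 8, 32, 32, requires_grad=True)  # MR-branch logits
mutual_kd_loss(la, lb).backward()
```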
- Learning joint segmentation of tissues and brain lesions from task-specific hetero-modal domain-shifted datasets [6.049813979681482]
We propose a novel approach to build a joint tissue and lesion segmentation model from aggregated task-specific datasets.
We show how the expected risk can be decomposed and optimised empirically.
For each individual task, our joint approach reaches comparable performance to task-specific and fully-supervised models.
arXiv Detail & Related papers (2020-09-08T22:00:00Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes (a toy gated-fusion step is sketched below).
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
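To make the fusion step of the last entry concrete: a toy gated-fusion module that sums sigmoid-gated modality features and degrades gracefully when a modality's feature map is absent. The shared 1x1-conv gate and 2D shapes are assumptions, not the paper's design.

```python
# Toy gated fusion over whatever modality features are present; a
# missing modality is simply left out of the fusion (assumed design,
# not the paper's exact module).
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.gate = nn.Conv2d(ch, ch, 1)   # produces per-pixel gates
    def forward(self, feats):              # feats: list of (N,C,H,W)
        gated = [f * torch.sigmoid(self.gate(f)) for f in feats]
        return torch.stack(gated, dim=0).sum(dim=0)

fuse = GatedFusion(ch=32)
t1 = torch.randn(1, 32, 24, 24)
t2 = torch.randn(1, 32, 24, 24)
print(fuse([t1, t2]).shape)  # fusion over all available modalities
print(fuse([t1]).shape)      # still works with a modality missing
```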
This list is automatically generated from the titles and abstracts of the papers on this site.