Multi-modal Brain Tumor Segmentation via Missing Modality Synthesis and
Modality-level Attention Fusion
- URL: http://arxiv.org/abs/2203.04586v1
- Date: Wed, 9 Mar 2022 09:08:48 GMT
- Title: Multi-modal Brain Tumor Segmentation via Missing Modality Synthesis and
Modality-level Attention Fusion
- Authors: Ziqi Huang, Li Lin, Pujin Cheng, Linkai Peng, Xiaoying Tang
- Abstract summary: We propose an end-to-end framework named Modality-Level Attention Fusion Network (MAF-Net)
Our proposed MAF-Net is found to yield superior T1ce synthesis performance and accurate brain tumor segmentation.
- Score: 3.9562534927482704
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Multi-modal magnetic resonance (MR) imaging provides great potential for
diagnosing and analyzing brain gliomas. In clinical scenarios, common MR
sequences such as T1, T2 and FLAIR can be obtained simultaneously in a single
scanning process. However, acquiring contrast enhanced modalities such as T1ce
requires additional time, cost, and injection of contrast agent. As such, it is
clinically meaningful to develop a method to synthesize unavailable modalities
which can also be used as additional inputs to downstream tasks (e.g., brain
tumor segmentation) to enhance performance. In this work, we propose an
end-to-end framework named Modality-Level Attention Fusion Network (MAF-Net),
wherein we innovatively conduct patchwise contrastive learning for extracting
multi-modal latent features and dynamically assigning attention weights to fuse
different modalities. Through extensive experiments on BraTS2020, our proposed
MAF-Net is found to yield superior T1ce synthesis performance (SSIM of 0.8879
and PSNR of 22.78) and accurate brain tumor segmentation (mean Dice scores of
67.9%, 41.8% and 88.0% on segmenting the tumor core, enhancing tumor and whole
tumor).
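The abstract describes dynamically assigning attention weights to fuse modality-level features, but does not specify the mechanism. As an illustrative sketch only (not the authors' implementation), one common realization is to turn learned per-modality logits into softmax weights and take a weighted sum of the modality feature maps; all function names and tensor shapes below are assumptions:

```python
import numpy as np

def softmax(x, axis=0):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - np.max(x, axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def modality_attention_fusion(features, logits):
    """Fuse per-modality feature maps with softmax attention weights.

    features: array of shape (M, C, H, W) -- one feature map per modality
    logits:   array of shape (M,)         -- per-modality attention logits
    Returns a fused feature map of shape (C, H, W).
    """
    weights = softmax(logits)  # (M,), non-negative, sums to 1
    # Weighted sum over the modality axis
    return np.tensordot(weights, features, axes=(0, 0))

# Toy example: three modalities (e.g. T1, T2, FLAIR), 2-channel 4x4 features
feats = np.random.rand(3, 2, 4, 4)
fused = modality_attention_fusion(feats, np.array([0.5, 1.0, -0.2]))
print(fused.shape)  # (2, 4, 4)
```

With equal logits this reduces to a plain average over modalities; in a trained network the logits would come from a learned attention module rather than being fixed constants.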
Related papers
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention
for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Automated ensemble method for pediatric brain tumor segmentation [0.0]
This study introduces a novel ensemble approach using ONet and modified versions of UNet.
Data augmentation ensures robustness and accuracy across different scanning protocols.
Results indicate that this advanced ensemble approach offers promising prospects for enhanced diagnostic accuracy.
arXiv Detail & Related papers (2023-08-14T15:29:32Z)
- A New Deep Hybrid Boosted and Ensemble Learning-based Brain Tumor
Analysis using MRI [0.28675177318965034]
A two-phase deep learning-based framework is proposed to detect and categorize brain tumors in magnetic resonance images (MRIs).
In the first phase, a novel deep boosted features and ensemble classifiers (DBF-EC) scheme is proposed to detect tumor MRI images from healthy individuals effectively.
In the second phase, a new hybrid feature-fusion-based brain tumor classification approach is proposed, comprising dynamic-static features and an ML classifier to categorize different tumor types.
arXiv Detail & Related papers (2022-01-14T10:24:47Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Feature-enhanced Generation and Multi-modality Fusion based Deep Neural
Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all types of MRIs are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z)
- H2NF-Net for Brain Tumor Segmentation using Multimodal MR Imaging: 2nd
Place Solution to BraTS Challenge 2020 Segmentation Task [96.49879910148854]
Our H2NF-Net uses the single and cascaded HNF-Nets to segment different brain tumor sub-regions.
We trained and evaluated our model on the Multimodal Brain Tumor Challenge (BraTS) 2020 dataset.
Our method won the second place in the BraTS 2020 challenge segmentation task out of nearly 80 participants.
arXiv Detail & Related papers (2020-12-30T20:44:55Z)
- Soft Tissue Sarcoma Co-Segmentation in Combined MRI and PET/CT Data [2.2515303891664358]
Tumor segmentation in multimodal medical images has seen a growing trend towards deep learning based methods.
We propose a simultaneous co-segmentation method, which enables multimodal feature learning through modality-specific encoder and decoder branches.
We demonstrate the effectiveness of our approach on public soft tissue sarcoma data, which comprises MRI (T1 and T2 sequence) and PET/CT scans.
arXiv Detail & Related papers (2020-08-28T09:15:42Z)
- Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic
and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes multi-modal anatomic sequences from lesion information.
A confidence-guided module steers the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model performs better than state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR
Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- Segmentation of the Myocardium on Late-Gadolinium Enhanced MRI based on
2.5D Residual Squeeze and Excitation Deep Learning Model [55.09533240649176]
The aim of this work is to develop an accurate automatic segmentation method based on deep learning models for the myocardial borders on LGE-MRI.
A total number of 320 exams (with a mean number of 6 slices per exam) were used for training and 28 exams used for testing.
The performance analysis of the proposed ensemble model in the basal and middle slices was similar as compared to intra-observer study and slightly lower at apical slices.
arXiv Detail & Related papers (2020-05-27T20:44:38Z)
- Multi-Modality Generative Adversarial Networks with Tumor Consistency
Loss for Brain MR Image Synthesis [30.64847799586407]
We propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality T2 simultaneously.
The experimental results show that the quality of the synthesized images is better than the one synthesized by the baseline model, pix2pix.
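Synthesis quality in these papers is typically quantified with PSNR and SSIM (e.g. the SSIM of 0.8879 and PSNR of 22.78 reported for MAF-Net above). As a minimal sketch of the PSNR metric, assuming image intensities normalized to [0, 1]:

```python
import numpy as np

def psnr(reference, synthesized, data_range=1.0):
    """Peak signal-to-noise ratio (dB); higher means closer to the reference."""
    mse = np.mean((reference - synthesized) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(data_range ** 2 / mse)

ref = np.zeros((8, 8))
noisy = ref + 0.1          # uniform error of 0.1 -> MSE = 0.01
print(psnr(ref, noisy))    # 20.0 dB
```

SSIM is more involved (local luminance, contrast, and structure comparisons over sliding windows); in practice both metrics are usually taken from a library such as scikit-image rather than hand-rolled.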
arXiv Detail & Related papers (2020-05-02T21:33:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.