Latent Correlation Representation Learning for Brain Tumor Segmentation
with Missing MRI Modalities
- URL: http://arxiv.org/abs/2104.06231v1
- Date: Tue, 13 Apr 2021 14:21:09 GMT
- Title: Latent Correlation Representation Learning for Brain Tumor Segmentation
with Missing MRI Modalities
- Authors: Tongxue Zhou, Stéphane Canu, Pierre Vera, Su Ruan
- Abstract summary: Accurately segmenting brain tumors from MR images is key to clinical diagnosis and treatment planning.
It is common for some imaging modalities to be missing in clinical practice.
We present a novel brain tumor segmentation algorithm that handles missing modalities.
- Score: 2.867517731896504
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Magnetic Resonance Imaging (MRI) is a widely used technique for assessing
brain tumors. Accurately segmenting brain tumors from MR images is key to
clinical diagnosis and treatment planning. In addition, multi-modal MR images
provide complementary information for accurate brain tumor segmentation.
However, it is common for some imaging modalities to be missing in clinical
practice. In this paper, we present a novel brain tumor segmentation algorithm
that handles missing modalities. Since strong correlations exist among
modalities, a correlation model is proposed to explicitly represent the latent
multi-source correlation. Thanks to the obtained correlation representation,
segmentation becomes more robust when a modality is missing. First, the
individual representation produced by each encoder is used to estimate the
modality-independent parameters. Then, the correlation model transforms all the
individual representations into latent multi-source correlation
representations. Finally, the correlation representations across modalities are
fused via an attention mechanism into a shared representation that emphasizes
the most important features for segmentation. We evaluate our model on the
BraTS 2018 and BraTS 2019 datasets; it outperforms current state-of-the-art
methods and produces robust results when one or more modalities are missing.
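The pipeline sketched in the abstract (per-modality encoder outputs, a correlation transform governed by modality-independent parameters, and attention-based fusion restricted to available modalities) can be illustrated with a minimal NumPy sketch. The affine correlation transform, the mean-activation attention score, and all parameter values below are illustrative assumptions, not the paper's actual learned modules:

```python
import numpy as np

rng = np.random.default_rng(0)

def correlation_transform(reps, gamma, beta):
    """Map individual modality representations into a shared latent
    correlation space via an affine transform; gamma and beta stand in
    for the modality-independent parameters estimated per encoder."""
    return gamma * reps + beta

def attention_fuse(reps, mask):
    """Fuse per-modality correlation representations into one shared
    representation with a softmax attention over available modalities.

    reps : (M, D) array, one D-dim latent vector per modality
    mask : (M,) boolean array, True where the modality is present
    """
    # Score each modality by its mean activation (a stand-in for a
    # learned attention scorer).
    scores = reps.mean(axis=1)
    scores = np.where(mask, scores, -np.inf)   # exclude missing modalities
    weights = np.exp(scores - scores[mask].max())
    weights = np.where(mask, weights, 0.0)
    weights /= weights.sum()
    # Weighted sum yields the shared representation for segmentation.
    return weights @ reps, weights

# Four MR modalities (e.g. T1, T1c, T2, FLAIR), 8-dim toy latents.
individual = rng.normal(size=(4, 8))
correlated = correlation_transform(individual, gamma=1.5, beta=0.1)

mask = np.array([True, True, False, True])     # T2 missing at test time
shared, weights = attention_fuse(correlated, mask)
```

The key property the sketch demonstrates is that the missing modality receives zero attention weight, so the shared representation degrades gracefully rather than failing when a modality is absent.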
Related papers
- Towards General Text-guided Image Synthesis for Customized Multimodal Brain MRI Generation [51.28453192441364]
Multimodal brain magnetic resonance (MR) imaging is indispensable in neuroscience and neurology.
Current MR image synthesis approaches are typically trained on independent datasets for specific tasks.
We present TUMSyn, a Text-guided Universal MR image Synthesis model, which can flexibly generate brain MR images.
arXiv Detail & Related papers (2024-09-25T11:14:47Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- Exploiting Partial Common Information Microstructure for Multi-Modal Brain Tumor Segmentation [11.583406152227637]
Learning with multiple modalities is crucial for automated brain tumor segmentation from magnetic resonance imaging data.
Existing approaches are oblivious to partial common information shared by subsets of the modalities.
In this paper, we show that identifying such partial common information can significantly boost the discriminative power of image segmentation models.
arXiv Detail & Related papers (2023-02-06T01:28:52Z)
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a model-guided deep unfolding network by incorporating the MRI observation matrix.
arXiv Detail & Related papers (2022-09-15T03:58:30Z)
- SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities [4.855689194518905]
We propose a style matching U-Net (SMU-Net) for brain tumour segmentation on MRI images.
Our co-training approach utilizes a content and style-matching mechanism to distill the informative features from the full-modality network into a missing modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
arXiv Detail & Related papers (2022-04-06T17:55:19Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Feature-enhanced Generation and Multi-modality Fusion based Deep Neural Network for Brain Tumor Segmentation with Missing MR Modalities [2.867517731896504]
The main problem is that not all MRI modalities are always available in clinical exams.
We propose a novel brain tumor segmentation network in the case of missing one or more modalities.
The proposed network consists of three sub-networks: a feature-enhanced generator, a correlation constraint block and a segmentation network.
arXiv Detail & Related papers (2021-11-08T10:59:40Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid domain learning framework, which allows it to simultaneously recover the frequency signal in the $k$-space domain.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Brain tumor segmentation with missing modalities via latent multi-source correlation representation [6.060020806741279]
A novel correlation representation block is proposed to explicitly discover the latent multi-source correlation.
Thanks to the obtained correlation representation, the segmentation becomes more robust in the case of missing modalities.
We evaluate our model on the BraTS 2018 dataset, where it outperforms the current state-of-the-art method and produces robust results when one or more modalities are missing.
arXiv Detail & Related papers (2020-03-19T15:47:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.