SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities
- URL: http://arxiv.org/abs/2204.02961v1
- Date: Wed, 6 Apr 2022 17:55:19 GMT
- Title: SMU-Net: Style matching U-Net for brain tumor segmentation with missing modalities
- Authors: Reza Azad, Nika Khosravi, Dorit Merhof
- Abstract summary: We propose a style matching U-Net (SMU-Net) for brain tumour segmentation on MRI images.
Our co-training approach utilizes a content- and style-matching mechanism to distill the informative features from the full-modality network into a missing-modality network.
Our style matching module adaptively recalibrates the representation space by learning a matching function to transfer the informative and textural features from a full-modality path into a missing-modality path.
- Score: 4.855689194518905
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Gliomas are one of the most prevalent types of primary brain tumours,
accounting for more than 30\% of all cases and they develop from the glial stem
or progenitor cells. In theory, the majority of brain tumours could well be
identified exclusively by the use of Magnetic Resonance Imaging (MRI). Each MRI
modality delivers distinct information on the soft tissue of the human brain
and integrating all of them would provide comprehensive data for the accurate
segmentation of the glioma, which is crucial for the patient's prognosis,
diagnosis, and determining the best follow-up treatment. Unfortunately, MRI is
prone to artifacts for a variety of reasons, which might result in missing one
or more MRI modalities. Various strategies have been proposed over the years to
synthesize the missing modality or compensate for the influence it has on
automated segmentation models. However, these methods usually fail to model the
underlying missing information. In this paper, we propose a style matching
U-Net (SMU-Net) for brain tumour segmentation on MRI images. Our co-training
approach utilizes a content and style-matching mechanism to distill the
informative features from the full-modality network into a missing-modality network. To do so, we encode both full-modality and missing-modality data into a latent space and then decompose the representation space into style and content representations. Our style matching module adaptively recalibrates the
representation space by learning a matching function to transfer the
informative and textural features from a full-modality path into a
missing-modality path. Moreover, by modelling the mutual information, our
content module suppresses the less informative features and re-calibrates the
representation space based on discriminative semantic features. Evaluation on the BraTS 2018 dataset shows significant results.
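To make the described mechanism concrete, the sketch below shows one plausible reading of the style/content matching idea: "style" as channel-wise feature statistics (as in AdaIN) and "content" as the normalized residual. This is a hedged illustration, not the authors' SMU-Net implementation; the 1x1 matching layer, loss forms, and all names are assumptions.

```python
# Hedged sketch of style/content matching between a full-modality and a
# missing-modality feature path. Illustrative only: the AdaIN-style
# statistics, 1x1 matching layer, and loss forms are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

def style_stats(feat):
    # channel-wise mean/std over spatial dims, a common proxy for "style"
    mu = feat.mean(dim=(2, 3), keepdim=True)
    sigma = feat.std(dim=(2, 3), keepdim=True) + 1e-5
    return mu, sigma

class StyleMatcher(nn.Module):
    """Learns a matching function that transfers full-modality style
    statistics onto the missing-modality path."""
    def __init__(self, channels):
        super().__init__()
        self.match = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, f_missing, f_full):
        mu_m, sig_m = style_stats(f_missing)
        mu_f, sig_f = style_stats(f_full)
        normalized = (f_missing - mu_m) / sig_m           # "content" part
        restyled = self.match(normalized * sig_f + mu_f)  # re-styled features
        style_loss = F.mse_loss(mu_m, mu_f) + F.mse_loss(sig_m, sig_f)
        return restyled, style_loss

def content_loss(f_missing, f_full):
    # align the normalized ("content") parts of the two paths
    cm = f_missing - f_missing.mean(dim=(2, 3), keepdim=True)
    cf = f_full - f_full.mean(dim=(2, 3), keepdim=True)
    return F.mse_loss(cm, cf)

# usage with dummy feature maps (batch 2, 32 channels, 24x24)
matcher = StyleMatcher(32)
f_miss, f_full = torch.randn(2, 32, 24, 24), torch.randn(2, 32, 24, 24)
restyled, s_loss = matcher(f_miss, f_full)
total = s_loss + content_loss(f_miss, f_full)
```

A real co-training setup would presumably apply such matching at several encoder depths and combine the style and content losses with the segmentation loss; this single-scale sketch only shows the shape of the idea.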
Related papers
- SegmentAnyBone: A Universal Model that Segments Any Bone at Any Location on MRI [13.912230325828943]
We propose a versatile, publicly available deep-learning model for bone segmentation in MRI across multiple standard MRI locations.
The proposed model can operate in two modes: fully automated segmentation and prompt-based segmentation.
Our contributions include (1) collecting and annotating a new MRI dataset across various MRI protocols, encompassing over 300 annotated volumes and 8485 annotated slices across diverse anatomic regions.
arXiv Detail & Related papers (2024-01-23T18:59:25Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are common in both children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- M3AE: Multimodal Representation Learning for Brain Tumor Segmentation with Missing Modalities [29.455215925816187]
Multimodal magnetic resonance imaging (MRI) provides complementary information for sub-region analysis of brain tumors.
It is common to have one or more modalities missing due to image corruption, artifacts, acquisition protocols, allergy to contrast agents, or simply cost.
We propose a novel two-stage framework for brain tumor segmentation with missing modalities.
arXiv Detail & Related papers (2023-03-09T14:54:30Z)
- DIGEST: Deeply supervIsed knowledGE tranSfer neTwork learning for brain tumor segmentation with incomplete multi-modal MRI scans [16.93394669748461]
Brain tumor segmentation based on multi-modal magnetic resonance imaging (MRI) plays a pivotal role in assisting brain cancer diagnosis, treatment, and postoperative evaluations.
Despite the inspiring performance achieved by existing automatic segmentation methods, complete multi-modal MRI data are often unavailable in real-world clinical applications.
We propose a Deeply supervIsed knowledGE tranSfer neTwork (DIGEST), which achieves accurate brain tumor segmentation under different modality-missing scenarios.
arXiv Detail & Related papers (2022-11-15T09:01:14Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) latent space to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
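As a rough, assumption-laden illustration of imputing missing sub-modalities from a shared latent code, consider the sketch below. It deliberately omits the GP prior that is the core of MGP-VAE; the encoder/decoder shapes and the mean-pooling of posterior means are placeholders.

```python
# Illustrative-only imputation of missing MRI sub-modalities via a
# shared latent code. The Gaussian Process prior that defines MGP-VAE
# is intentionally omitted; all names and shapes are assumptions.
import torch
import torch.nn as nn

class SharedLatentImputer(nn.Module):
    def __init__(self, n_modalities=4, dim=4096, latent=64):
        super().__init__()
        self.encoders = nn.ModuleList([nn.Linear(dim, 2 * latent) for _ in range(n_modalities)])
        self.decoders = nn.ModuleList([nn.Linear(latent, dim) for _ in range(n_modalities)])

    def forward(self, mods, present):
        # pool posterior means from the available sub-modalities
        mus = [self.encoders[i](mods[i]).chunk(2, dim=-1)[0] for i in present]
        z = torch.stack(mus).mean(dim=0)
        missing = [i for i in range(len(self.decoders)) if i not in present]
        return {i: self.decoders[i](z) for i in missing}

# impute modalities 2 and 3 from flattened scans 0 and 1
model = SharedLatentImputer()
mods = [torch.randn(1, 4096) for _ in range(4)]
imputed = model(mods, present=[0, 1])
```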
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Latent Correlation Representation Learning for Brain Tumor Segmentation with Missing MRI Modalities [2.867517731896504]
Accurately segmenting brain tumors from MR images is key to clinical diagnosis and treatment planning.
In clinical practice, it is common for one or more imaging modalities to be missing.
We present a novel brain tumor segmentation algorithm with missing modalities.
arXiv Detail & Related papers (2021-04-13T14:21:09Z)
- Does anatomical contextual information improve 3D U-Net based brain tumor segmentation? [0.0]
We investigate whether adding contextual information from the brain anatomy improves U-Net-based brain tumor segmentation.
The impact of adding contextual information was investigated in terms of overall segmentation accuracy, model training time, domain generalization, and compensation for fewer MR modalities available for each subject.
arXiv Detail & Related papers (2020-10-26T09:57:58Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
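The stated generative model, x_i = A_i s + n_i with shared independent sources s, lends itself to a tiny worked example. The toy sketch below uses assumed dimensions, known mixing matrices, and a naive unmix-and-average estimate rather than the paper's actual estimator.

```python
# Toy illustration of the MultiView ICA generative model: each subject i
# observes x_i = A_i @ s + n_i with shared independent sources s.
# Dimensions and the unmix-and-average estimate are assumptions, not the
# paper's algorithm.
import numpy as np

rng = np.random.default_rng(0)
n_subjects, n_sources, n_samples = 5, 3, 1000

s = rng.laplace(size=(n_sources, n_samples))  # shared non-Gaussian sources
A = [rng.normal(size=(n_sources, n_sources)) for _ in range(n_subjects)]
x = [A[i] @ s + 0.1 * rng.normal(size=(n_sources, n_samples))
     for i in range(n_subjects)]              # per-subject observations

# unmix each view with its (here, known) mixing matrix, then average
# across subjects to suppress subject-specific noise
s_hat = np.mean([np.linalg.inv(A[i]) @ x[i] for i in range(n_subjects)], axis=0)
print("relative error:", np.linalg.norm(s_hat - s) / np.linalg.norm(s))
```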
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- M2Net: Multi-modal Multi-channel Network for Overall Survival Time Prediction of Brain Tumor Patients [151.4352001822956]
Early and accurate prediction of overall survival (OS) time can help to obtain better treatment planning for brain tumor patients.
Existing prediction methods rely on radiomic features at the local lesion area of a magnetic resonance (MR) volume.
We propose an end-to-end OS time prediction model, namely the Multi-modal Multi-channel Network (M2Net).
arXiv Detail & Related papers (2020-06-01T05:21:37Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose each input modality into a modality-specific appearance code and a modality-invariant content code, which gated fusion then combines (see the sketch after this entry).
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BraTS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
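To illustrate how such a framework can stay robust when modalities are absent, the following hedged sketch fuses whichever per-modality features are present with learned gates; the sigmoid gating form and shapes are assumptions rather than the paper's exact architecture.

```python
# Hedged sketch of gated fusion over available modalities. The sigmoid
# gate and normalization are illustrative assumptions.
import torch
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuses per-modality feature maps with learned gates so that
    missing modalities are simply excluded from the weighted sum."""
    def __init__(self, channels):
        super().__init__()
        self.gate = nn.Conv2d(channels, channels, kernel_size=1)

    def forward(self, feats, present):
        gated, weights = [], []
        for i in present:                       # only available modalities
            w = torch.sigmoid(self.gate(feats[i]))
            gated.append(w * feats[i])
            weights.append(w)
        # normalize so the fused map is comparable across modality subsets
        return sum(gated) / (sum(weights) + 1e-6)

# fuse modalities 0, 1, 3 when modality 2 is missing
fusion = GatedFusion(16)
feats = [torch.randn(1, 16, 8, 8) for _ in range(4)]
fused = fusion(feats, present=[0, 1, 3])
```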