Learning Multi-Modal Brain Tumor Segmentation from Privileged
Semi-Paired MRI Images with Curriculum Disentanglement Learning
- URL: http://arxiv.org/abs/2208.12781v1
- Date: Fri, 26 Aug 2022 16:52:43 GMT
- Title: Learning Multi-Modal Brain Tumor Segmentation from Privileged
Semi-Paired MRI Images with Curriculum Disentanglement Learning
- Authors: Zecheng Liu and Jia Wei and Rui Li
- Abstract summary: We present a novel two-step (intra-modality and inter-modality) curriculum disentanglement learning framework for brain tumor segmentation.
In the first step, we propose to conduct reconstruction and segmentation with augmented intra-modality style-consistent images.
In the second step, the model jointly performs reconstruction, unsupervised/supervised translation, and segmentation for both unpaired and paired inter-modality images.
- Score: 4.43142018105102
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the difficulties of obtaining multimodal paired images in clinical
practice, recent studies propose to train brain tumor segmentation models with
unpaired images and capture complementary information through modality
translation. However, these models cannot fully exploit the complementary
information from different modalities. In this work, we thus present a novel
two-step (intra-modality and inter-modality) curriculum disentanglement
learning framework to effectively utilize privileged semi-paired images, i.e.
limited paired images that are only available in training, for brain tumor
segmentation. Specifically, in the first step, we propose to conduct
reconstruction and segmentation with augmented intra-modality style-consistent
images. In the second step, the model jointly performs reconstruction,
unsupervised/supervised translation, and segmentation for both unpaired and
paired inter-modality images. A content consistency loss and a supervised
translation loss are proposed to leverage complementary information from
different modalities in this step. Through these two steps, our method
effectively extracts modality-specific style codes describing the attenuation
of tissue features and image contrast, and modality-invariant content codes
containing anatomical and functional information from the input images.
Experiments on three brain tumor segmentation tasks show that our model
outperforms competing segmentation models based on unpaired images.
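To make the two-step objective concrete, here is a minimal sketch of the style/content disentanglement with a content consistency loss. Everything below (module sizes, names, the exact loss form) is an illustrative assumption, not the authors' released implementation: a content encoder keeps the spatial anatomy, a style encoder pools a contrast vector, and the loss asks a cross-modality translation to carry the same content code as its source.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DisentangleNet(nn.Module):
    """Toy disentanglement model: modality-invariant content + modality-specific style."""
    def __init__(self, ch=16, style_dim=8):
        super().__init__()
        # Content encoder keeps the spatial layout (anatomy/function).
        self.content_enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU())
        # Style encoder pools the image down to a small contrast/attenuation vector.
        self.style_enc = nn.Sequential(
            nn.Conv2d(1, ch, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(ch, style_dim))
        self.decoder = nn.Sequential(
            nn.Conv2d(ch + style_dim, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1))

    def decode(self, content, style):
        # Broadcast the style vector over the spatial grid and decode.
        b, _, h, w = content.shape
        s = style[:, :, None, None].expand(b, style.shape[1], h, w)
        return self.decoder(torch.cat([content, s], dim=1))

def content_consistency_loss(model, x_a, x_b):
    """Translate a -> b, then require the translation to reproduce a's content code."""
    c_a = model.content_enc(x_a)
    s_b = model.style_enc(x_b)
    x_ab = model.decode(c_a, s_b)        # a's anatomy rendered in b's style
    c_ab = model.content_enc(x_ab)
    return F.l1_loss(c_ab, c_a)
```

In the second (inter-modality) step, a term like this would sit alongside the reconstruction, translation, and segmentation losses.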
Related papers
- Enhancing Incomplete Multi-modal Brain Tumor Segmentation with Intra-modal Asymmetry and Inter-modal Dependency [31.047259264831947]
A common problem in practice is the unavailability of some modalities due to varying scanning protocols and patient conditions.
Previous methods have attempted to address this by fusing accessible multi-modal features, leveraging attention mechanisms, and synthesizing missing modalities.
We propose a novel approach that enhances the deep learning-based brain tumor segmentation model from two perspectives.
arXiv Detail & Related papers (2024-06-14T16:54:53Z)
- Cross-model Mutual Learning for Exemplar-based Medical Image Segmentation [25.874281336821685]
We introduce CMEMS, a novel Cross-model Mutual learning framework for Exemplar-based Medical image Segmentation.
arXiv Detail & Related papers (2024-04-18T00:18:07Z)
- Dual-scale Enhanced and Cross-generative Consistency Learning for Semi-supervised Medical Image Segmentation [49.57907601086494]
Medical image segmentation plays a crucial role in computer-aided diagnosis.
We propose DEC-Seg, a novel Dual-scale Enhanced and Cross-generative consistency learning framework for semi-supervised medical image segmentation.
arXiv Detail & Related papers (2023-12-26T12:56:31Z)
- M-GenSeg: Domain Adaptation For Target Modality Tumor Segmentation With Annotation-Efficient Supervision [4.023899199756184]
M-GenSeg is a new semi-supervised generative training strategy for cross-modality tumor segmentation.
We evaluate the performance on a brain tumor segmentation dataset composed of four different contrast sequences.
Unlike the prior art, M-GenSeg also introduces the ability to train with a partially annotated source modality.
arXiv Detail & Related papers (2022-12-14T15:19:06Z)
- Unsupervised Image Registration Towards Enhancing Performance and Explainability in Cardiac And Brain Image Analysis [3.5718941645696485]
Inter- and intra-modality affine and non-rigid image registration is an essential medical image analysis process in clinical imaging.
We present an unsupervised deep learning registration methodology which can accurately model affine and non-rigid transformations.
Our methodology performs bi-directional cross-modality image synthesis to learn modality-invariant latent representations.
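As a rough illustration of the bi-directional synthesis idea (layer sizes and names are assumptions, not this paper's code): a shared encoder feeds one decoder per modality, and each latent must reconstruct the other modality, which pushes the latent toward modality invariance.

```python
import torch.nn as nn
import torch.nn.functional as F

# Shared encoder with per-modality decoders (illustrative sizes).
enc = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
dec_a = nn.Conv2d(16, 1, 3, padding=1)
dec_b = nn.Conv2d(16, 1, 3, padding=1)

def synthesis_losses(x_a, x_b):
    """Bi-directional cross-modality synthesis on a spatially aligned pair (x_a, x_b)."""
    z_a, z_b = enc(x_a), enc(x_b)
    loss_ab = F.l1_loss(dec_b(z_a), x_b)   # synthesize modality b from a's latent
    loss_ba = F.l1_loss(dec_a(z_b), x_a)   # and modality a from b's latent
    loss_lat = F.l1_loss(z_a, z_b)         # optionally align the latents directly
    return loss_ab + loss_ba + loss_lat
```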
arXiv Detail & Related papers (2022-03-07T12:54:33Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE to brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
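A minimal sketch of the imputation step, assuming an RBF kernel over sub-modality indices; this is ordinary GP regression on latent codes, not the full MGP-VAE:

```python
import torch

def gp_impute_latent(z_obs, idx_obs, idx_miss, lengthscale=1.0, noise=1e-2):
    """Predict the latent code of a missing sub-modality from observed ones.

    z_obs: (n_obs, latent_dim) latents of observed sub-modalities.
    idx_obs, idx_miss: float tensors of sub-modality indices, e.g. [0., 1., 3.] and [2.].
    """
    def k(a, b):  # RBF kernel over the (hypothetical) sub-modality index axis
        d = a[:, None] - b[None, :]
        return torch.exp(-0.5 * (d / lengthscale) ** 2)

    K = k(idx_obs, idx_obs) + noise * torch.eye(len(idx_obs))
    K_star = k(idx_miss, idx_obs)
    # GP posterior mean at the missing index, applied per latent dimension.
    return K_star @ torch.linalg.solve(K, z_obs)
```

The imputed latent would then be passed through the VAE decoder to synthesize the missing sub-modality.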
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Representation Disentanglement for Multi-modal MR Analysis [15.498244253687337]
Recent works have suggested that multi-modal deep learning analysis can benefit from explicitly disentangling anatomical (shape) and modality (appearance) representations from the images.
We propose a margin loss that regularizes the similarity relationships of the representations across subjects and modalities.
To enable robust training, we introduce a modified conditional convolution to design a single model for encoding images of all modalities.
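A hedged, triplet-style sketch of such a margin loss (the paper's exact similarity relationships may differ): a subject's representation in one modality should be closer to the same subject in another modality than to a different subject, by at least a margin.

```python
import torch.nn.functional as F

def margin_loss(z_anchor, z_same_subject, z_other_subject, margin=0.2):
    """Pull same-subject, cross-modality codes together; push other subjects away."""
    pos = F.cosine_similarity(z_anchor, z_same_subject, dim=1)
    neg = F.cosine_similarity(z_anchor, z_other_subject, dim=1)
    return F.relu(neg - pos + margin).mean()
```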
arXiv Detail & Related papers (2021-02-23T02:08:38Z)
- Few-shot Medical Image Segmentation using a Global Correlation Network with Discriminative Embedding [60.89561661441736]
We propose a novel method for few-shot medical image segmentation.
We construct our few-shot image segmentor using a deep convolutional network trained episodically.
We enhance the discriminability of the deep embedding to encourage clustering of the feature domains of the same class.
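One standard way to realize this, sketched under the assumption of prototype-style scoring (not necessarily the paper's exact design): pool a class prototype from the masked support features and score each query pixel by cosine similarity to it.

```python
import torch.nn.functional as F

def prototype_logits(support_feat, support_mask, query_feat, tau=20.0):
    """support_feat/query_feat: (B, C, H, W); support_mask: (B, 1, H, W) in {0, 1}."""
    # Masked average pooling yields one prototype vector per episode.
    proto = (support_feat * support_mask).sum(dim=(2, 3)) / support_mask.sum(dim=(2, 3)).clamp(min=1)
    proto = proto[:, :, None, None]                              # (B, C, 1, 1)
    # Cosine similarity of every query pixel to the prototype, scaled by tau.
    return tau * F.cosine_similarity(query_feat, proto, dim=1)   # (B, H, W)
```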
arXiv Detail & Related papers (2020-12-10T04:01:07Z)
- Learning Deformable Image Registration from Optimization: Perspective, Modules, Bilevel Training and Beyond [62.730497582218284]
We develop a new deep learning based framework to optimize a diffeomorphic model via multi-scale propagation.
We conduct two groups of image registration experiments on 3D volume datasets including image-to-atlas registration on brain MRI data and image-to-image registration on liver CT data.
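Diffeomorphic deformations are commonly obtained by integrating a stationary velocity field with scaling and squaring; the sketch below shows that standard construction (illustrative, and not this paper's specific multi-scale scheme), here in 2D.

```python
import torch
import torch.nn.functional as F

def warp(img, flow):
    """Warp `img` (B, 1, H, W) by a displacement field `flow` (B, 2, H, W)."""
    b, _, h, w = img.shape
    ys, xs = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float()[None] + flow  # absolute sampling positions
    # Normalize to [-1, 1] as grid_sample expects (x first, then y).
    grid_x = 2 * grid[:, 0] / (w - 1) - 1
    grid_y = 2 * grid[:, 1] / (h - 1) - 1
    return F.grid_sample(img, torch.stack([grid_x, grid_y], dim=-1), align_corners=True)

def integrate_velocity(v, steps=6):
    """Scaling and squaring: turn a stationary velocity field into a diffeomorphic flow."""
    flow = v / (2 ** steps)
    for _ in range(steps):
        # Compose the flow with itself: phi <- phi o phi.
        flow = flow + torch.cat([warp(flow[:, :1], flow), warp(flow[:, 1:], flow)], dim=1)
    return flow
```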
arXiv Detail & Related papers (2020-04-30T03:23:45Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose each input modality into a modality-specific appearance code.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
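A minimal sketch of gated fusion over per-modality features (illustrative shapes and names; absent modalities are simply masked out so the fused feature degrades gracefully when a modality is missing):

```python
import torch.nn as nn

class GatedFusion(nn.Module):
    """Fuse per-modality feature maps with learned sigmoid gates."""
    def __init__(self, ch=16, n_mod=4):
        super().__init__()
        self.gates = nn.ModuleList(
            nn.Sequential(nn.Conv2d(ch, ch, 1), nn.Sigmoid()) for _ in range(n_mod))

    def forward(self, feats, present):
        # feats: list of (B, ch, H, W), one per modality; present: list of bools.
        # Assumes at least one modality is present.
        fused, total = 0, 0
        for f, gate, ok in zip(feats, self.gates, present):
            if ok:
                g = gate(f)
                fused = fused + g * f
                total = total + g
        return fused / total.clamp(min=1e-6)
```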
arXiv Detail & Related papers (2020-02-22T14:32:04Z)