A Unified Conditional Disentanglement Framework for Multimodal Brain MR Image Translation
- URL: http://arxiv.org/abs/2101.05434v1
- Date: Thu, 14 Jan 2021 03:14:24 GMT
- Title: A Unified Conditional Disentanglement Framework for Multimodal Brain MR Image Translation
- Authors: Xiaofeng Liu, Fangxu Xing, Georges El Fakhri, Jonghye Woo
- Abstract summary: We propose a unified conditional disentanglement framework to synthesize an arbitrary target modality from an input modality.
We validate our framework on four MRI modalities, including T1-weighted, T1 contrast enhanced, T2-weighted, and FLAIR MRI, from the BraTS'18 database.
- Score: 11.26646475512469
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Multimodal MRI provides complementary and clinically relevant information to
probe tissue condition and to characterize various diseases. However, it is
often difficult to acquire a sufficient number of modalities from the same
subject due to limitations in study plans, even though quantitative analysis
still requires them. In this work, we propose a unified conditional
disentanglement framework to synthesize an arbitrary target modality from an
input modality. Our framework hinges on a cycle-constrained conditional
adversarial training approach, which extracts a modality-invariant anatomical
feature with a modality-agnostic encoder and generates a target modality with
a conditioned decoder. We validate our framework on four MRI modalities,
including T1-weighted, T1 contrast-enhanced, T2-weighted, and FLAIR MRI, from
the BraTS'18 database, showing superior synthesis quality over the comparison
methods. In addition, we report results from experiments on a tumor
segmentation task carried out with synthesized data.
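The encoder/decoder split described in the abstract can be sketched as follows. This is a minimal illustrative implementation, not the authors' code: a shared encoder maps any input modality to an anatomical feature map, and a decoder conditioned on a one-hot modality code (broadcast spatially and concatenated) generates the requested target modality. All layer widths and names here are assumptions for illustration.

```python
# Hypothetical sketch of a modality-agnostic encoder and conditioned decoder.
import torch
import torch.nn as nn

N_MODALITIES = 4  # T1, T1ce, T2, FLAIR

class Encoder(nn.Module):
    """Shared across modalities: extracts a modality-invariant feature map."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(),
        )

    def forward(self, x):
        return self.net(x)

class ConditionedDecoder(nn.Module):
    """Generates the target modality; the one-hot modality code is
    broadcast over the spatial dims and concatenated as the condition."""
    def __init__(self, ch=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(ch + N_MODALITIES, ch, 3, padding=1), nn.ReLU(),
            nn.Conv2d(ch, 1, 3, padding=1),
        )

    def forward(self, feat, code):
        b, _, h, w = feat.shape
        cond = code.view(b, N_MODALITIES, 1, 1).expand(b, N_MODALITIES, h, w)
        return self.net(torch.cat([feat, cond], dim=1))

enc, dec = Encoder(), ConditionedDecoder()
t1 = torch.randn(2, 1, 64, 64)  # a batch of input slices (e.g. T1)
code = nn.functional.one_hot(torch.tensor([3, 3]), N_MODALITIES).float()
flair = dec(enc(t1), code)      # request modality index 3 (e.g. FLAIR)
print(flair.shape)              # torch.Size([2, 1, 64, 64])
```

In the paper's cycle-constrained training, the synthesized image would be fed back through the same encoder/decoder pair with the source-modality code to reconstruct the input, alongside a conditional adversarial loss; those losses are omitted from this sketch.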
Related papers
- A Unified Framework for Synthesizing Multisequence Brain MRI via Hybrid Fusion [4.47838172826189]
We propose a novel unified framework for synthesizing multisequence MR images, called Hybrid Fusion GAN (HF-GAN).
We introduce a hybrid fusion encoder designed to ensure the disentangled extraction of complementary and modality-specific information.
Common feature representations are transformed into a target latent space via the modality infuser to synthesize missing MR sequences.
arXiv Detail & Related papers (2024-06-21T08:06:00Z)
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
- The Brain Tumor Segmentation (BraTS) Challenge 2023: Brain MR Image Synthesis for Tumor Segmentation (BraSyn) [5.399839183476989]
We present the establishment of the Brain MR Image Synthesis Benchmark (BraSyn) in conjunction with the Medical Image Computing and Computer-Assisted Intervention (MICCAI) 2023.
The primary objective of this challenge is to evaluate image synthesis methods that can realistically generate missing MRI modalities when multiple available images are provided.
arXiv Detail & Related papers (2023-05-15T20:49:58Z)
- A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion [54.512440195060584]
We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM).
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on the BraTS19 dataset show that UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular areas in tumor-induced lesions.
arXiv Detail & Related papers (2022-07-07T16:57:21Z)
- Fast T2w/FLAIR MRI Acquisition by Optimal Sampling of Information Complementary to Pre-acquired T1w MRI [52.656075914042155]
We propose an iterative framework to optimize the under-sampling pattern for MRI acquisition of another modality.
We have demonstrated superior performance of our learned under-sampling patterns on a public dataset.
arXiv Detail & Related papers (2021-11-11T04:04:48Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE can leverage the Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to utilize the subjects/patients and sub-modalities correlations.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Deep Learning based Multi-modal Computing with Feature Disentanglement for MRI Image Synthesis [8.363448006582065]
We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
arXiv Detail & Related papers (2021-05-06T17:22:22Z)
- Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes multi-modal anatomic sequences from lesion information.
A module guides the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model can perform better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z)
- Lesion Mask-based Simultaneous Synthesis of Anatomic and Molecular MR Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)
- Multi-Modality Generative Adversarial Networks with Tumor Consistency Loss for Brain MR Image Synthesis [30.64847799586407]
We propose a multi-modality generative adversarial network (MGAN) to synthesize three high-quality MR modalities (FLAIR, T1 and T1ce) from one MR modality T2 simultaneously.
The experimental results show that the quality of the synthesized images is better than that of images synthesized by the baseline model, pix2pix.
arXiv Detail & Related papers (2020-05-02T21:33:15Z)
- Robust Multimodal Brain Tumor Segmentation via Feature Disentanglement and Gated Fusion [71.87627318863612]
We propose a novel multimodal segmentation framework which is robust to the absence of imaging modalities.
Our network uses feature disentanglement to decompose the input modalities into modality-specific appearance codes.
We validate our method on the important yet challenging multimodal brain tumor segmentation task with the BRATS challenge dataset.
arXiv Detail & Related papers (2020-02-22T14:32:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.