Deep Learning based Multi-modal Computing with Feature Disentanglement
for MRI Image Synthesis
- URL: http://arxiv.org/abs/2105.02835v1
- Date: Thu, 6 May 2021 17:22:22 GMT
- Title: Deep Learning based Multi-modal Computing with Feature Disentanglement
for MRI Image Synthesis
- Authors: Yuchen Fei, Bo Zhan, Mei Hong, Xi Wu, Jiliu Zhou, Yan Wang
- Abstract summary: We propose a deep learning based multi-modal computing model for MRI synthesis with feature disentanglement strategy.
The proposed approach decomposes each input modality into modality-invariant space with shared information and modality-specific space with specific information.
To address the lack of specific information of the target modality in the test phase, a local adaptive fusion (LAF) module is adopted to generate a modality-like pseudo-target.
- Score: 8.363448006582065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Purpose: Different magnetic resonance imaging (MRI) modalities of
the same anatomical structure are required for diagnosis because each presents
different pathological information at the physical level. However, it is often
difficult to obtain full-sequence MRI images of patients owing to limitations
such as long scan times and high cost. The purpose of this work is to develop
an algorithm that predicts target MRI sequences with high accuracy and
provides additional information for clinical diagnosis. Methods: We propose a
deep learning based
multi-modal computing model for MRI synthesis with feature disentanglement
strategy. To take full advantage of the complementary information provided by
different modalities, multi-modal MRI sequences are utilized as input. Notably,
the proposed approach decomposes each input modality into modality-invariant
space with shared information and modality-specific space with specific
information, so that features are extracted separately to effectively process
the input data. Subsequently, the two feature sets are fused through the
adaptive instance normalization (AdaIN) layer in the decoder. In addition, to address
the lack of specific information of the target modality in the test phase, a
local adaptive fusion (LAF) module is adopted to generate a modality-like
pseudo-target with specific information similar to the ground truth. Results:
To assess the synthesis performance, we evaluate our method on the BRATS2015
dataset of 164 subjects. The experimental results demonstrate our approach
significantly outperforms the benchmark method and other state-of-the-art
medical image synthesis methods in both quantitative and qualitative measures.
Compared with the pix2pixGANs method, the PSNR improves from 23.68 dB to 24.8 dB.
Conclusion: The proposed method could be effective for predicting target MRI
sequences, and useful for clinical diagnosis and treatment.
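The fusion step described above admits a compact sketch: the modality-invariant features act as the content input to AdaIN, while the modality-specific code supplies the scale and shift. The PyTorch snippet below is a minimal illustration under those assumptions (the tensor shapes, and the idea that a small network predicts the AdaIN statistics from the specific code, are not taken from the paper); it also includes the PSNR metric reported in the results.

```python
import torch

def adain(content, style_mean, style_std, eps=1e-5):
    # AdaIN: strip the content features' own instance statistics, then
    # apply a scale/shift derived from the modality-specific code.
    # content: (N, C, H, W); style_mean, style_std: (N, C, 1, 1),
    # e.g. predicted from the specific code by a small MLP (assumed here).
    mu = content.mean(dim=(2, 3), keepdim=True)
    sigma = content.std(dim=(2, 3), keepdim=True)
    return style_std * (content - mu) / (sigma + eps) + style_mean

def psnr(pred, target, max_val=1.0):
    # Peak signal-to-noise ratio in dB, the quantitative measure above.
    mse = torch.mean((pred - target) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)
```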
Related papers
- Leveraging Multimodal CycleGAN for the Generation of Anatomically Accurate Synthetic CT Scans from MRIs [1.779948689352186]
We analyse the capabilities of different configurations of deep learning models to generate synthetic CT scans from MRIs.
Several CycleGAN models were trained in an unsupervised manner to generate CT scans from different MRI modalities, with and without contrast agents.
The results show that model performance varies considerably depending on the input modalities.
arXiv Detail & Related papers (2024-07-15T16:38:59Z)
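For orientation, the unpaired training that the CycleGAN models above rely on combines an adversarial term with a cycle-consistency term. The sketch below is a hypothetical generator-side objective, not the paper's actual configuration: the networks G_mr2ct, G_ct2mr, and D_ct are placeholders, and the LSGAN/L1 loss choices are assumptions.

```python
import torch
import torch.nn.functional as F

def generator_objective(G_mr2ct, G_ct2mr, D_ct, real_mr, lam=10.0):
    # Translate MRI to a synthetic CT, then translate back; the cycle
    # term penalizes any anatomy lost in the round trip.
    fake_ct = G_mr2ct(real_mr)
    rec_mr = G_ct2mr(fake_ct)
    d_out = D_ct(fake_ct)
    adv = F.mse_loss(d_out, torch.ones_like(d_out))  # LSGAN: fool the CT critic
    cyc = F.l1_loss(rec_mr, real_mr)                 # cycle-consistency
    return adv + lam * cyc
```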
- A Unified Framework for Synthesizing Multisequence Brain MRI via Hybrid Fusion [4.47838172826189]
We propose a novel unified framework for synthesizing multisequence MR images, called Hybrid Fusion GAN (HF-GAN).
We introduce a hybrid fusion encoder designed to ensure the disentangled extraction of complementary and modality-specific information.
Common feature representations are transformed into a target latent space via the modality infuser to synthesize missing MR sequences.
arXiv Detail & Related papers (2024-06-21T08:06:00Z)
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Cross-modality Guidance-aided Multi-modal Learning with Dual Attention for MRI Brain Tumor Grading [47.50733518140625]
Brain tumors are among the most fatal cancers worldwide and are especially common in children and the elderly.
We propose a novel cross-modality guidance-aided multi-modal learning with dual attention for addressing the task of MRI brain tumor grading.
arXiv Detail & Related papers (2024-01-17T07:54:49Z)
- A Compact Implicit Neural Representation for Efficient Storage of Massive 4D Functional Magnetic Resonance Imaging [14.493622422645053]
Compressing fMRI data poses unique challenges due to its intricate temporal dynamics, low signal-to-noise ratio, and complicated underlying redundancies.
This paper reports a novel compression paradigm specifically tailored for fMRI data based on Implicit Neural Representation (INR).
arXiv Detail & Related papers (2023-11-30T05:54:37Z)
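The INR idea above can be pictured as fitting a small coordinate network to the scan: the "compressed file" is the network's weights, and decoding is a forward pass over the voxel grid. The following toy sketch assumes a plain MLP over normalized (x, y, z, t) coordinates; the paper's actual architecture is not reproduced here.

```python
import torch
import torch.nn as nn

class FMRIImplicitRep(nn.Module):
    # Toy INR: maps a normalized 4D coordinate (x, y, z, t) to a voxel
    # intensity. Fitting regresses the network onto the scan, e.g.
    # loss = nn.functional.mse_loss(model(coords), intensities).
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(4, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, coords):  # coords: (N, 4), values in [-1, 1]
        return self.net(coords)
```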
- Source-Free Collaborative Domain Adaptation via Multi-Perspective Feature Enrichment for Functional MRI Analysis [55.03872260158717]
Resting-state functional MRI (rs-fMRI) is increasingly employed in multi-site research to aid neurological disorder analysis.
Many methods have been proposed to reduce fMRI heterogeneity between source and target domains, but acquiring source data is challenging due to privacy concerns and/or data storage burdens in multi-site studies.
We design a source-free collaborative domain adaptation framework for fMRI analysis, where only a pretrained source model and unlabeled target data are accessible.
arXiv Detail & Related papers (2023-08-24T01:30:18Z)
- K-Space-Aware Cross-Modality Score for Synthesized Neuroimage Quality Assessment [71.27193056354741]
The problem of how to assess cross-modality medical image synthesis has been largely unexplored.
We propose a new metric K-CROSS to spur progress on this challenging problem.
K-CROSS uses a pre-trained multi-modality segmentation network to predict the lesion location.
arXiv Detail & Related papers (2023-07-10T01:26:48Z)
- FAST-AID Brain: Fast and Accurate Segmentation Tool using Artificial Intelligence Developed for Brain [0.8376091455761259]
A novel deep learning method is proposed for fast and accurate segmentation of the human brain into 132 regions.
The proposed model uses an efficient U-Net-like network and benefits from the intersection points of different views and hierarchical relations.
The proposed method can be applied to brain MRI data that include the skull or other artifacts, without preprocessing the images and without a drop in performance.
arXiv Detail & Related papers (2022-08-30T16:06:07Z)
- Cross-Modality Deep Feature Learning for Brain Tumor Segmentation [158.8192041981564]
This paper proposes a novel cross-modality deep feature learning framework to segment brain tumors from the multi-modality MRI data.
The core idea is to mine rich patterns across the multi-modality data to make up for the insufficient data scale.
Comprehensive experiments are conducted on the BraTS benchmarks, which show that the proposed cross-modality deep feature learning framework can effectively improve the brain tumor segmentation performance.
arXiv Detail & Related papers (2022-01-07T07:46:01Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and across sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
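The imputation idea above can be illustrated with plain GP conditioning: if per-(subject, sub-modality) latent codes are coupled by a kernel, the posterior mean of a missing code follows from the conditional Gaussian. The sketch below illustrates that mechanism only; the index embeddings E, the RBF kernel, and the function names are assumptions, not MGP-VAE's actual design.

```python
import torch

def rbf_kernel(E, lengthscale=1.0, jitter=1e-4):
    # E: (n, d) hypothetical embeddings of (subject, sub-modality) indices.
    d2 = torch.cdist(E, E).pow(2)
    K = torch.exp(-0.5 * d2 / lengthscale ** 2)
    return K + jitter * torch.eye(E.shape[0])  # jitter for numerical stability

def impute_missing(K, z_obs, obs_idx, mis_idx):
    # Conditional-Gaussian mean of the missing latents given observed ones:
    # E[z_mis | z_obs] = K_mo @ K_oo^{-1} @ z_obs
    K_oo = K[obs_idx][:, obs_idx]
    K_mo = K[mis_idx][:, obs_idx]
    return K_mo @ torch.linalg.solve(K_oo, z_obs)
```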