A Learnable Variational Model for Joint Multimodal MRI Reconstruction
and Synthesis
- URL: http://arxiv.org/abs/2204.03804v1
- Date: Fri, 8 Apr 2022 01:35:19 GMT
- Title: A Learnable Variational Model for Joint Multimodal MRI Reconstruction
and Synthesis
- Authors: Wanyu Bian, Qingchao Zhang, Xiaojing Ye, Yunmei Chen
- Abstract summary: We propose a novel deep-learning model for joint reconstruction and synthesis of multi-modal MRI.
The output of our model includes reconstructed images of the source modalities and a high-quality image synthesized in the target modality.
- Score: 4.056490719080639
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generating multi-contrast/multi-modal MRI of the same anatomy enriches diagnostic
information but is limited in practice due to excessive data acquisition time.
In this paper, we propose a novel deep-learning model for joint reconstruction
and synthesis of multi-modal MRI using incomplete k-space data of several
source modalities as inputs. The output of our model includes reconstructed
images of the source modalities and a high-quality image synthesized in the
target modality. Our proposed model is formulated as a variational problem that
leverages several learnable modality-specific feature extractors and a
multimodal synthesis module. We propose a learnable optimization algorithm to
solve this model, which induces a multi-phase network whose parameters can be
trained using multi-modal MRI data. Moreover, a bilevel-optimization framework
is employed for robust parameter training. We demonstrate the effectiveness of
our approach using extensive numerical experiments.
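To make the abstract's pipeline more concrete, below is a minimal PyTorch sketch of how such a learnable optimization algorithm could be unrolled into a multi-phase network: each phase applies a data-consistency step to every source modality using its acquired k-space samples, refines the result with a small modality-specific CNN standing in for the learnable feature extractors, and synthesizes the target modality from the refined sources. All module names, layer choices, and shapes are illustrative assumptions rather than the authors' implementation, and the bilevel parameter-training procedure is omitted.

```python
# Minimal sketch of an unrolled multi-phase network for joint multi-modal MRI
# reconstruction and synthesis. Everything here (layer sizes, module names,
# the specific update rule) is an assumption for illustration only.
import torch
import torch.nn as nn


def data_consistency(x, k_obs, mask):
    """Replace predicted k-space values with the acquired samples where mask == 1."""
    k_pred = torch.fft.fft2(x)
    k_mixed = torch.where(mask.bool(), k_obs, k_pred)
    return torch.fft.ifft2(k_mixed).real


class Phase(nn.Module):
    """One unrolled phase: per-modality data consistency plus CNN refinement,
    followed by target-modality synthesis from the refined sources."""

    def __init__(self, n_sources: int, width: int = 32):
        super().__init__()
        # One small refinement CNN per source modality (modality-specific).
        self.refiners = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(width, 1, 3, padding=1))
            for _ in range(n_sources)
        ])
        # Synthesis module: maps the stacked refined sources to the target image.
        self.synth = nn.Sequential(
            nn.Conv2d(n_sources, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, xs, k_obs_list, masks):
        xs = [data_consistency(x, k, m) + refine(x)
              for x, k, m, refine in zip(xs, k_obs_list, masks, self.refiners)]
        target = self.synth(torch.cat(xs, dim=1))
        return xs, target


class JointRecSynNet(nn.Module):
    """Unrolled multi-phase network for joint reconstruction and synthesis."""

    def __init__(self, n_sources: int = 2, n_phases: int = 5):
        super().__init__()
        self.phases = nn.ModuleList([Phase(n_sources) for _ in range(n_phases)])

    def forward(self, k_obs_list, masks):
        # Zero-filled reconstructions serve as the initial source estimates.
        xs = [torch.fft.ifft2(k).real for k in k_obs_list]
        target = None
        for phase in self.phases:
            xs, target = phase(xs, k_obs_list, masks)
        return xs, target


if __name__ == "__main__":
    B, H, W = 1, 64, 64
    masks = [(torch.rand(B, 1, H, W) > 0.5) for _ in range(2)]
    k_obs = [torch.fft.fft2(torch.randn(B, 1, H, W)) * m.float() for m in masks]
    recons, synthesized = JointRecSynNet()(k_obs, masks)
    print([r.shape for r in recons], synthesized.shape)
```

The number of phases, the refinement architecture, and the synthesis module are deliberately small here; in the paper these components are derived from the variational formulation and trained end to end.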
Related papers
- NeuroPictor: Refining fMRI-to-Image Reconstruction via Multi-individual Pretraining and Multi-level Modulation [55.51412454263856]
This paper proposes to directly modulate the generation process of diffusion models using fMRI signals.
By training with about 67,000 fMRI-image pairs from various individuals, our model enjoys superior fMRI-to-image decoding capacity.
arXiv Detail & Related papers (2024-03-27T02:42:52Z)
- Disentangled Multimodal Brain MR Image Translation via Transformer-based Modality Infuser [12.402947207350394]
We propose a transformer-based modality infuser designed to synthesize multimodal brain MR images.
In our method, we extract modality-agnostic features from the encoder and then transform them into modality-specific features.
We carried out experiments on the BraTS 2018 dataset, translating between four MR modalities.
arXiv Detail & Related papers (2024-02-01T06:34:35Z)
- Deep Unfolding Convolutional Dictionary Model for Multi-Contrast MRI Super-resolution and Reconstruction [23.779641808300596]
We propose a multi-contrast convolutional dictionary (MC-CDic) model under the guidance of the optimization algorithm.
We employ the proximal gradient algorithm to optimize the model and unroll the iterative steps into a deep CDic model.
Experimental results demonstrate the superior performance of the proposed MC-CDic model against existing SOTA methods.
arXiv Detail & Related papers (2023-09-03T13:18:59Z)
- Unified Multi-Modal Image Synthesis for Missing Modality Imputation [23.681228202899984]
We propose a novel unified multi-modal image synthesis method for missing modality imputation.
The proposed method is effective in handling various synthesis tasks and shows superior performance compared to previous methods.
arXiv Detail & Related papers (2023-04-11T16:59:15Z)
- CoLa-Diff: Conditional Latent Diffusion Model for Multi-Modal MRI Synthesis [11.803971719704721]
Most diffusion-based MRI synthesis models use a single modality.
We propose the first diffusion-based multi-modality MRI synthesis model, namely the Conditioned Latent Diffusion Model (CoLa-Diff).
Our experiments demonstrate that CoLa-Diff outperforms other state-of-the-art MRI synthesis methods.
arXiv Detail & Related papers (2023-03-24T15:46:10Z)
- A Novel Unified Conditional Score-based Generative Framework for Multi-modal Medical Image Completion [54.512440195060584]
We propose the Unified Multi-Modal Conditional Score-based Generative Model (UMM-CSGM) to take advantage of the Score-based Generative Model (SGM).
UMM-CSGM employs a novel multi-in multi-out Conditional Score Network (mm-CSN) to learn a comprehensive set of cross-modal conditional distributions.
Experiments on the BraTS19 dataset show that UMM-CSGM can more reliably synthesize the heterogeneous enhancement and irregular areas in tumor-induced lesions.
arXiv Detail & Related papers (2022-07-07T16:57:21Z)
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762]
We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
Our MANet follows a hybrid-domain learning framework, which allows it to recover the signal in both the $k$-space and image domains.
arXiv Detail & Related papers (2021-10-15T13:16:59Z)
- Modality Completion via Gaussian Process Prior Variational Autoencoders for Multi-Modal Glioma Segmentation [75.58395328700821]
We propose a novel model, Multi-modal Gaussian Process Prior Variational Autoencoder (MGP-VAE), to impute one or more missing sub-modalities for a patient scan.
MGP-VAE leverages a Gaussian Process (GP) prior on the Variational Autoencoder (VAE) to exploit correlations across subjects/patients and sub-modalities.
We show the applicability of MGP-VAE on brain tumor segmentation, where one, two, or three of the four sub-modalities may be missing.
arXiv Detail & Related papers (2021-07-07T19:06:34Z)
- Modeling Shared Responses in Neuroimaging Studies through MultiView ICA [94.31804763196116]
Group studies involving large cohorts of subjects are important to draw general conclusions about brain functional organization.
We propose a novel MultiView Independent Component Analysis model for group studies, where data from each subject are modeled as a linear combination of shared independent sources plus noise.
We demonstrate the usefulness of our approach first on fMRI data, where our model demonstrates improved sensitivity in identifying common sources among subjects.
arXiv Detail & Related papers (2020-06-11T17:29:53Z)
- Hi-Net: Hybrid-fusion Network for Multi-modal MR Image Synthesis [143.55901940771568]
We propose a novel Hybrid-fusion Network (Hi-Net) for multi-modal MR image synthesis.
In our Hi-Net, a modality-specific network is utilized to learn representations for each individual modality.
A multi-modal synthesis network is designed to densely combine the latent representation with hierarchical features from each modality (a minimal sketch of this fusion idea appears below).
arXiv Detail & Related papers (2020-02-11T08:26:42Z)
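For the Hi-Net entry above, a minimal sketch of a hybrid-fusion design of this kind (modality-specific encoders followed by a fusion/synthesis head) could look as follows; layer sizes and module names are illustrative assumptions, not the Hi-Net implementation.

```python
# Sketch of a hybrid-fusion synthesis network in the spirit of Hi-Net:
# one small encoder per input modality plus a fusion head that combines
# their features to synthesize the missing/target modality. Illustrative only.
import torch
import torch.nn as nn


class HybridFusionSynth(nn.Module):
    def __init__(self, n_modalities: int = 2, width: int = 32):
        super().__init__()
        # Modality-specific encoders learn a representation per input modality.
        self.encoders = nn.ModuleList([
            nn.Sequential(nn.Conv2d(1, width, 3, padding=1), nn.ReLU(),
                          nn.Conv2d(width, width, 3, padding=1), nn.ReLU())
            for _ in range(n_modalities)
        ])
        # Fusion head combines the per-modality features and decodes the target.
        self.fusion = nn.Sequential(
            nn.Conv2d(width * n_modalities, width, 3, padding=1), nn.ReLU(),
            nn.Conv2d(width, 1, 3, padding=1),
        )

    def forward(self, modalities):
        feats = [enc(x) for enc, x in zip(self.encoders, modalities)]
        return self.fusion(torch.cat(feats, dim=1))


if __name__ == "__main__":
    t1, t2 = torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64)
    synthesized = HybridFusionSynth()([t1, t2])
    print(synthesized.shape)  # torch.Size([1, 1, 64, 64])
```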