High-fidelity Direct Contrast Synthesis from Magnetic Resonance
Fingerprinting
- URL: http://arxiv.org/abs/2212.10817v1
- Date: Wed, 21 Dec 2022 07:11:39 GMT
- Authors: Ke Wang, Mariya Doneva, Jakob Meineke, Thomas Amthor, Ekin Karasan,
Fei Tan, Jonathan I. Tamir, Stella X. Yu, and Michael Lustig
- Abstract summary: We propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation.
In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually and by quantitative metrics.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Magnetic Resonance Fingerprinting (MRF) is an efficient quantitative MRI
technique that can extract important tissue and system parameters such as T1,
T2, B0, and B1 from a single scan. This property also makes it attractive for
retrospectively synthesizing contrast-weighted images. In general,
contrast-weighted images like T1-weighted, T2-weighted, etc., can be
synthesized directly from parameter maps through spin-dynamics simulation
(i.e., Bloch or Extended Phase Graph models). However, these approaches often
exhibit artifacts due to imperfections in the mapping, the sequence modeling,
and the data acquisition. Here we propose a supervised learning-based method
that directly synthesizes contrast-weighted images from the MRF data without
going through the quantitative mapping and spin-dynamics simulation. To
implement our direct contrast synthesis (DCS) method, we deploy a conditional
Generative Adversarial Network (GAN) framework and propose a multi-branch U-Net
as the generator. The input MRF data are used to directly synthesize
T1-weighted, T2-weighted, and fluid-attenuated inversion recovery (FLAIR)
images through supervised training on paired MRF and target spin echo-based
contrast-weighted scans. In-vivo experiments demonstrate excellent image
quality compared to simulation-based contrast synthesis and previous DCS
methods, both visually and by quantitative metrics. We also demonstrate
cases where our trained model is able to mitigate in-flow and spiral
off-resonance artifacts that are typically seen in MRF reconstructions and thus
more faithfully represent conventional spin echo-based contrast-weighted
images.
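
As context for the simulation-based baseline criticized above, contrast synthesis from quantitative maps typically evaluates a closed-form signal model per voxel. Below is a minimal NumPy sketch of the standard spin-echo signal equation; the map shapes, units, and sequence parameters are illustrative assumptions, not values from the paper.

```python
import numpy as np

def spin_echo_contrast(pd, t1, t2, tr, te):
    """Standard spin-echo signal model, evaluated per voxel:
        S = PD * (1 - exp(-TR / T1)) * exp(-TE / T2)
    t1, t2, tr, te in milliseconds; pd is (relative) proton density.
    A FLAIR contrast would additionally need an inversion-recovery
    term in TI and T1, omitted here for brevity.
    """
    return pd * (1.0 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative maps and sequence settings (hypothetical values, not from the paper)
shape = (128, 128)
rng = np.random.default_rng(0)
pd = rng.random(shape)
t1 = rng.uniform(300.0, 2000.0, shape)  # ms
t2 = rng.uniform(30.0, 200.0, shape)    # ms

t1w = spin_echo_contrast(pd, t1, t2, tr=500.0, te=15.0)    # short TR/TE -> T1-weighted
t2w = spin_echo_contrast(pd, t1, t2, tr=4000.0, te=100.0)  # long TR/TE  -> T2-weighted
```

Any bias in the T1/T2 maps, or mismatch between this idealized model and the actual acquisition (B0/B1 imperfections, in-flow, spiral off-resonance), propagates into the synthesized contrast; that failure mode is what the paper's direct approach is designed to avoid.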
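The proposed DCS method replaces the simulation step with a learned mapping. The paper describes the generator only as a multi-branch U-Net inside a conditional GAN, so the following PyTorch sketch is one plausible reading, not the authors' implementation: the channel counts, the shared-encoder/per-contrast-decoder layout, and the number of MRF input channels are all assumptions.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.InstanceNorm2d(c_out), nn.ReLU(inplace=True),
    )

class MultiBranchUNet(nn.Module):
    """Hypothetical DCS generator: a shared U-Net encoder over the MRF
    input channels, with one decoder branch per target contrast."""

    def __init__(self, mrf_channels=48, base=32, contrasts=("t1w", "t2w", "flair")):
        super().__init__()
        self.pool = nn.MaxPool2d(2)
        self.enc1 = conv_block(mrf_channels, base)
        self.enc2 = conv_block(base, base * 2)
        self.bottleneck = conv_block(base * 2, base * 4)
        self.branches = nn.ModuleDict({
            name: nn.ModuleDict({
                "up2": nn.ConvTranspose2d(base * 4, base * 2, 2, stride=2),
                "dec2": conv_block(base * 4, base * 2),
                "up1": nn.ConvTranspose2d(base * 2, base, 2, stride=2),
                "dec1": conv_block(base * 2, base),
                "out": nn.Conv2d(base, 1, 1),
            })
            for name in contrasts
        })

    def forward(self, mrf):
        e1 = self.enc1(mrf)                     # full resolution
        e2 = self.enc2(self.pool(e1))           # 1/2 resolution
        b = self.bottleneck(self.pool(e2))      # 1/4 resolution
        out = {}
        for name, br in self.branches.items():  # one decoder per contrast
            d2 = br["dec2"](torch.cat([br["up2"](b), e2], dim=1))
            d1 = br["dec1"](torch.cat([br["up1"](d2), e1], dim=1))
            out[name] = br["out"](d1)
        return out

# Smoke test: 48 MRF input channels (e.g., retained subspace coefficients) is an assumption.
g = MultiBranchUNet()
fake = g(torch.randn(1, 48, 128, 128))
print({k: tuple(v.shape) for k, v in fake.items()})
```

In a pix2pix-style conditional GAN, each branch output would be scored by a discriminator against the paired spin echo-based target and regularized by a pixel-wise (e.g., L1) loss; the abstract does not specify the discriminator design or loss weighting, so those remain open here.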
Related papers
- Gadolinium dose reduction for brain MRI using conditional deep learning (arXiv, 2024-03-06)
Two main challenges for these approaches are the accurate prediction of contrast enhancement and the synthesis of realistic images.
We address both challenges by utilizing the contrast signal encoded in the subtraction images of pre-contrast and post-contrast image pairs.
We demonstrate the effectiveness of our approach on synthetic and real datasets using various scanners, field strengths, and contrast agents.
arXiv Detail & Related papers (2024-03-06T08:35:29Z) - Simulation of acquisition shifts in T2 Flair MR images to stress test AI
segmentation networks [0.0]
The approach simulates "acquisition shift derivatives" of MR images based on MR signal equations.
Experiments comprise validation of the simulated images against real MR scans and example stress tests on state-of-the-art MS lesion segmentation networks.
arXiv Detail & Related papers (2023-11-03T13:10:55Z) - Optimizing Sampling Patterns for Compressed Sensing MRI with Diffusion
Generative Models [75.52575380824051]
We present a learning method to optimize sub-sampling patterns for compressed sensing multi-coil MRI.
We use a single-step reconstruction based on the posterior mean estimate given by the diffusion model and the MRI measurement process.
Our method requires as few as five training images to learn effective sampling patterns.
arXiv Detail & Related papers (2023-06-05T22:09:06Z) - An Attentive-based Generative Model for Medical Image Synthesis [18.94900480135376]
We propose an attention-based dual contrast generative model, called ADC-cycleGAN, which can synthesize medical images from unpaired data with multiple slices.
The model integrates a dual contrast loss term with the CycleGAN loss to ensure that the synthesized images are distinguishable from the source domain.
Experimental results demonstrate that the proposed ADC-cycleGAN model produces comparable samples to other state-of-the-art generative models.
arXiv Detail & Related papers (2023-06-02T14:17:37Z) - Generalizable synthetic MRI with physics-informed convolutional networks [57.628770497971246]
We develop a physics-informed deep learning-based method to synthesize multiple brain magnetic resonance imaging (MRI) contrasts from a single five-minute acquisition.
We investigate its ability to generalize to arbitrary contrasts to accelerate neuroimaging protocols.
arXiv Detail & Related papers (2023-05-21T21:16:20Z) - DC-cycleGAN: Bidirectional CT-to-MR Synthesis from Unpaired Data [22.751911825379626]
We propose a bidirectional learning model, denoted as dual contrast cycleGAN (DC-cycleGAN), to synthesize medical images from unpaired data.
The experimental results indicate that DC-cycleGAN is able to produce promising results as compared with other cycleGAN-based medical image synthesis methods.
arXiv Detail & Related papers (2022-11-02T17:16:28Z) - Harmonizing Pathological and Normal Pixels for Pseudo-healthy Synthesis [68.5287824124996]
We present a new type of discriminator, the segmentor, to accurately locate the lesions and improve the visual quality of pseudo-healthy images.
We apply the generated images into medical image enhancement and utilize the enhanced results to cope with the low contrast problem.
Comprehensive experiments on the T2 modality of BraTS demonstrate that the proposed method substantially outperforms the state-of-the-art methods.
arXiv Detail & Related papers (2022-03-29T08:41:17Z) - MR-Contrast-Aware Image-to-Image Translations with Generative
Adversarial Networks [5.3580471186206005]
We train an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.
Our approach yields a peak signal-to-noise ratio of 24.48 and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model (see the metric sketch after this list).
arXiv Detail & Related papers (2021-04-03T17:05:13Z) - Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware
Generative Adversarial Networks [5.3580471186206005]
We trained a generative adversarial network (GAN) to generate synthetic MR knee images conditioned on various acquisition parameters.
In a Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic and real MR images is comparable.
arXiv Detail & Related papers (2021-02-17T11:39:36Z) - Confidence-guided Lesion Mask-based Simultaneous Synthesis of Anatomic
and Molecular MR Images in Patients with Post-treatment Malignant Gliomas [65.64363834322333]
Confidence Guided SAMR (CG-SAMR) synthesizes data from lesion information to multi-modal anatomic sequences.
A guidance module steers the synthesis based on a confidence measure of the intermediate results.
Experiments on real clinical data demonstrate that the proposed model performs better than state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-08-06T20:20:22Z) - Lesion Mask-based Simultaneous Synthesis of Anatomic and MolecularMR
Images using a GAN [59.60954255038335]
The proposed framework consists of a stretch-out up-sampling module, a brain atlas encoder, a segmentation consistency module, and multi-scale label-wise discriminators.
Experiments on real clinical data demonstrate that the proposed model can perform significantly better than the state-of-the-art synthesis methods.
arXiv Detail & Related papers (2020-06-26T02:50:09Z)