MR-Contrast-Aware Image-to-Image Translations with Generative
Adversarial Networks
- URL: http://arxiv.org/abs/2104.01449v1
- Date: Sat, 3 Apr 2021 17:05:13 GMT
- Title: MR-Contrast-Aware Image-to-Image Translations with Generative
Adversarial Networks
- Authors: Jonas Denck, Jens Guehring, Andreas Maier, Eva Rothgang
- Abstract summary: We train an image-to-image generative adversarial network conditioned on the MR acquisition parameters repetition time and echo time.
Our approach yields a peak signal-to-noise ratio of 24.48 dB and a structural similarity of 0.66, significantly surpassing the pix2pix benchmark model.
- Score: 5.3580471186206005
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Purpose
A Magnetic Resonance Imaging (MRI) exam typically consists of several
sequences that yield different image contrasts. Each sequence is parameterized
through multiple acquisition parameters that influence image contrast,
signal-to-noise ratio, acquisition time, and/or resolution. Depending on the
clinical indication, different contrasts are required by the radiologist to
make a diagnosis. As MR sequence acquisition is time-consuming and acquired
images may be corrupted due to motion, a method to synthesize MR images with
adjustable contrast properties is required.
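As context for how these acquisition parameters shape contrast, the textbook spin-echo signal model (standard MR physics, not taken from this paper) relates the measured signal to repetition time (TR) and echo time (TE); the tissue values below are illustrative assumptions, not data from the paper:

```python
import math

def spin_echo_signal(pd, t1, t2, tr, te):
    """Standard spin-echo signal model: S = PD * (1 - exp(-TR/T1)) * exp(-TE/T2).
    pd: proton density; t1, t2: tissue relaxation times (ms); tr, te: acquisition parameters (ms)."""
    return pd * (1.0 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Illustrative tissue values (roughly fat vs. fluid); short TR/TE gives T1 weighting,
# under which fat appears brighter than fluid.
fat = spin_echo_signal(pd=1.0, t1=260.0, t2=80.0, tr=500.0, te=15.0)
fluid = spin_echo_signal(pd=1.0, t1=4000.0, t2=2000.0, tr=500.0, te=15.0)
print(fat > fluid)  # True: fat is brighter under T1 weighting
```

Changing TR and TE in this model changes the relative brightness of tissues, which is exactly the degree of freedom the paper's network is conditioned on.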
Methods
Therefore, we trained an image-to-image generative adversarial network
conditioned on the MR acquisition parameters repetition time (TR) and echo time
(TE). Our approach is motivated by style transfer networks; in our case,
however, the "style" of an image is given explicitly, as it is determined by
the MR acquisition parameters on which the network is conditioned.
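One common way to inject such scalar conditions into a generator is FiLM-style per-channel modulation; the sketch below illustrates that general idea only — the paper's actual conditioning mechanism, and all shapes, weights, and normalization choices here, are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def film_condition(features, tr, te, w, b):
    """FiLM-style conditioning (illustrative, not necessarily the paper's mechanism):
    map (TR, TE) to a per-channel scale and shift, then modulate the feature maps."""
    params = np.array([tr, te]) / 1000.0      # crude normalization (ms -> s), an assumption
    gamma_beta = params @ w + b               # (2,) @ (2, 2C) -> (2C,)
    c = features.shape[0]
    gamma, beta = gamma_beta[:c], gamma_beta[c:]
    return features * gamma[:, None, None] + beta[:, None, None]

# Hypothetical shapes: 8 feature channels on a 4x4 spatial grid.
feats = rng.standard_normal((8, 4, 4))
w = rng.standard_normal((2, 16))
b = np.zeros(16)
out = film_condition(feats, tr=2500.0, te=30.0, w=w, b=b)
print(out.shape)  # (8, 4, 4)
```

The key property is that the same feature maps are transformed differently for different (TR, TE) pairs, which is what lets a single generator produce a family of contrasts.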
Results
This enables us to synthesize MR images with adjustable image contrast. We
evaluated our approach on the fastMRI dataset, a large set of publicly
available MR knee images, and showed that our method outperforms a benchmark
pix2pix approach in translating non-fat-saturated MR images into fat-saturated
images. Our approach yields a peak signal-to-noise ratio of 24.48 dB and a
structural similarity of 0.66, significantly surpassing the pix2pix benchmark
model.
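For reference, the PSNR metric reported above is a short computation; below is a minimal sketch on synthetic stand-in images (the data is made up for illustration, and SSIM is more involved, typically taken from a library such as scikit-image):

```python
import numpy as np

def psnr(ref, test, data_range=1.0):
    """Peak signal-to-noise ratio in dB (higher is better)."""
    mse = np.mean((ref - test) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)

rng = np.random.default_rng(0)
ref = rng.random((64, 64))                                      # stand-in "ground-truth" image
noisy = np.clip(ref + rng.normal(0.0, 0.05, ref.shape), 0, 1)   # stand-in "synthesized" image
print(round(psnr(ref, noisy), 2), "dB")
```

A higher PSNR means the synthesized image deviates less, pixel-wise, from the reference; SSIM complements it by measuring structural agreement.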
Conclusion
Our model is the first to enable fine-tuned contrast synthesis, which can be
used to synthesize missing MR contrasts or as a data augmentation technique
for AI training in MRI.
Related papers
- A Unified Model for Compressed Sensing MRI Across Undersampling Patterns [69.19631302047569]
Deep neural networks have shown great potential for reconstructing high-fidelity images from undersampled measurements.
Our model is based on neural operators, a discretization-agnostic architecture.
Our model's inference is also 1,400x faster than that of diffusion methods.
arXiv Detail & Related papers (2024-10-05T20:03:57Z) - Joint Edge Optimization Deep Unfolding Network for Accelerated MRI Reconstruction [3.9681863841849623]
We build a joint edge optimization model that not only incorporates individual regularizers specific to both the MR image and the edges, but also enforces a co-regularizer to effectively establish a stronger correlation between them.
Specifically, the edge information is defined through a non-edge probability map to guide the image reconstruction during the optimization process.
Meanwhile, the regularizers pertaining to images and edges are incorporated into a deep unfolding network to automatically learn their respective inherent a priori information.
arXiv Detail & Related papers (2024-05-09T05:51:33Z) - High-fidelity Direct Contrast Synthesis from Magnetic Resonance
Fingerprinting [28.702553164811473]
We propose a supervised learning-based method that directly synthesizes contrast-weighted images from the MRF data without going through the quantitative mapping and spin-dynamics simulation.
In-vivo experiments demonstrate excellent image quality compared to simulation-based contrast synthesis and previous DCS methods, both visually and by quantitative metrics.
arXiv Detail & Related papers (2022-12-21T07:11:39Z) - JoJoNet: Joint-contrast and Joint-sampling-and-reconstruction Network
for Multi-contrast MRI [49.29851365978476]
The proposed framework consists of a sampling mask generator for each image contrast and a reconstructor exploiting the inter-contrast correlations with a recurrent structure.
The acceleration ratio of each image contrast is also learnable and can be driven by a downstream task performance.
arXiv Detail & Related papers (2022-10-22T20:46:56Z) - Model-Guided Multi-Contrast Deep Unfolding Network for MRI
Super-resolution Reconstruction [68.80715727288514]
In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
We show how to unfold the iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
arXiv Detail & Related papers (2022-09-15T03:58:30Z) - Deep Learning-Based MR Image Re-parameterization [0.0]
We propose a novel deep learning (DL)-based convolutional model for MRI re-parameterization.
Based on our preliminary results, DL-based techniques hold the potential to learn the non-linearities that govern the re-parameterization.
arXiv Detail & Related papers (2022-06-11T12:39:37Z) - A Long Short-term Memory Based Recurrent Neural Network for
Interventional MRI Reconstruction [50.1787181309337]
We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general purpose MR-guided intervention.
arXiv Detail & Related papers (2022-03-28T14:03:45Z) - Multi-Modal MRI Reconstruction with Spatial Alignment Network [51.74078260367654]
In clinical practice, magnetic resonance imaging (MRI) with multiple contrasts is usually acquired in a single study.
Recent research demonstrates that, considering the redundancy between different contrasts or modalities, a target MRI modality under-sampled in k-space can be better reconstructed with the help of a fully-sampled sequence.
In this paper, we integrate the spatial alignment network with reconstruction, to improve the quality of the reconstructed target modality.
arXiv Detail & Related papers (2021-08-12T08:46:35Z) - Adaptive Gradient Balancing for Undersampled MRI Reconstruction and
Image-to-Image Translation [60.663499381212425]
We enhance the image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
In MRI, our method minimizes artifacts, while maintaining a high-quality reconstruction that produces sharper images than other techniques.
arXiv Detail & Related papers (2021-04-05T13:05:22Z) - Enhanced Magnetic Resonance Image Synthesis with Contrast-Aware
Generative Adversarial Networks [5.3580471186206005]
We trained a generative adversarial network (GAN) to generate synthetic MR knee images conditioned on various acquisition parameters.
In a Turing test, two experts mislabeled 40.5% of real and synthetic MR images, demonstrating that the image quality of the generated synthetic and real MR images is comparable.
arXiv Detail & Related papers (2021-02-17T11:39:36Z) - Dual-cycle Constrained Bijective VAE-GAN For Tagged-to-Cine Magnetic
Resonance Image Synthesis [11.697141493937021]
We propose a novel VAE-GAN approach to carry out tagged-to-cine MR image synthesis.
Our framework has been trained, validated, and tested using 1,768, 416, and 1,560 subject-independent paired slices of tagged and cine MRI.
arXiv Detail & Related papers (2021-01-14T03:27:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.