Edge-Enhanced Dual Discriminator Generative Adversarial Network for Fast
MRI with Parallel Imaging Using Multi-view Information
- URL: http://arxiv.org/abs/2112.05758v1
- Date: Fri, 10 Dec 2021 10:49:26 GMT
- Title: Edge-Enhanced Dual Discriminator Generative Adversarial Network for Fast
MRI with Parallel Imaging Using Multi-view Information
- Authors: Jiahao Huang, Weiping Ding, Jun Lv, Jingwen Yang, Hao Dong, Javier Del
Ser, Jun Xia, Tiaojuan Ren, Stephen Wong, Guang Yang
- Abstract summary: We introduce a novel parallel imaging coupled dual discriminator generative adversarial network (PIDD-GAN) for fast multi-channel MRI reconstruction.
One discriminator is used for holistic image reconstruction, whereas the other one is responsible for enhancing edge information.
Results show that our PIDD-GAN provides high-quality reconstructed MR images, with well-preserved edge information.
- Score: 10.616409735438756
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In clinical medicine, magnetic resonance imaging (MRI) is one of the most
important tools for diagnosis, triage, prognosis, and treatment planning.
However, MRI suffers from an inherently slow data acquisition process because
data are collected sequentially in k-space. In recent years, most MRI
reconstruction methods proposed in the literature have focused on holistic image
reconstruction rather than on enhancing edge information. This work departs from
that general trend by focusing on the enhancement of edge information.
Specifically, we introduce a novel parallel imaging coupled dual discriminator
generative adversarial network (PIDD-GAN) for fast multi-channel MRI
reconstruction by incorporating multi-view information. The dual discriminator
design aims to improve the edge information in MRI reconstruction. One
discriminator is used for holistic image reconstruction, whereas the other one
is responsible for enhancing edge information. An improved U-Net with local and
global residual learning is proposed for the generator. Frequency channel
attention blocks (FCA Blocks) are embedded in the generator for incorporating
attention mechanisms. Content loss is introduced to train the generator for
better reconstruction quality. We performed comprehensive experiments on the
Calgary-Campinas public brain MR dataset and compared our method with
state-of-the-art MRI reconstruction methods. Ablation studies of residual
learning were conducted on the MICCAI13 dataset to validate the proposed
modules. Results show that our PIDD-GAN provides high-quality reconstructed MR
images with well-preserved edge information. Single-image reconstruction takes
less than 5 ms, meeting the demand for fast processing.
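To make the dual-discriminator objective concrete, here is a minimal sketch of one generator update, assuming PyTorch and a fixed Sobel operator as the edge extractor. The stand-in modules (G, D_img, D_edge), the hinge-style adversarial terms, the L1 content loss, and the weights w_adv/w_content are illustrative assumptions, not the authors' released PIDD-GAN implementation.

```python
# Illustrative sketch only: stand-in modules and loss weights, not the PIDD-GAN code.
import torch
import torch.nn.functional as F

def sobel_edges(x: torch.Tensor) -> torch.Tensor:
    """Gradient-magnitude edge map for a batch of single-channel images (N, 1, H, W)."""
    kx = torch.tensor([[-1., 0., 1.], [-2., 0., 2.], [-1., 0., 1.]],
                      device=x.device).view(1, 1, 3, 3)
    ky = kx.transpose(2, 3)
    gx = F.conv2d(x, kx, padding=1)
    gy = F.conv2d(x, ky, padding=1)
    return torch.sqrt(gx ** 2 + gy ** 2 + 1e-8)

def generator_loss(G, D_img, D_edge, zero_filled, fully_sampled,
                   w_adv=0.01, w_content=1.0):
    """One generator objective: adversarial feedback from both discriminators
    plus a pixel-wise content (fidelity) loss against the fully sampled reference."""
    recon = G(zero_filled)                                # reconstructed image
    adv_img = -D_img(recon).mean()                        # holistic-image discriminator
    adv_edge = -D_edge(sobel_edges(recon)).mean()         # edge discriminator
    content = F.l1_loss(recon, fully_sampled)             # content loss
    return w_adv * (adv_img + adv_edge) + w_content * content
```

The corresponding discriminator updates would score real versus reconstructed images and their edge maps separately, which is the mechanism the abstract credits for better edge preservation.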
Related papers
- Volumetric Reconstruction Resolves Off-Resonance Artifacts in Static and Dynamic PROPELLER MRI [76.60362295758596] (2023-11-22)
  Off-resonance artifacts in magnetic resonance imaging (MRI) are visual distortions that occur when the actual resonant frequencies of spins within the imaging volume differ from the expected frequencies used to encode spatial information.
  We propose to resolve these artifacts by lifting the 2D MRI reconstruction problem to 3D, introducing an additional "spectral" dimension to model this off-resonance.
- CMRxRecon: An open cardiac MRI dataset for the competition of accelerated image reconstruction [62.61209705638161] (2023-09-19)
  There has been growing interest in deep learning-based CMR imaging algorithms.
  Deep learning methods require large training datasets.
  This dataset includes multi-contrast, multi-view, multi-slice and multi-coil CMR imaging data from 300 subjects.
- Attention Hybrid Variational Net for Accelerated MRI Reconstruction [7.046523233290946] (2023-06-21)
  The application of compressed sensing (CS)-enabled data reconstruction for accelerating magnetic resonance imaging (MRI) remains a challenging problem.
  This is because the information lost in k-space under the acceleration mask makes it difficult to reconstruct an image comparable in quality to a fully sampled image (this shared undersampling setting is sketched after this list).
  We propose a deep learning-based attention hybrid variational network that performs learning in both the k-space and image domains.
- Iterative Data Refinement for Self-Supervised MR Image Reconstruction [18.02961646651716] (2022-11-24)
  We propose a data refinement framework for self-supervised MR image reconstruction.
  We first analyze the reason for the performance gap between self-supervised and supervised methods.
  Then, we design an effective self-supervised training data refinement method to reduce this data bias.
- Model-Guided Multi-Contrast Deep Unfolding Network for MRI Super-resolution Reconstruction [68.80715727288514] (2022-09-15)
  In this paper, we propose a novel Model-Guided interpretable Deep Unfolding Network (MGDUN) for medical image SR reconstruction.
  We show how to unfold an iterative MGDUN algorithm into a novel model-guided deep unfolding network by taking the MRI observation matrix into account.
- A Long Short-term Memory Based Recurrent Neural Network for Interventional MRI Reconstruction [50.1787181309337] (2022-03-28)
  We propose a convolutional long short-term memory (Conv-LSTM) based recurrent neural network (RNN), or ConvLR, to reconstruct interventional images with golden-angle radial sampling.
  The proposed algorithm has the potential to achieve real-time i-MRI for DBS and can be used for general-purpose MR-guided intervention.
- Multi-modal Aggregation Network for Fast MR Imaging [85.25000133194762] (2021-10-15)
  We propose a novel Multi-modal Aggregation Network, named MANet, which is capable of discovering complementary representations from a fully sampled auxiliary modality.
  In our MANet, the representations from the fully sampled auxiliary and undersampled target modalities are learned independently through a specific network.
  Our MANet follows a hybrid domain learning framework, which allows it to recover the frequency signal in the $k$-space domain.
- Generative Adversarial Networks (GAN) Powered Fast Magnetic Resonance Imaging -- Mini Review, Comparison and Perspectives [5.3148259096171175] (2021-05-04)
  One drawback of MRI is its comparatively slow scanning and reconstruction relative to other imaging modalities.
  Deep Neural Networks (DNNs) have been used in sparse MRI reconstruction models to recreate relatively high-quality images.
  Generative Adversarial Network (GAN) based methods have been proposed to solve fast MRI with enhanced image perceptual quality.
- Adaptive Gradient Balancing for Undersampled MRI Reconstruction and Image-to-Image Translation [60.663499381212425] (2021-04-05)
  We enhance image quality by using a Wasserstein Generative Adversarial Network combined with a novel Adaptive Gradient Balancing technique.
  In MRI, our method minimizes artifacts while maintaining a high-quality reconstruction that produces sharper images than other techniques.
- DuDoRNet: Learning a Dual-Domain Recurrent Network for Fast MRI Reconstruction with Deep T1 Prior [19.720518236653195] (2020-01-11)
  We propose a Dual Domain Recurrent Network (DuDoRNet) with deep T1 embedded to simultaneously recover k-space and images.
  Our method consistently outperforms state-of-the-art methods and can reconstruct high-quality MRI.
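Most of the entries above, like the main paper, start from the same setting: k-space is retrospectively undersampled with an acceleration mask, and the aliased zero-filled reconstruction is what a network must restore. Below is a minimal NumPy sketch of that setup; the column-mask scheme and the parameters (acceleration, center_fraction) are illustrative assumptions, not taken from any of the listed papers.

```python
# Illustrative sketch of retrospective Cartesian undersampling and the zero-filled
# reconstruction; parameters and masking scheme are placeholders, not from any listed paper.
import numpy as np

def undersample_zero_filled(image: np.ndarray, acceleration: int = 4,
                            center_fraction: float = 0.08, seed: int = 0):
    """Mask k-space columns of a 2D image and return (zero-filled image, mask)."""
    rng = np.random.default_rng(seed)
    h, w = image.shape
    kspace = np.fft.fftshift(np.fft.fft2(image))           # centred k-space

    mask = rng.random(w) < (1.0 / acceleration)             # random phase-encode lines
    num_center = int(round(center_fraction * w))            # always keep low frequencies
    center = slice(w // 2 - num_center // 2, w // 2 + num_center // 2)
    mask[center] = True

    kspace_us = kspace * mask[np.newaxis, :]                # apply column mask
    zero_filled = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace_us)))
    return zero_filled, mask
```

Learning-based methods such as those listed then map the zero-filled input (plus the mask or multi-coil data) back toward the fully sampled image.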
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.