Scalable Non-Cartesian Magnetic Resonance Imaging with R2D2
- URL: http://arxiv.org/abs/2403.17905v3
- Date: Tue, 28 May 2024 10:02:12 GMT
- Title: Scalable Non-Cartesian Magnetic Resonance Imaging with R2D2
- Authors: Yiwei Chen, Chao Tang, Amir Aghabiglou, Chung San Chu, Yves Wiaux
- Abstract summary: We propose a new approach for non-Cartesian magnetic resonance image reconstruction. We leverage the "Residual-to-Residual DNN series for high-Dynamic range imaging (R2D2)" approach.
- Score: 6.728969294264806
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We propose a new approach for non-Cartesian magnetic resonance image reconstruction. While unrolled architectures provide robustness via data-consistency layers, embedding measurement operators in a Deep Neural Network (DNN) can become impractical at large scale. Alternative Plug-and-Play (PnP) approaches, where the denoising DNNs are blind to the measurement setting, are not affected by this limitation and have also proven effective, but their highly iterative nature also affects scalability. To address this scalability challenge, we leverage the "Residual-to-Residual DNN series for high-Dynamic range imaging (R2D2)" approach recently introduced in astronomical imaging. R2D2's reconstruction is formed as a series of residual images, iteratively estimated as outputs of DNNs taking the previous iteration's image estimate and associated data residual as inputs. The method can be interpreted as a learned version of the Matching Pursuit algorithm. We demonstrate R2D2 in simulation, considering radial k-space sampling acquisition sequences. Our preliminary results suggest that R2D2 achieves: (i) suboptimal performance compared to its unrolled incarnation R2D2-Net, which is however non-scalable due to the necessary embedding of NUFFT-based data-consistency layers; (ii) superior reconstruction quality to a scalable version of R2D2-Net embedding an FFT-based approximation for data consistency; (iii) superior reconstruction quality to PnP, while only requiring a few iterations.
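To make the iterative structure described in the abstract concrete, the following is a minimal sketch of the R2D2 reconstruction loop, not the authors' released implementation. The forward/adjoint NUFFT operators and the per-iteration networks are hypothetical placeholders.

```python
# Minimal sketch of the R2D2 iteration as described in the abstract.
# `nufft`, `nufft_adjoint`, and `dnns` are hypothetical placeholders
# standing in for the measurement operator and the trained network series.
import numpy as np

def r2d2_reconstruct(y, nufft, nufft_adjoint, dnns):
    """Reconstruct an image as a series of learned residual updates.

    y             : measured non-Cartesian k-space data
    nufft         : callable, image -> k-space (forward measurement operator)
    nufft_adjoint : callable, k-space -> image (adjoint / back-projection)
    dnns          : list of trained networks; dnn(x, r) -> residual image
    """
    x = np.zeros_like(nufft_adjoint(y))   # initial image estimate x_0 = 0
    for dnn in dnns:
        r = nufft_adjoint(y - nufft(x))   # back-projected data residual
        x = x + dnn(x, r)                 # learned residual update to the image
    return x
```

Because each term of the series applies the measurement operator only once and the series comprises only a few terms, the cost stays well below that of highly iterative PnP schemes, which is the scalability argument made in the abstract.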
Related papers
- R2D2 image reconstruction with model uncertainty quantification in radio astronomy [1.7249361224827533]
The "Residual-to-Residual" (R2D2) approach was recently introduced for Radio-Interferometric (RI) imaging in astronomy.
R2D2's reconstruction is formed as a series of residual images, iteratively estimated as outputs of Deep Neural Networks (DNNs).
We study the robustness of the R2D2 image estimation process, by studying the uncertainty associated with its series of learned models.
arXiv Detail & Related papers (2024-03-26T19:10:08Z) - The R2D2 deep neural network series paradigm for fast precision imaging in radio astronomy [1.7249361224827533]
Recent image reconstruction techniques offer remarkable imaging precision, well beyond CLEAN's capability.
We introduce a novel deep learning approach, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging"
R2D2's capability to deliver high precision is demonstrated in simulation, across a variety of observation settings using the Very Large Array (VLA).
arXiv Detail & Related papers (2024-03-08T16:57:54Z) - Rotational Augmented Noise2Inverse for Low-dose Computed Tomography
Reconstruction [83.73429628413773]
Supervised deep learning methods have shown the ability to remove noise in images but require accurate ground truth.
We propose a novel self-supervised framework for low-dose CT (LDCT), in which ground truth is not required for training the convolutional neural network (CNN).
Numerical and experimental results show that the reconstruction accuracy of N2I degrades with sparse views, while the proposed rotational augmented Noise2Inverse (RAN2I) method maintains better image quality across different ranges of sampling angles.
arXiv Detail & Related papers (2023-12-19T22:40:51Z) - StableDreamer: Taming Noisy Score Distillation Sampling for Text-to-3D [88.66678730537777]
We present StableDreamer, a methodology incorporating three advances.
First, we formalize the equivalence of the SDS generative prior and a simple supervised L2 reconstruction loss.
Second, our analysis shows that while image-space diffusion contributes to geometric precision, latent-space diffusion is crucial for vivid color rendition.
arXiv Detail & Related papers (2023-12-02T02:27:58Z) - CLEANing Cygnus A deep and fast with R2D2 [1.7249361224827533]
A novel deep learning paradigm for synthesis imaging by radio interferometry in astronomy was recently proposed, dubbed "Residual-to-Residual DNN series for high-Dynamic range imaging" (R2D2).
We show that R2D2's learning approach enables delivering high-precision imaging, superseding the resolution of CLEAN, and matching the precision of modern optimization and plug-and-play algorithms, respectively uSARA and AIRI.
arXiv Detail & Related papers (2023-09-06T18:11:09Z) - AliasNet: Alias Artefact Suppression Network for Accelerated
Phase-Encode MRI [4.752084030395196]
Sparse reconstruction is an important aspect of MRI, helping to reduce acquisition time and improve spatial-temporal resolution.
Experiments conducted on retrospectively under-sampled brain and knee data demonstrate that combination of the proposed 1D AliasNet modules with existing 2D deep learned (DL) recovery techniques leads to an improvement in image quality.
arXiv Detail & Related papers (2023-02-17T13:16:17Z) - Deep network series for large-scale high-dynamic range imaging [2.3759432635713895]
We propose a new approach for large-scale high-dynamic range computational imaging.
Deep Neural Networks (DNNs) trained end-to-end can solve linear inverse imaging problems almost instantaneously.
Alternative Plug-and-Play approaches have proven effective to address high-dynamic range challenges, but rely on highly iterative algorithms.
arXiv Detail & Related papers (2022-10-28T11:13:41Z) - A New Backbone for Hyperspectral Image Reconstruction [90.48427561874402]
3D hyperspectral image (HSI) reconstruction refers to the inverse process of snapshot compressive imaging.
We propose a Spatial/Spectral Invariant Residual U-Net, namely SSI-ResU-Net.
We show that SSI-ResU-Net achieves competitive performance with an over 77.3% reduction in floating-point operations.
arXiv Detail & Related papers (2021-08-17T16:20:51Z) - PR-RRN: Pairwise-Regularized Residual-Recursive Networks for Non-rigid
Structure-from-Motion [58.75694870260649]
PR-RRN is a novel neural-network-based method for Non-rigid Structure-from-Motion.
We propose two new pairwise regularizations to further regularize the reconstruction.
Our approach achieves state-of-the-art performance on the CMU MOCAP and PASCAL3D+ datasets.
arXiv Detail & Related papers (2021-08-17T08:39:02Z) - Data Consistent CT Reconstruction from Insufficient Data with Learned
Prior Images [70.13735569016752]
We investigate the robustness of deep learning in CT image reconstruction by showing false negative and false positive lesion cases.
We propose a data consistent reconstruction (DCR) method to improve their image quality, which combines the advantages of compressed sensing and deep learning.
The efficacy of the proposed method is demonstrated in cone-beam CT with truncated data, limited-angle data and sparse-view data, respectively.
arXiv Detail & Related papers (2020-05-20T13:30:49Z) - Unlimited Resolution Image Generation with R2D2-GANs [69.90258455164513]
We present a novel simulation technique for generating high quality images of any predefined resolution.
This method can be used to synthesize sonar scans of size equivalent to those collected during a full-length mission.
The data produced is continuous, realistic-looking, and can be generated at least two times faster than the real speed of acquisition.
arXiv Detail & Related papers (2020-03-02T17:49:32Z)