Tuning-free multi-coil compressed sensing MRI with Parallel Variable
Density Approximate Message Passing (P-VDAMP)
- URL: http://arxiv.org/abs/2203.04180v1
- Date: Tue, 8 Mar 2022 16:11:41 GMT
- Title: Tuning-free multi-coil compressed sensing MRI with Parallel Variable
Density Approximate Message Passing (P-VDAMP)
- Authors: Charles Millard, Mark Chiew, Jared Tanner, Aaron T. Hess and Boris
Mailhe
- Abstract summary: The Parallel Variable Density Approximate Message Passing (P-VDAMP) algorithm is proposed.
State evolution is leveraged to automatically tune sparse parameters on-the-fly with Stein's Unbiased Risk Estimate (SURE).
The proposed method is found to have reconstruction quality and time to convergence similar to FISTA with an optimally tuned sparse weighting.
- Score: 2.624902795082451
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Purpose: To develop a tuning-free method for multi-coil compressed sensing
MRI that performs competitively with algorithms with an optimally tuned sparse
parameter.
Theory: The Parallel Variable Density Approximate Message Passing (P-VDAMP)
algorithm is proposed. For Bernoulli random variable density sampling, P-VDAMP
obeys a "state evolution", where the intermediate per-iteration image estimate
is distributed according to the ground truth corrupted by a Gaussian vector
with approximately known covariance. State evolution is leveraged to
automatically tune sparse parameters on-the-fly with Stein's Unbiased Risk
Estimate (SURE).
Methods: P-VDAMP is evaluated on brain, knee and angiogram datasets at
acceleration factors 5 and 10 and compared with four variants of the Fast
Iterative Shrinkage-Thresholding algorithm (FISTA), including two tuning-free
variants from the literature.
Results: The proposed method is found to have reconstruction quality and time
to convergence similar to FISTA with an optimally tuned sparse weighting.
Conclusions: P-VDAMP is an efficient, robust and principled method for
on-the-fly parameter tuning that is competitive with optimally tuned FISTA and
offers substantial robustness and reconstruction quality improvements over
competing tuning-free methods.
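To make the tuning step concrete: under the state evolution described in the Theory paragraph, the intermediate estimate behaves like the ground truth plus Gaussian noise of approximately known variance, so a soft-thresholding parameter can be chosen each iteration by minimizing SURE. The sketch below is a simplified real-valued, scalar-variance illustration of that idea, not P-VDAMP itself (which operates on complex multi-coil k-space data with per-subband variances); the grid search and variable names are assumptions.

```python
import numpy as np

def soft_threshold(r, lam):
    """Soft-thresholding denoiser applied entrywise."""
    return np.sign(r) * np.maximum(np.abs(r) - lam, 0.0)

def sure_soft(r, lam, tau2):
    """Stein's Unbiased Risk Estimate of the MSE of soft-thresholding,
    assuming r = x + noise with i.i.d. N(0, tau2) entries."""
    n = r.size
    return (-n * tau2
            + np.sum(np.minimum(r ** 2, lam ** 2))
            + 2.0 * tau2 * np.count_nonzero(np.abs(r) > lam))

def sure_tuned_denoise(r, tau2, n_grid=100):
    """Pick the threshold minimizing SURE over a grid, then denoise."""
    lams = np.linspace(0.0, np.max(np.abs(r)), n_grid)
    risks = np.array([sure_soft(r, lam, tau2) for lam in lams])
    lam_star = lams[risks.argmin()]
    return soft_threshold(r, lam_star), lam_star

# Toy usage: sparse signal observed in Gaussian noise of known variance.
rng = np.random.default_rng(0)
x = np.zeros(2048)
x[rng.choice(2048, 50, replace=False)] = rng.normal(0, 3, 50)
tau2 = 0.25
r = x + rng.normal(0, np.sqrt(tau2), x.shape)
x_hat, lam_star = sure_tuned_denoise(r, tau2)
```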
Related papers
- Gradient Normalization with(out) Clipping Ensures Convergence of Nonconvex SGD under Heavy-Tailed Noise with Improved Results [60.92029979853314]
This paper investigates normalized SGD with(out) clipping (NSGDC) and its variance-reduction variant (NSGDC-VR).
We present significant improvements in the theoretical results for both algorithms.
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - An Adaptive Re-evaluation Method for Evolution Strategy under Additive Noise [3.92625489118339]
We propose a novel method to adaptively choose the optimal re-evaluation number for function values corrupted by additive Gaussian white noise.
We experimentally compare our method to the state-of-the-art noise-handling methods for CMA-ES on a set of artificial test functions.
arXiv Detail & Related papers (2024-09-25T09:10:21Z) - Variance-Reducing Couplings for Random Features [57.73648780299374]
Random features (RFs) are a popular technique to scale up kernel methods in machine learning.
We find couplings to improve RFs defined on both Euclidean and discrete input spaces.
We reach surprising conclusions about the benefits and limitations of variance reduction as a paradigm.
arXiv Detail & Related papers (2024-05-26T12:25:09Z) - Adaptive Online Bayesian Estimation of Frequency Distributions with Local Differential Privacy [0.4604003661048266]
We propose a novel approach for the adaptive and online estimation of the frequency distribution of a finite number of categories under the local differential privacy (LDP) framework.
The proposed algorithm performs Bayesian parameter estimation via posterior sampling and adapts the randomization mechanism for LDP based on the obtained posterior samples.
We provide a theoretical analysis showing that (i) the posterior distribution targeted by the algorithm converges to the true parameter even for approximate posterior sampling, and (ii) the algorithm selects the optimal subset with high probability if posterior sampling is performed exactly.
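As a loose illustration of the posterior-sampling component described above (a sketch under assumptions, not the paper's algorithm): under k-ary randomized response, a data-augmentation Gibbs sampler can alternate between imputing each user's true category and drawing the frequency vector from a Dirichlet posterior. The privacy budget, prior, and toy data below are made up, and the paper's adaptation of the randomization mechanism from the posterior samples is not reproduced.

```python
import numpy as np

def krr_report(true_cat, k, eps, rng):
    """k-ary randomized response: keep the true category w.p. e^eps/(e^eps+k-1)."""
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    if rng.random() < p_keep:
        return true_cat
    return rng.choice([c for c in range(k) if c != true_cat])

def gibbs_frequencies(reports, k, eps, n_iter=200, alpha=1.0, rng=None):
    """Data-augmentation Gibbs: impute latent true categories, then sample theta."""
    rng = rng or np.random.default_rng()
    p_keep = np.exp(eps) / (np.exp(eps) + k - 1)
    p_flip = 1.0 / (np.exp(eps) + k - 1)
    theta = np.full(k, 1.0 / k)
    samples = []
    for _ in range(n_iter):
        counts = np.zeros(k)
        for rep in reports:
            # P(true = j | report, theta) proportional to theta_j * P(report | true = j)
            lik = np.full(k, p_flip)
            lik[rep] = p_keep
            post = theta * lik
            post /= post.sum()
            counts[rng.choice(k, p=post)] += 1
        theta = rng.dirichlet(alpha + counts)
        samples.append(theta)
    return np.array(samples)

# Toy usage with a hypothetical 4-category distribution.
rng = np.random.default_rng(0)
true = rng.choice(4, p=[0.5, 0.3, 0.15, 0.05], size=500)
reports = [krr_report(c, k=4, eps=1.0, rng=rng) for c in true]
posterior_samples = gibbs_frequencies(reports, k=4, eps=1.0, rng=rng)
```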
arXiv Detail & Related papers (2024-05-11T13:59:52Z) - Plug-and-Play split Gibbs sampler: embedding deep generative priors in
Bayesian inference [12.91637880428221]
This paper introduces a plug-and-play sampling algorithm that leverages variable splitting to efficiently sample from a posterior distribution.
It divides the challenging task of posterior sampling into two simpler sampling problems.
Its performance is compared to recent state-of-the-art optimization and sampling methods.
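A minimal toy version of the variable-splitting idea described above, with a Gaussian prior standing in for the deep generative prior so that both conditional sampling steps are closed-form; in the actual plug-and-play sampler the z-step would be carried out by a learned denoiser or generative model, and the coupling scale rho here is an illustrative assumption.

```python
import numpy as np

def split_gibbs(y, A, sigma, rho, tau, n_iter=500, rng=None):
    """Toy split Gibbs sampler for p(x | y), y = A x + N(0, sigma^2 I), with
    prior x ~ N(0, tau^2 I) and an auxiliary z coupled to x by a Gaussian of
    scale rho (variable splitting)."""
    rng = rng or np.random.default_rng()
    M, N = A.shape
    z = np.zeros(N)
    xs = []
    # x-step covariance: likelihood and coupling are both Gaussian in x.
    Sigma_x = np.linalg.inv(A.T @ A / sigma**2 + np.eye(N) / rho**2)
    chol_x = np.linalg.cholesky(Sigma_x)
    for _ in range(n_iter):
        # Sample x | z, y (Gaussian).
        mu_x = Sigma_x @ (A.T @ y / sigma**2 + z / rho**2)
        x = mu_x + chol_x @ rng.standard_normal(N)
        # Sample z | x under the prior; with a deep generative prior this step
        # would be a learned denoising/sampling step, here it is Gaussian too.
        var_z = 1.0 / (1.0 / rho**2 + 1.0 / tau**2)
        z = var_z * x / rho**2 + np.sqrt(var_z) * rng.standard_normal(N)
        xs.append(x)
    return np.array(xs)

# Toy usage on a small random inverse problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((30, 10)) / np.sqrt(30)
x_true = rng.standard_normal(10)
y = A @ x_true + 0.1 * rng.standard_normal(30)
chain = split_gibbs(y, A, sigma=0.1, rho=0.5, tau=1.0, rng=rng)
```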
arXiv Detail & Related papers (2023-04-21T17:17:51Z) - Robust Quantitative Susceptibility Mapping via Approximate Message
Passing with Parameter Estimation [14.22930572798757]
We propose a probabilistic Bayesian approach for quantitative susceptibility mapping (QSM) with built-in parameter estimation.
On the simulated Sim2Snr1 dataset, AMP-PE achieved the lowest NRMSE and DFCM and the highest SSIM.
On the in vivo datasets, AMP-PE is robust and successfully recovers the susceptibility maps using the estimated parameters.
arXiv Detail & Related papers (2022-07-29T14:38:03Z) - Alternating Learning Approach for Variational Networks and Undersampling
Pattern in Parallel MRI Applications [0.9558392439655014]
We propose an alternating learning approach to learn the sampling pattern (SP) and the parameters of variational networks (VN) in accelerated parallel magnetic resonance imaging (MRI).
The proposed approach was stable and learned effective SPs with the corresponding VN parameters that produce images with better quality than other approaches.
arXiv Detail & Related papers (2021-10-27T18:42:03Z) - Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and
Beyond [63.59034509960994]
We study shuffling-based variants: minibatch and local Random Reshuffling, which draw gradients without replacement.
For smooth functions satisfying the Polyak-Lojasiewicz condition, we obtain convergence bounds which show that these shuffling-based variants converge faster than their with-replacement counterparts.
We propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.
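For context, a bare-bones sketch contrasting with-replacement sampling with the without-replacement (Random Reshuffling) scheme discussed above, on a least-squares toy problem; the step size, model, and data are placeholders, and the paper's minibatch/local variants and synchronized-shuffling modification are not reproduced.

```python
import numpy as np

def sgd(A, b, epochs, lr, shuffle, rng):
    """One-sample SGD on 0.5 * ||A x - b||^2; shuffle=True gives Random
    Reshuffling (each epoch is a without-replacement pass over the data)."""
    n, d = A.shape
    x = np.zeros(d)
    for _ in range(epochs):
        idx = rng.permutation(n) if shuffle else rng.integers(0, n, size=n)
        for i in idx:
            grad = (A[i] @ x - b[i]) * A[i]   # per-sample gradient
            x -= lr * grad
    return x

# Toy usage on a synthetic least-squares problem.
rng = np.random.default_rng(0)
A = rng.standard_normal((200, 10))
x_true = rng.standard_normal(10)
b = A @ x_true + 0.01 * rng.standard_normal(200)
x_rr = sgd(A, b, epochs=50, lr=0.01, shuffle=True, rng=rng)   # Random Reshuffling
x_wr = sgd(A, b, epochs=50, lr=0.01, shuffle=False, rng=rng)  # with replacement
```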
arXiv Detail & Related papers (2021-10-20T02:25:25Z) - Variational Refinement for Importance Sampling Using the Forward
Kullback-Leibler Divergence [77.06203118175335]
Variational Inference (VI) is a popular alternative to exact sampling in Bayesian inference.
Importance sampling (IS) is often used to fine-tune and de-bias the estimates of approximate Bayesian inference procedures.
We propose a novel combination of optimization and sampling techniques for approximate Bayesian inference.
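A small sketch of the importance-sampling de-biasing step mentioned above: draw samples from a (here, fixed Gaussian) variational approximation q and reweight them by the unnormalized target. The target density, proposal parameters, and test function are assumptions, and the paper's forward-KL refinement of q is not shown.

```python
import numpy as np

def snis_expectation(log_target, q_mean, q_std, f, n_samples=10_000, rng=None):
    """Self-normalized importance sampling estimate of E_p[f(x)] with a
    Gaussian proposal q = N(q_mean, q_std^2); log_target is the unnormalized
    log posterior."""
    rng = rng or np.random.default_rng()
    x = rng.normal(q_mean, q_std, n_samples)
    log_q = -0.5 * ((x - q_mean) / q_std) ** 2 - np.log(q_std * np.sqrt(2 * np.pi))
    log_w = log_target(x) - log_q
    w = np.exp(log_w - log_w.max())   # stabilize before normalizing
    w /= w.sum()
    return np.sum(w * f(x))

# Toy usage: target is an unnormalized N(1, 0.5^2), proposal is a broader Gaussian.
log_p = lambda x: -0.5 * ((x - 1.0) / 0.5) ** 2
mean_est = snis_expectation(log_p, q_mean=0.0, q_std=2.0, f=lambda x: x)
```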
arXiv Detail & Related papers (2021-06-30T11:00:24Z) - Sampling-free Variational Inference for Neural Networks with
Multiplicative Activation Noise [51.080620762639434]
We propose a more efficient parameterization of the posterior approximation for sampling-free variational inference.
Our approach yields competitive results for standard regression problems and scales well to large-scale image classification tasks.
arXiv Detail & Related papers (2021-03-15T16:16:18Z) - Plug-And-Play Learned Gaussian-mixture Approximate Message Passing [71.74028918819046]
We propose a plug-and-play compressed sensing (CS) recovery algorithm suitable for any i.i.d. source prior.
Our algorithm builds upon Borgerding's learned AMP (LAMP), yet significantly improves it by adopting a universal denoising function within the algorithm.
Numerical evaluation shows that the L-GM-AMP algorithm achieves state-of-the-art performance without any knowledge of the source prior.
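For reference, a compact sketch of a generic AMP iteration with a plug-in denoiser and the Onsager correction term, assuming a measurement matrix with i.i.d. N(0, 1/M) entries; the learned Gaussian-mixture denoiser of L-GM-AMP would replace the soft-thresholding step used here, and the alpha*tau threshold rule is a common heuristic rather than the paper's choice.

```python
import numpy as np

def soft(r, t):
    return np.sign(r) * np.maximum(np.abs(r) - t, 0.0)

def amp(y, A, n_iter=30, alpha=1.4):
    """Generic AMP for y = A x + noise with a plug-in scalar denoiser.
    Assumes A has i.i.d. N(0, 1/M) entries so the usual heuristics apply."""
    M, N = A.shape
    x = np.zeros(N)
    z = y.copy()
    for _ in range(n_iter):
        tau = np.linalg.norm(z) / np.sqrt(M)      # effective noise level
        r = x + A.T @ z                           # pseudo-data, roughly x + N(0, tau^2)
        x_new = soft(r, alpha * tau)              # plug-in denoiser
        deriv = np.mean(np.abs(r) > alpha * tau)  # average denoiser derivative
        z = y - A @ x_new + (N / M) * deriv * z   # residual with Onsager term
        x = x_new
    return x

# Toy usage: recover a sparse vector from compressed Gaussian measurements.
rng = np.random.default_rng(0)
M, N = 250, 500
A = rng.normal(0, 1 / np.sqrt(M), (M, N))
x0 = np.zeros(N)
x0[rng.choice(N, 25, replace=False)] = rng.normal(0, 1, 25)
y = A @ x0 + 0.01 * rng.normal(0, 1, M)
x_hat = amp(y, A)
```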
arXiv Detail & Related papers (2020-11-18T16:40:45Z)