Recovery Analysis for Plug-and-Play Priors using the Restricted
Eigenvalue Condition
- URL: http://arxiv.org/abs/2106.03668v1
- Date: Mon, 7 Jun 2021 14:45:38 GMT
- Title: Recovery Analysis for Plug-and-Play Priors using the Restricted
Eigenvalue Condition
- Authors: Jiaming Liu, M. Salman Asif, Brendt Wohlberg, and Ulugbek S. Kamilov
- Abstract summary: We show how to establish theoretical recovery guarantees for the plug-and-play priors (PnP) and regularization by denoising (RED) methods.
Our results suggest that PnP with a pre-trained artifact removal network provides significantly better results than existing state-of-the-art methods.
- Score: 48.08511796234349
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The plug-and-play priors (PnP) and regularization by denoising (RED) methods
have become widely used for solving inverse problems by leveraging pre-trained
deep denoisers as image priors. While the empirical imaging performance and the
theoretical convergence properties of these algorithms have been widely
investigated, their recovery properties have not previously been theoretically
analyzed. We address this gap by showing how to establish theoretical recovery
guarantees for PnP/RED by assuming that the solution of these methods lies near
the fixed-points of a deep neural network. We also present numerical results
comparing the recovery performance of PnP/RED in compressive sensing against
that of recent compressive sensing algorithms based on generative models. Our
numerical results suggest that PnP with a pre-trained artifact removal network
provides significantly better results compared to the existing state-of-the-art
methods.
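The PnP setting analyzed in the abstract can be illustrated with a minimal sketch of PnP proximal gradient descent, where the proximal operator of an explicit prior is replaced by a denoiser D. This is a toy illustration, not the paper's method: the denoiser here is a plain callable (an identity map in the usage below), standing in for the pre-trained artifact removal network, and the step size and iteration count are illustrative choices.

```python
import numpy as np

def pnp_pgm(y, A, denoiser, gamma=0.2, iters=500):
    """Plug-and-play proximal gradient sketch:
    x_{k+1} = D(x_k - gamma * A^T (A x_k - y)),
    with the prior's proximal operator replaced by a denoiser D."""
    x = A.T @ y  # simple initialization from the measurements
    for _ in range(iters):
        grad = A.T @ (A @ x - y)        # gradient of 0.5 * ||A x - y||^2
        x = denoiser(x - gamma * grad)  # denoiser in place of the prox step
    return x

# Toy usage: with an identity "denoiser", PnP-PGM reduces to plain
# gradient descent on the least-squares objective.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 4.0])
x_hat = pnp_pgm(y, A, denoiser=lambda z: z)
```

With the identity denoiser the iteration converges to the least-squares solution A^{-1} y = (1, 2); swapping in a learned denoiser is what injects the implicit image prior, and the paper's recovery analysis asks how close the resulting fixed point is to the true signal.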
Related papers
- Unrolled denoising networks provably learn optimal Bayesian inference [54.79172096306631]
We prove the first rigorous learning guarantees for neural networks based on unrolling approximate message passing (AMP).
For compressed sensing, we prove that when trained on data drawn from a product prior, the layers of the network converge to the same denoisers used in Bayes AMP.
arXiv Detail & Related papers (2024-09-19T17:56:16Z) - Pruning Deep Neural Networks from a Sparsity Perspective [34.22967841734504]
Pruning is often achieved by dropping redundant weights, neurons, or layers of a deep network while attempting to retain a comparable test performance.
We propose PQ Index (PQI) to measure the potential compressibility of deep neural networks and use this to develop a Sparsity-informed Adaptive Pruning (SAP) algorithm.
arXiv Detail & Related papers (2023-02-11T04:52:20Z) - Online Deep Equilibrium Learning for Regularization by Denoising [20.331171081002957]
Deep equilibrium models (DEQ) and Regularization by Denoising (RED) are widely used frameworks for solving inverse imaging problems by computing fixed-points.
We propose ODER as a new strategy for improving the efficiency of DEQ/RED with respect to the total number of measurements.
Our numerical results suggest potential improvements in training/testing complexity due to ODER on three distinct imaging applications.
arXiv Detail & Related papers (2022-05-25T21:06:22Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through proximal algorithms by replacing a proximal operator with a denoising neural network.
We show that this denoiser actually corresponds to the gradient of a function.
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - On Maximum-a-Posteriori estimation with Plug & Play priors and
stochastic gradient descent [13.168923974530307]
Methods to solve imaging problems usually combine an explicit data likelihood function with a prior that explicitly models expected properties of the solution.
In a departure from explicit modelling, several recent works have proposed and studied the use of implicit priors defined by an image denoising algorithm.
arXiv Detail & Related papers (2022-01-16T20:50:08Z) - Robust lEarned Shrinkage-Thresholding (REST): Robust unrolling for
sparse recovery [87.28082715343896]
We consider deep neural networks for solving inverse problems that are robust to forward model mis-specifications.
We design a new robust deep neural network architecture by applying algorithm unfolding techniques to a robust version of the underlying recovery problem.
The proposed REST network is shown to outperform state-of-the-art model-based and data-driven algorithms in both compressive sensing and radar imaging problems.
arXiv Detail & Related papers (2021-10-20T06:15:45Z) - Kernel-Based Smoothness Analysis of Residual Networks [85.20737467304994]
Residual networks (ResNets) stand out among these powerful modern architectures.
In this paper, we show another distinction between the two models, namely, a tendency of ResNets to promote smoother interpolations than standard fully-connected networks.
arXiv Detail & Related papers (2020-09-21T16:32:04Z) - Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
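The PnP-ADMM scheme referenced above can be sketched in a few lines: ADMM alternates a data-fidelity least-squares solve, a prior step in which the proximal operator is replaced by a denoiser, and a dual update. This is a minimal illustration under assumed parameters (penalty rho, identity denoiser as a stand-in), not the incremental variant the paper proposes.

```python
import numpy as np

def pnp_admm(y, A, denoiser, rho=1.0, iters=200):
    """Plug-and-play ADMM sketch: the prior's proximal step is a denoiser."""
    n = A.shape[1]
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    M = A.T @ A + rho * np.eye(n)  # system matrix for the x-update
    Aty = A.T @ y
    for _ in range(iters):
        x = np.linalg.solve(M, Aty + rho * (z - u))  # data-fidelity prox
        z = denoiser(x + u)                          # denoiser replaces prior prox
        u = u + x - z                                # dual (scaled) update
    return x

# Toy usage: with an identity "denoiser", the fixed point is the
# least-squares solution of ||A x - y||^2.
A = np.array([[1.0, 0.0], [0.0, 2.0]])
y = np.array([1.0, 4.0])
x_hat = pnp_admm(y, A, denoiser=lambda v: v)
```

The incremental variant in the paper replaces the full gradient/least-squares step with updates over subsets of the measurements, which is what makes the method scale to large datasets.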
arXiv Detail & Related papers (2020-06-05T04:10:15Z) - On the Convergence Rate of Projected Gradient Descent for a
Back-Projection based Objective [58.33065918353532]
We consider a back-projection (BP) based fidelity term as an alternative to the common least squares (LS) term.
We show that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms.
arXiv Detail & Related papers (2020-05-03T00:58:23Z) - Revisiting Initialization of Neural Networks [72.24615341588846]
We propose a rigorous estimation of the global curvature of weights across layers by approximating and controlling the norm of their Hessian matrix.
Our experiments on Word2Vec and the MNIST/CIFAR image classification tasks confirm that tracking the Hessian norm is a useful diagnostic tool.
arXiv Detail & Related papers (2020-04-20T18:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.