Convergence of Nonconvex PnP-ADMM with MMSE Denoisers
- URL: http://arxiv.org/abs/2311.18810v1
- Date: Thu, 30 Nov 2023 18:52:47 GMT
- Title: Convergence of Nonconvex PnP-ADMM with MMSE Denoisers
- Authors: Chicago Park, Shirin Shoushtari, Weijie Gan, Ulugbek S. Kamilov
- Abstract summary: Plug-and-Play Alternating Direction Method of Multipliers (PnP-ADMM) is a widely-used algorithm for solving inverse problems with physical measurement models and CNN priors.
It has, however, been observed that PnP-ADMM often empirically converges even for expansive CNNs.
- Score: 8.034511587847158
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Plug-and-Play Alternating Direction Method of Multipliers (PnP-ADMM) is a
widely-used algorithm for solving inverse problems by integrating physical
measurement models and convolutional neural network (CNN) priors. PnP-ADMM has
been theoretically proven to converge for convex data-fidelity terms and
nonexpansive CNNs. It has however been observed that PnP-ADMM often empirically
converges even for expansive CNNs. This paper presents a theoretical
explanation for the observed stability of PnP-ADMM based on the interpretation
of the CNN prior as a minimum mean-squared error (MMSE) denoiser. Our
explanation parallels a similar argument recently made for the iterative
shrinkage/thresholding algorithm variant of PnP (PnP-ISTA) and relies on the
connection between MMSE denoisers and proximal operators. We also numerically
evaluate the performance gap between PnP-ADMM using a nonexpansive DnCNN
denoiser and expansive DRUNet denoiser, thus motivating the use of expansive
CNNs.
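
The abstract describes the PnP-ADMM scheme and the MMSE-denoiser/proximal-operator connection in words; below is a minimal NumPy sketch of the textbook PnP-ADMM loop for a linear forward model. This is not the authors' implementation: the forward operator A, measurement y, penalty parameter gamma, and the soft-thresholding stand-in for the CNN denoiser in the usage snippet are illustrative assumptions (in the paper's setting the denoiser would be a CNN such as DnCNN or DRUNet).

```python
import numpy as np

def pnp_admm(y, A, denoiser, gamma=1.0, num_iters=100):
    """Generic PnP-ADMM sketch for min_x 0.5*||A x - y||^2 + prior(x),
    where the prior's proximal operator is replaced by `denoiser`."""
    n = A.shape[1]
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    # x-update solves (A^T A + I/gamma) x = A^T y + (z - u)/gamma
    H = A.T @ A + np.eye(n) / gamma
    Aty = A.T @ y
    for _ in range(num_iters):
        x = np.linalg.solve(H, Aty + (z - u) / gamma)  # data-fidelity proximal step
        z = denoiser(x + u)                            # plug-in denoiser replaces the prior's prox
        u = u + x - z                                  # scaled dual update
    return x

# Toy usage: sparse recovery with soft-thresholding standing in for a CNN denoiser.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 128)) / np.sqrt(64)
x_true = np.zeros(128); x_true[::16] = 1.0
y = A @ x_true + 0.01 * rng.standard_normal(64)
soft = lambda v, t=0.05: np.sign(v) * np.maximum(np.abs(v) - t, 0.0)
x_hat = pnp_admm(y, A, soft, gamma=1.0)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```

Under the paper's interpretation, replacing `denoiser` with an MMSE denoiser (the conditional mean estimator for a given noise level) makes the z-update an implicit proximal step of a possibly nonconvex regularizer, which is the basis of the convergence argument.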
Related papers
- ProPINN: Demystifying Propagation Failures in Physics-Informed Neural Networks [71.02216400133858]
Physics-informed neural networks (PINNs) have earned high expectations in solving partial differential equations (PDEs).
Previous research observed the propagation failure phenomenon of PINNs.
This paper provides the first formal and in-depth study of propagation failure and its root cause.
arXiv Detail & Related papers (2025-02-02T13:56:38Z) - Deep-Unrolling Multidimensional Harmonic Retrieval Algorithms on Neuromorphic Hardware [78.17783007774295]
This paper explores the potential of conversion-based neuromorphic algorithms for highly accurate and energy-efficient single-snapshot multidimensional harmonic retrieval.
A novel method for converting the complex-valued convolutional layers and activations into spiking neural networks (SNNs) is developed.
The converted SNNs achieve almost five-fold power efficiency at moderate performance loss compared to the original CNNs.
arXiv Detail & Related papers (2024-12-05T09:41:33Z) - Prior Mismatch and Adaptation in PnP-ADMM with a Nonconvex Convergence
Analysis [20.63188897629508]
Plug-and-Play Priors is a widely-used family of methods for solving inverse imaging problems.
PnP methods have been shown to achieve state-of-the-art performance when the prior is obtained using powerful deep denoisers.
arXiv Detail & Related papers (2023-09-29T20:49:00Z) - On the Contractivity of Plug-and-Play Operators [11.218821754886514]
In plug-and-play (PnP) regularization, the proximal operator in algorithms such as ISTA and ADMM is replaced by a powerful denoiser.
This formal substitution works surprisingly well in practice.
In fact, PnP has been shown to give state-of-the-art results for various imaging applications.
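
The contractivity question above, and the main paper's comparison of a nonexpansive DnCNN with an expansive DRUNet, both hinge on the Lipschitz constant of the denoiser. Below is a minimal NumPy sketch, not taken from either paper, that estimates a denoiser's local Lipschitz constant by power iteration on its Jacobian using finite-difference Jacobian-vector products; the denoiser function, test point, step size eps, and iteration count are illustrative assumptions.

```python
import numpy as np

def local_lipschitz(denoiser, x, num_iters=20, eps=1e-3, seed=0):
    """Estimate the local Lipschitz constant of `denoiser` at x.

    Power iteration on the Jacobian J(x), with J(x) v approximated by the
    finite difference (D(x + eps*v) - D(x)) / eps.  For denoisers with a
    symmetric Jacobian (e.g. exact MMSE or gradient-step denoisers) the
    dominant eigenvalue equals the spectral norm; in general it is only a
    lower bound.  A value <= 1 suggests local nonexpansiveness.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(x.shape)
    v /= np.linalg.norm(v)
    dx = denoiser(x)
    sigma = 0.0
    for _ in range(num_iters):
        jv = (denoiser(x + eps * v) - dx) / eps   # Jacobian-vector product
        sigma = np.linalg.norm(jv)
        v = jv / (sigma + 1e-12)
    return sigma

# Toy check: a symmetric 5-tap moving-average "denoiser" is nonexpansive.
blur = lambda z: np.convolve(z, np.ones(5) / 5.0, mode="same")
x0 = np.random.default_rng(1).standard_normal(256)
print(local_lipschitz(blur, x0))   # approximately <= 1
```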
arXiv Detail & Related papers (2023-09-28T23:58:02Z) - Proximal denoiser for convergent plug-and-play optimization with
nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through iterative proximal algorithms by replacing a proximal operator with a denoising operator.
We show that this gradient-step denoiser actually corresponds to the proximal operator of a nonconvex functional.
arXiv Detail & Related papers (2022-01-31T14:05:20Z) - Provably Efficient Neural Estimation of Structural Equation Model: An
Adversarial Approach [144.21892195917758]
We study estimation in a class of generalized structural equation models (SEMs).
We formulate the linear operator equation as a min-max game, where both players are parameterized by neural networks (NNs), and learn the parameters of these neural networks using gradient descent.
For the first time we provide a tractable estimation procedure for SEMs based on NNs with provable convergence and without the need for sample splitting.
arXiv Detail & Related papers (2020-07-02T17:55:47Z) - Belief Propagation Neural Networks [103.97004780313105]
We introduce belief propagation neural networks (BPNNs).
BPNNs operate on factor graphs and generalize belief propagation (BP).
We show that BPNNs converge 1.7x faster on Ising models while providing tighter bounds.
On challenging model counting problems, BPNNs compute estimates hundreds of times faster than state-of-the-art handcrafted methods.
arXiv Detail & Related papers (2020-07-01T07:39:51Z) - P-ADMMiRNN: Training RNN with Stable Convergence via An Efficient and
Paralleled ADMM Approach [17.603762011446843]
It is hard to train a Recurrent Neural Network (RNN) with stable convergence while avoiding the vanishing and exploding gradient problems.
This work builds a new framework named ADMMiRNN on the unfolded form of the RNN to address these challenges simultaneously.
arXiv Detail & Related papers (2020-06-10T02:43:11Z) - Scalable Plug-and-Play ADMM with Convergence Guarantees [24.957046830965822]
We propose an incremental variant of the widely used PnP-ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
arXiv Detail & Related papers (2020-06-05T04:10:15Z) - Communication-Efficient Distributed Stochastic AUC Maximization with
Deep Neural Networks [50.42141893913188]
We study distributed stochastic AUC maximization at large scale with a deep neural network as the predictive model.
Our method requires far fewer communication rounds while retaining theoretical convergence guarantees.
Experiments on several benchmark datasets demonstrate the effectiveness of our method and corroborate the theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z) - Q-GADMM: Quantized Group ADMM for Communication Efficient Decentralized Machine Learning [66.18202188565922]
We propose a communication-efficient decentralized machine learning (ML) algorithm, coined quantized group ADMM (Q-GADMM).
We develop a novel quantization method to adaptively adjust quantization levels and their probabilities, while proving the convergence of Q-GADMM for convex loss functions.
arXiv Detail & Related papers (2019-10-23T10:47:06Z)