Scalable Plug-and-Play ADMM with Convergence Guarantees
- URL: http://arxiv.org/abs/2006.03224v2
- Date: Fri, 22 Jan 2021 14:42:06 GMT
- Title: Scalable Plug-and-Play ADMM with Convergence Guarantees
- Authors: Yu Sun, Zihui Wu, Xiaojian Xu, Brendt Wohlberg, and Ulugbek S. Kamilov
- Abstract summary: We propose an incremental variant of the widely used
PnP-ADMM algorithm, making it scalable to large-scale datasets.
We theoretically analyze the convergence of the algorithm under a set of explicit assumptions.
- Score: 24.957046830965822
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-play priors (PnP) is a broadly applicable methodology for solving
inverse problems by exploiting statistical priors specified as denoisers.
Recent work has reported the state-of-the-art performance of PnP algorithms
using pre-trained deep neural nets as denoisers in a number of imaging
applications. However, current PnP algorithms are impractical in large-scale
settings due to their heavy computational and memory requirements. This work
addresses this issue by proposing an incremental variant of the widely used
PnP-ADMM algorithm, making it scalable to large-scale datasets. We
theoretically analyze the convergence of the algorithm under a set of explicit
assumptions, extending recent theoretical results in the area. Additionally, we
show the effectiveness of our algorithm with nonsmooth data-fidelity terms and
deep neural net priors, its fast convergence compared to existing PnP
algorithms, and its scalability in terms of speed and memory.
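A minimal sketch of the incremental idea, assuming hypothetical `prox_f` and `denoise` callables (an illustration of the update structure, not the paper's exact pseudocode): each iteration visits a single data block, applies that block's data-fidelity proximal step, then the denoiser in place of the prior's proximal map, then the dual update.

```python
import numpy as np

def incremental_pnp_admm(prox_f, denoise, x0, n_blocks,
                         gamma=1.0, num_iters=200, seed=0):
    """Sketch of incremental PnP-ADMM (hypothetical interface).

    prox_f(i, v, gamma): proximal operator of the i-th data-fidelity term f_i
    denoise(v):          pre-trained denoiser acting as the plug-in prior
    """
    rng = np.random.default_rng(seed)
    x = x0.copy()             # current image estimate
    s = np.zeros_like(x0)     # scaled dual variable
    for _ in range(num_iters):
        i = rng.integers(n_blocks)     # visit one data block per iteration
        z = prox_f(i, x - s, gamma)    # partial data-fidelity update
        x = denoise(z + s)             # denoiser replaces the prior's prox
        s = s + z - x                  # dual (running residual) update
    return x
```

Because only one data block is touched per iteration, per-iteration cost and memory stay roughly constant as the dataset grows, which is what makes the scheme scalable.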
Related papers
- A Unified Plug-and-Play Algorithm with Projected Landweber Operator for Split Convex Feasibility Problems [6.185478918618347]
In recent years, Plug-and-Play methods have achieved state-of-the-art performance in inverse imaging problems by replacing proximal operators with denoisers.
However, applying these methods with theoretically guaranteed step sizes is difficult, and existing guarantees are limited to specific noise models.
An algorithm based on the Projected Landweber Operator (PLO) is proposed to address these issues (sketched below).
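For context, the classical projected Landweber iteration that such a method builds on looks as follows; this is a generic sketch, with `project` standing in for the constraint-set projection (or, in a PnP variant, a denoiser):

```python
import numpy as np

def projected_landweber(A, y, project, step=1.0, num_iters=100):
    """Classical projected Landweber iteration for Ax ~= y.
    step should satisfy 0 < step < 2 / ||A||^2 for convergence."""
    x = np.zeros(A.shape[1])
    for _ in range(num_iters):
        x = x + step * A.T @ (y - A @ x)  # gradient step on ||Ax - y||^2 / 2
        x = project(x)                    # projection (or denoiser in PnP)
    return x
```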
arXiv Detail & Related papers (2024-08-22T03:29:51Z)
- Fast, Scalable, Warm-Start Semidefinite Programming with Spectral Bundling and Sketching [53.91395791840179]
We present Unified Spectral Bundling with Sketching (USBS), a provably correct, fast and scalable algorithm for solving massive SDPs.
USBS provides a 500x speed-up over the state-of-the-art scalable SDP solver on an instance with over 2 billion decision variables.
arXiv Detail & Related papers (2023-12-19T02:27:22Z)
- Unfolded proximal neural networks for robust image Gaussian denoising [7.018591019975253]
We propose a unified framework to build PNNs for the Gaussian denoising task, based on both the dual-FB and the primal-dual Chambolle-Pock algorithms.
We also show that accelerated versions of these algorithms enable skip connections in the associated NN layers.
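As a point of reference, one Chambolle-Pock iteration of the kind these networks unfold, here for 1D total-variation denoising (a self-contained sketch; in the paper's unfolded PNNs the corresponding operators become learned network layers):

```python
import numpy as np

def cp_tv_denoise_1d(y, lam=0.5, num_iters=300):
    """1D TV denoising via Chambolle-Pock:
    min_x 0.5*||x - y||^2 + lam*||Dx||_1, D = forward differences."""
    n = len(y)
    D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]  # forward-difference operator
    L = 2.0                                           # ||D|| <= 2 for differences
    tau = sigma = 1.0 / L                             # steps with tau*sigma*L^2 <= 1
    x = y.copy(); x_bar = y.copy()
    p = np.zeros(n - 1)                               # dual variable
    for _ in range(num_iters):
        p = np.clip(p + sigma * (D @ x_bar), -lam, lam)  # dual prox: clip to [-lam, lam]
        x_old = x
        x = (x - tau * (D.T @ p) + tau * y) / (1 + tau)  # primal prox of 0.5*||x - y||^2
        x_bar = 2 * x - x_old                            # over-relaxation (theta = 1)
    return x
```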
arXiv Detail & Related papers (2023-08-06T15:32:16Z)
- A relaxed proximal gradient descent algorithm for convergent plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) algorithm based on a relaxed proximal gradient descent scheme with a proximal denoiser.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image; a generic sketch of the iteration follows.
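A generic sketch of a relaxed PnP proximal gradient iteration, assuming hypothetical `grad_f` and `denoise` callables (the paper's specific proximal-denoiser construction is not reproduced here):

```python
def pnp_pgd(grad_f, denoise, x0, gamma=1.0, alpha=1.0, num_iters=100):
    """Relaxed PnP proximal gradient descent: a gradient step on the
    data-fidelity term f, the denoiser in place of the prior's proximal
    map, and a relaxation parameter alpha in (0, 1]."""
    x = x0.copy()
    for _ in range(num_iters):
        z = denoise(x - gamma * grad_f(x))  # gradient step + plug-in denoiser
        x = (1 - alpha) * x + alpha * z     # relaxed update
    return x
```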
arXiv Detail & Related papers (2023-01-31T16:11:47Z)
- Improved Algorithms for Neural Active Learning [74.89097665112621]
We improve the theoretical and empirical performance of neural-network(NN)-based active learning algorithms for the non-parametric streaming setting.
We introduce two regret metrics based on the population loss that are more suitable for active learning than the one used in state-of-the-art (SOTA) related work.
arXiv Detail & Related papers (2022-10-02T05:03:38Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T\sqrt{T})$.
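For reference, the standard closed-form EI acquisition under a Gaussian posterior (written for minimization, with the incumbent value supplied externally, e.g. the minimum of the GP predictive mean over evaluated points, as in the paper's variant):

```python
import numpy as np
from scipy.stats import norm

def expected_improvement(mu, sigma, incumbent):
    """Closed-form EI for a Gaussian posterior N(mu, sigma^2):
    E[max(incumbent - f(x), 0)] for minimization."""
    sigma = np.maximum(sigma, 1e-12)  # guard against zero predictive variance
    z = (incumbent - mu) / sigma
    return (incumbent - mu) * norm.cdf(z) + sigma * norm.pdf(z)
```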
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Recovery Analysis for Plug-and-Play Priors using the Restricted Eigenvalue Condition [48.08511796234349]
We show how to establish theoretical recovery guarantees for the plug-and-play priors (PnP) and regularization by denoising (RED) methods.
Our results suggest that models with a pre-trained artifact-removal network provide significantly better results compared to existing state-of-the-art methods; a generic RED-style update is sketched below.
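For context, a generic RED gradient step (a sketch of the standard formulation, not this paper's analysis), in which the denoiser residual x - D(x) plays the role of the regularizer's gradient:

```python
def red_step(x, grad_f, denoise, gamma=0.1, tau=1.0):
    """One generic RED gradient step: data-fidelity gradient plus the
    denoiser residual acting as an explicit regularization gradient."""
    return x - gamma * (grad_f(x) + tau * (x - denoise(x)))
```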
arXiv Detail & Related papers (2021-06-07T14:45:38Z)
- Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie [13.476505672245603]
This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with Plug & Play priors.
We introduce two algorithms: 1) PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Plug & Play Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
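A rough sketch of a PnP unadjusted Langevin step of this kind, assuming hypothetical `grad_loglik` and `denoise` callables; the denoiser residual approximates the prior score via Tweedie's formula:

```python
import numpy as np

def pnp_ula(grad_loglik, denoise, x0, delta=1e-4, alpha=1.0, eps=1.0,
            num_samples=1000, seed=0):
    """Sketch of a PnP unadjusted Langevin sampler: a Langevin step whose
    prior score is approximated from a denoiser, with (D_eps(x) - x) / eps
    proportional to the prior's score (Tweedie's formula)."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    samples = []
    for _ in range(num_samples):
        score_prior = alpha * (denoise(x) - x) / eps   # Tweedie-based score
        x = (x + delta * grad_loglik(x) + delta * score_prior
             + np.sqrt(2 * delta) * rng.standard_normal(x.shape))
        samples.append(x.copy())
    return samples
```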
arXiv Detail & Related papers (2021-03-08T12:46:53Z)
- TFPnP: Tuning-free Plug-and-Play Proximal Algorithm with Applications to Inverse Imaging Problems [22.239477171296056]
Plug-and-Play (PnP) is a non-convex optimization framework that combines proximal algorithms, for example ADMM, with advanced denoising priors.
We discuss several practical considerations of different denoisers, which, together with our learned tuning strategies, yield state-of-the-art results.
arXiv Detail & Related papers (2020-11-18T14:19:30Z)
- Approximation Algorithms for Sparse Principal Component Analysis [57.5357874512594]
Principal component analysis (PCA) is a widely used dimension reduction technique in machine learning and statistics.
Various approaches to obtain sparse principal direction loadings have been proposed, which are termed Sparse Principal Component Analysis.
We present thresholding as a provably accurate, polynomial-time approximation algorithm for the SPCA problem; a simple sketch follows.
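A simple illustration of the thresholding idea for SPCA (a generic sketch under this interpretation, not necessarily the paper's exact procedure):

```python
import numpy as np

def thresholded_pca(A, k):
    """Thresholding heuristic for sparse PCA: take the leading eigenvector
    of the covariance matrix A, keep its k largest-magnitude entries,
    zero the rest, and renormalize."""
    _, vecs = np.linalg.eigh(A)          # eigh returns ascending eigenvalues
    v = vecs[:, -1].copy()               # leading principal direction
    idx = np.argsort(np.abs(v))[:-k]     # indices of the n - k smallest entries
    v[idx] = 0.0                         # threshold to a k-sparse support
    return v / np.linalg.norm(v)         # renormalize to unit length
```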
arXiv Detail & Related papers (2020-06-23T04:25:36Z)
- Communication-Efficient Distributed Stochastic AUC Maximization with Deep Neural Networks [50.42141893913188]
We study a distributed stochastic algorithm for large-scale AUC maximization with a deep neural network as the predictive model.
Our method requires far fewer communication rounds than naive parallel methods while retaining theoretical guarantees.
Our experiments on several benchmark datasets show the effectiveness of our method and also confirm our theory.
arXiv Detail & Related papers (2020-05-05T18:08:23Z)