Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse
Problems
- URL: http://arxiv.org/abs/2306.03466v1
- Date: Tue, 6 Jun 2023 07:36:47 GMT
- Title: Convergent Bregman Plug-and-Play Image Restoration for Poisson Inverse
Problems
- Authors: Samuel Hurault, Ulugbek Kamilov, Arthur Leclaire, Nicolas Papadakis
- Abstract summary: Plug-and-Play (PnP) methods are efficient iterative algorithms for solving ill-posed image inverse problems. We propose two PnP algorithms based on the Bregman Score Denoiser for solving Poisson inverse problems.
- Score: 8.673558396669806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Plug-and-Play (PnP) methods are efficient iterative algorithms for solving
ill-posed image inverse problems. PnP methods are obtained by using deep
Gaussian denoisers instead of the proximal operator or the gradient-descent
step within proximal algorithms. Current PnP schemes rely on data-fidelity
terms that have either Lipschitz gradients or closed-form proximal operators,
which is not applicable to Poisson inverse problems. Based on the observation
that the Gaussian noise is not the adequate noise model in this setting, we
propose to generalize PnP using the Bregman Proximal Gradient (BPG) method. BPG
replaces the Euclidean distance with a Bregman divergence that can better
capture the smoothness properties of the problem. We introduce the Bregman
Score Denoiser specifically parametrized and trained for the new Bregman
geometry and prove that it corresponds to the proximal operator of a nonconvex
potential. We propose two PnP algorithms based on the Bregman Score Denoiser
for solving Poisson inverse problems. Extending the convergence results of BPG
in the nonconvex settings, we show that the proposed methods converge,
targeting stationary points of an explicit global functional. Experimental
evaluations conducted on various Poisson inverse problems validate the
convergence results and showcase effective restoration performance.
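As a concrete illustration, here is a minimal sketch (Python/NumPy) of one plausible Bregman PnP loop for a Poisson likelihood. It assumes the Burg entropy kernel $h(x) = -\sum_i \log x_i$, a forward matrix A, a step size gamma, and a generic denoiser callable standing in for the Bregman Score Denoiser; none of these specifics are fixed by the abstract.

    import numpy as np

    def poisson_nll_grad(x, A, y, eps=1e-8):
        # Gradient of the Poisson negative log-likelihood
        # f(x) = sum_i (Ax)_i - y_i * log((Ax)_i).
        Ax = A @ x
        return A.T @ (1.0 - y / (Ax + eps))

    def bregman_pnp(x0, A, y, denoiser, gamma=0.5, n_iter=200):
        # With h(x) = -sum(log x), grad h(x) = -1/x, so the mirror step
        # grad h(z) = grad h(x) - gamma * grad f(x) has the closed form
        # z = x / (1 + gamma * x * grad f(x)), elementwise.
        x = np.clip(x0, 1e-8, None)
        for _ in range(n_iter):
            g = poisson_nll_grad(x, A, y)
            z = np.clip(x / (1.0 + gamma * x * g), 1e-8, None)  # stay in dom(h)
            x = denoiser(z)  # stand-in for the Bregman Score Denoiser
        return x

    # Toy usage with an identity "denoiser" stub:
    # A = np.abs(np.random.randn(50, 100)); x_true = np.random.rand(100) + 0.1
    # y = np.random.poisson(A @ x_true).astype(float)
    # x_hat = bregman_pnp(np.ones(100), A, y, denoiser=lambda z: z)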
Related papers
- Efficient Learning of POMDPs with Known Observation Model in Average-Reward Setting [56.92178753201331]
We propose the Observation-Aware Spectral (OAS) estimation technique, which enables the POMDP parameters to be learned from samples collected using a belief-based policy.
We show the consistency of the OAS procedure, and we prove a regret guarantee of order $\mathcal{O}(\sqrt{T \log(T)})$ for the proposed OAS-UCRL algorithm.
arXiv Detail & Related papers (2024-10-02T08:46:34Z)
- Nonconvex Stochastic Bregman Proximal Gradient Method for Nonconvex Composite Problems [9.202586157819693]
Stochastic gradient methods for nonconvex composite objective functions typically rely on the Lipschitz smoothness of the differentiable part.
We propose a stochastic Bregman proximal gradient method that handles non-Lipschitz gradients in nonconvex objectives; the relative-smoothness condition it rests on is recalled below.
We show it achieves optimal robustness in terms of step-size selection sensitivity.
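For reference, the standard relative-smoothness (NoLips) condition that replaces Lipschitz smoothness in such Bregman methods reads

$$ f(x) \le f(y) + \langle \nabla f(y), x - y \rangle + L \, D_h(x, y), \qquad D_h(x, y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle, $$

which reduces to ordinary $L$-smoothness when $h(x) = \tfrac{1}{2}\|x\|^2$.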
arXiv Detail & Related papers (2023-06-26T08:54:46Z)
- Graph Signal Sampling for Inductive One-Bit Matrix Completion: a Closed-form Solution [112.3443939502313]
We propose a unified graph signal sampling framework which enjoys the benefits of graph signal analysis and processing.
The key idea is to transform each user's ratings on the items to a function (signal) on the vertices of an item-item graph.
For the online setting, we develop a Bayesian extension, BGS-IMC, which considers continuous random Gaussian noise in the graph Fourier domain.
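A minimal sketch of this change of basis (illustrative only; the adjacency, ratings, and shapes below are made up):

    import numpy as np

    # A user's ratings become a signal on an item-item graph; the graph
    # Fourier transform is the change of basis given by the eigenvectors
    # of the graph Laplacian L = U diag(evals) U^T.
    W = np.array([[0, 1, 0],
                  [1, 0, 1],
                  [0, 1, 0]], dtype=float)      # toy item-item adjacency
    L = np.diag(W.sum(axis=1)) - W              # combinatorial Laplacian
    evals, U = np.linalg.eigh(L)

    ratings = np.array([5.0, 0.0, 3.0])         # one user's (sparse) ratings
    spectrum = U.T @ ratings                    # graph Fourier coefficients
    recovered = U @ spectrum                    # inverse transform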
arXiv Detail & Related papers (2023-02-08T08:17:43Z)
- A relaxed proximal gradient descent algorithm for convergent plug-and-play with proximal denoiser [6.2484576862659065]
This paper presents a new convergent Plug-and-Play (PnP) proximal gradient descent algorithm.
The algorithm converges for a wider range of regularization parameters, thus allowing more accurate restoration of an image.
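A generic relaxed PnP proximal gradient loop looks like the sketch below; the relaxation parameter alpha and the update form are illustrative, not necessarily the paper's exact scheme.

    import numpy as np

    def relaxed_pnp_pgd(x0, grad_f, denoiser, gamma=1.0, alpha=0.5, n_iter=100):
        # Average the plain PnP-PGD update with the current iterate;
        # alpha = 1 recovers the unrelaxed scheme.
        x = x0.copy()
        for _ in range(n_iter):
            z = denoiser(x - gamma * grad_f(x))   # standard PnP-PGD step
            x = (1.0 - alpha) * x + alpha * z     # relaxation
        return x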
arXiv Detail & Related papers (2023-01-31T16:11:47Z)
- Regret Bounds for Expected Improvement Algorithms in Gaussian Process Bandit Optimization [63.8557841188626]
The expected improvement (EI) algorithm is one of the most popular strategies for optimization under uncertainty.
We propose a variant of EI with a standard incumbent defined via the GP predictive mean.
We show that our algorithm converges, and achieves a cumulative regret bound of $\mathcal{O}(\gamma_T \sqrt{T})$.
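For context, the closed-form EI acquisition under a Gaussian posterior $\mathcal{N}(\mu(x), \sigma^2(x))$ is sketched below, with the incumbent taken as a predictive-mean value as in this variant (the maximization convention and the use of SciPy are assumptions):

    import numpy as np
    from scipy.stats import norm

    def expected_improvement(mu, sigma, incumbent):
        # EI(x) = (mu - incumbent) * Phi(z) + sigma * phi(z),
        # with z = (mu - incumbent) / sigma, for a maximization problem.
        sigma = np.maximum(sigma, 1e-12)
        z = (mu - incumbent) / sigma
        return (mu - incumbent) * norm.cdf(z) + sigma * norm.pdf(z)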
arXiv Detail & Related papers (2022-03-15T13:17:53Z)
- Bregman Plug-and-Play Priors [12.834024732756465]
We present a theoretical convergence result for PnP-BPGM and demonstrate our algorithm on general linear inverse problems.
We also present a theoretical convergence result for RED-BPGM and demonstrate it on linear inverse problems.
arXiv Detail & Related papers (2022-02-04T21:00:55Z)
- Proximal denoiser for convergent plug-and-play optimization with nonconvex regularization [7.0226402509856225]
Plug-and-Play (PnP) methods solve ill-posed inverse problems through proximal algorithms by replacing a proximal operator with a denoising neural network.
We show that this denoiser actually corresponds to a gradient step on an explicit functional.
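One common way to realize such a denoiser is the gradient-step construction $D = \mathrm{Id} - \nabla g$ for a scalar potential $g$; the toy quadratic potential below is purely illustrative (in practice $g$ is a learned, possibly nonconvex network):

    import numpy as np

    def gradient_step_denoiser(x, lam=0.1):
        # D(x) = x - grad g(x) with the toy potential g(x) = (lam/2)*||x||^2,
        # so grad g(x) = lam * x and D simply shrinks x toward zero.
        return x - lam * x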
arXiv Detail & Related papers (2022-01-31T14:05:20Z)
- Misspecified Gaussian Process Bandit Optimization [59.30399661155574]
Kernelized bandit algorithms have shown strong empirical and theoretical performance for this problem.
We introduce a misspecified kernelized bandit setting where the unknown function can be $\epsilon$-uniformly approximated by a function with a bounded norm in some Reproducing Kernel Hilbert Space (RKHS).
We show that our algorithm achieves optimal dependence on $\epsilon$ with no prior knowledge of misspecification.
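Spelled out, $\epsilon$-uniform approximability here means (in its standard formulation, with norm bound $B$ an assumed symbol):

$$ \exists\, \tilde f \in \mathcal{H}_k \ \text{ with } \ \|\tilde f\|_{\mathcal{H}_k} \le B \ \text{ and } \ \sup_x |f(x) - \tilde f(x)| \le \epsilon. $$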
arXiv Detail & Related papers (2021-11-09T09:00:02Z)
- Bayesian imaging using Plug & Play priors: when Langevin meets Tweedie [13.476505672245603]
This paper develops theory, methods, and provably convergent algorithms for performing Bayesian inference with Plug & Play priors.
We introduce two algorithms: 1) PnP-ULA (Plug & Play Unadjusted Langevin Algorithm) for Monte Carlo sampling and MMSE inference; and 2) PnP-SGD (Plug & Play Stochastic Gradient Descent) for MAP inference.
The algorithms are demonstrated on several problems such as image deblurring, inpainting, and denoising, where they are used for point estimation as well as for uncertainty visualisation and quantification.
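A minimal sketch of the PnP-ULA idea, using Tweedie's identity to read the prior score off an MMSE Gaussian denoiser (the step size, the denoiser noise variance eps2, and the grad_loglik/denoiser callables are assumptions):

    import numpy as np

    def pnp_ula(x0, grad_loglik, denoiser, eps2=0.01, delta=1e-4,
                n_iter=1000, seed=0):
        # Tweedie: score of the smoothed prior ~ (D(x) - x) / eps2, where
        # eps2 is the noise variance the Gaussian denoiser D was trained for.
        rng = np.random.default_rng(seed)
        x = x0.copy()
        samples = []
        for _ in range(n_iter):
            score = grad_loglik(x) + (denoiser(x) - x) / eps2
            x = x + delta * score + np.sqrt(2.0 * delta) * rng.standard_normal(x.shape)
            samples.append(x.copy())
        return np.stack(samples)  # e.g., average the samples for an MMSE estimate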
arXiv Detail & Related papers (2021-03-08T12:46:53Z)
- Parallel Stochastic Mirror Descent for MDPs [72.75921150912556]
We consider the problem of learning the optimal policy for infinite-horizon Markov decision processes (MDPs).
A variant of Stochastic Mirror Descent is proposed for convex programming problems with Lipschitz-continuous functionals.
We analyze this algorithm in a general case and obtain an estimate of the convergence rate that does not accumulate errors during the operation of the method.
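For reference, one mirror descent step on the probability simplex with the negative-entropy mirror map is the exponentiated-gradient update (a generic illustration, not the paper's parallel variant):

    import numpy as np

    def entropic_md_step(p, grad, step=0.1):
        # p_new proportional to p * exp(-step * grad): the closed-form mirror
        # descent step for the negative-entropy mirror map on the simplex.
        w = p * np.exp(-step * grad)
        return w / w.sum()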
arXiv Detail & Related papers (2021-02-27T19:28:39Z)
- Private Stochastic Non-Convex Optimization: Adaptive Algorithms and Tighter Generalization Bounds [72.63031036770425]
We propose differentially private (DP) algorithms for stochastic non-convex optimization.
We demonstrate the empirical advantages over standard gradient methods on two popular deep learning models.
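A generic DP-SGD step for intuition (the clipping threshold, noise multiplier, and per-example-gradient interface are illustrative; the paper's adaptive algorithms differ):

    import numpy as np

    def dp_sgd_step(params, per_example_grads, lr=0.1, clip=1.0,
                    noise_mult=1.0, rng=None):
        # Clip each per-example gradient, average, add calibrated Gaussian
        # noise, then take a gradient step.
        rng = rng or np.random.default_rng(0)
        clipped = [g * min(1.0, clip / (np.linalg.norm(g) + 1e-12))
                   for g in per_example_grads]
        noise = rng.normal(0.0, noise_mult * clip / len(clipped),
                           size=params.shape)
        return params - lr * (np.mean(clipped, axis=0) + noise)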
arXiv Detail & Related papers (2020-06-24T06:01:24Z)