On the Convergence Rate of Projected Gradient Descent for a
Back-Projection based Objective
- URL: http://arxiv.org/abs/2005.00959v3
- Date: Sun, 8 Aug 2021 12:40:58 GMT
- Title: On the Convergence Rate of Projected Gradient Descent for a
Back-Projection based Objective
- Authors: Tom Tirer, Raja Giryes
- Abstract summary: We consider a back-projection based fidelity term as an alternative to the common least squares (LS) term.
We show that using the BP term, rather than the LS term, requires fewer iterations of optimization algorithms.
- Score: 58.33065918353532
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Ill-posed linear inverse problems appear in many scientific setups, and are
typically addressed by solving optimization problems, which are composed of
data fidelity and prior terms. Recently, several works have considered a
back-projection (BP) based fidelity term as an alternative to the common least
squares (LS), and demonstrated excellent results for popular inverse problems.
These works have also empirically shown that using the BP term, rather than the
LS term, requires fewer iterations of optimization algorithms. In this paper,
we examine the convergence rate of the projected gradient descent (PGD)
algorithm for the BP objective. Our analysis allows us to identify an inherent
source of its faster convergence compared to using the LS objective, while
making only mild assumptions. We also analyze the more general proximal
gradient method under a relaxed contraction condition on the proximal mapping
of the prior. This analysis further highlights the advantage of BP when the
linear measurement operator is badly conditioned. Numerical experiments with
both $\ell_1$-norm and GAN-based priors corroborate our theoretical results.
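The abstract does not restate the BP fidelity term, so the following is a minimal, hypothetical sketch (not the authors' code): it assumes the BP term has the form $\frac{1}{2}\|A^{\dagger}(Ax-y)\|_2^2$ with $A^{\dagger}$ the Moore-Penrose pseudoinverse, contrasts it with the LS term $\frac{1}{2}\|Ax-y\|_2^2$ inside plain PGD, and uses a simple nonnegativity projection as a stand-in for the $\ell_1$ or GAN-based priors mentioned above. The helper names (`pgd`, `proj_nonneg`) are illustrative only.

```python
import numpy as np

def pgd(A, y, proj, fidelity="bp", step=1.0, iters=200, x0=None):
    """Projected gradient descent with an LS or BP fidelity term (illustrative sketch).

    For 'ls' the gradient of 0.5*||Ax - y||^2 is A.T @ (A @ x - y).
    For 'bp' the gradient of 0.5*||pinv(A) @ (A @ x - y)||^2 reduces to
    pinv(A) @ (A @ x - y) when A has full row rank.
    `proj` is any projection onto the prior's constraint set.
    """
    x = np.zeros(A.shape[1]) if x0 is None else x0.copy()
    A_pinv = np.linalg.pinv(A)
    for _ in range(iters):
        r = A @ x - y                                     # measurement residual
        g = A_pinv @ r if fidelity == "bp" else A.T @ r   # fidelity-term gradient
        x = proj(x - step * g)                            # gradient step + projection
    return x

# Toy usage with a nonnegativity prior (projection = elementwise clipping).
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 50))                # underdetermined, ill-posed operator
x_true = np.maximum(rng.standard_normal(50), 0)
y = A @ x_true
proj_nonneg = lambda z: np.maximum(z, 0.0)
x_bp = pgd(A, y, proj_nonneg, fidelity="bp", step=0.9)
x_ls = pgd(A, y, proj_nonneg, fidelity="ls", step=1.0 / np.linalg.norm(A, 2) ** 2)
```

For full row rank $A$, the BP gradient equals $A^{\top}(AA^{\top})^{-1}(Ax-y)$, i.e. the LS residual preconditioned by $(AA^{\top})^{-1}$, which is consistent with the abstract's observation that the gap between BP and LS widens when the measurement operator is badly conditioned. Replacing the projection step with a proximal mapping gives the proximal gradient variant discussed in the abstract.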
Related papers
- Generalized Eigenvalue Problems with Generative Priors [23.711322973038897]
Generalized eigenvalue problems (GEPs) find applications in various fields of science and engineering.
We study GEPs under generative priors, assuming that the underlying leading generalized eigenvector lies within the range of a Lipschitz continuous generative model.
We propose an iterative algorithm called the Projected Rayleigh Flow Method (PRFM) to approximate the optimal solution.
arXiv Detail & Related papers (2024-11-02T18:16:07Z) - Differentially Private Optimization with Sparse Gradients [60.853074897282625]
We study differentially private (DP) optimization problems under sparsity of individual gradients.
Building on this, we obtain pure- and approximate-DP algorithms with almost optimal rates for convex optimization with sparse gradients.
arXiv Detail & Related papers (2024-04-16T20:01:10Z) - Stable Nonconvex-Nonconcave Training via Linear Interpolation [51.668052890249726]
This paper presents a theoretical analysis of linear interpolation as a principled method for stabilizing (large-scale) neural network training.
We argue that instabilities in the optimization process are often caused by the nonmonotonicity of the loss landscape and show how linear interpolation can help by leveraging the theory of nonexpansive operators.
arXiv Detail & Related papers (2023-10-20T12:45:12Z) - Asymptotically Unbiased Instance-wise Regularized Partial AUC
Optimization: Theory and Algorithm [101.44676036551537]
One-way Partial AUC (OPAUC) and Two-way Partial AUC (TPAUC) measure the average performance of a binary classifier over restricted regions of the ROC curve.
Most existing methods can only optimize PAUC approximately, leading to inevitable and uncontrollable biases.
We present a simpler reformulation of the PAUC problem via distributionally robust optimization.
arXiv Detail & Related papers (2022-10-08T08:26:22Z) - A general sample complexity analysis of vanilla policy gradient [101.16957584135767]
Policy gradient (PG) is one of the most popular methods for solving reinforcement learning (RL) problems.
We provide a general sample complexity analysis of the "vanilla" PG method.
arXiv Detail & Related papers (2021-07-23T19:38:17Z) - The Role of Momentum Parameters in the Optimal Convergence of Adaptive
Polyak's Heavy-ball Methods [12.93796690939018]
We prove that the adaptive Polyak's Heavy-ball (HB) method attains an optimal individual convergence rate of $O(\frac{1}{\sqrt{t}})$.
Our new analysis shows how the HB momentum and its time-varying weight help us to achieve the acceleration in convex optimization.
arXiv Detail & Related papers (2021-02-15T02:57:14Z) - Zeroth-Order Hybrid Gradient Descent: Towards A Principled Black-Box
Optimization Framework [100.36569795440889]
This work concerns zeroth-order (ZO) optimization, which does not require first-order gradient information.
We show that with a graceful design in coordinate importance sampling, the proposed ZO optimization method is efficient both in terms of complexity and function query cost.
arXiv Detail & Related papers (2020-12-21T17:29:58Z) - Wasserstein-based Projections with Applications to Inverse Problems [25.517805029337254]
Inverse problems consist of recovering a signal from a collection of noisy measurements.
Recent Plug-and-Play works propose replacing the analytic regularization operator in optimization methods with a data-driven denoiser.
We present a new algorithm that takes samples from the manifold of true data as input and outputs an approximation of the projection operator onto this manifold.
arXiv Detail & Related papers (2020-08-05T15:58:55Z)