Inexact and Stochastic Generalized Conditional Gradient with Augmented
Lagrangian and Proximal Step
- URL: http://arxiv.org/abs/2005.05158v1
- Date: Mon, 11 May 2020 14:52:16 GMT
- Title: Inexact and Stochastic Generalized Conditional Gradient with Augmented
Lagrangian and Proximal Step
- Authors: Antonio Silveti-Falls, Cesare Molinari, Jalal Fadili
- Abstract summary: We analyze inexact and stochastic versions of the CGALP algorithm developed in the authors' previous paper.
This allows one to compute some gradients, proximal terms, and/or linear minimization oracles in an inexact fashion.
We show convergence of the Lagrangian to an optimum and asymptotic feasibility of the affine constraint.
- Score: 2.0196229393131726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper we propose and analyze inexact and stochastic versions of the
CGALP algorithm developed in the authors' previous paper, which we denote
ICGALP, that allows for errors in the computation of several important
quantities. In particular this allows one to compute some gradients, proximal
terms, and/or linear minimization oracles in an inexact fashion that
facilitates the practical application of the algorithm to computationally
intensive settings, e.g. in high (or possibly infinite) dimensional Hilbert
spaces commonly found in machine learning problems. The algorithm is able to
solve composite minimization problems involving the sum of three convex proper
lower-semicontinuous functions subject to an affine constraint of the form
$Ax=b$ for some bounded linear operator $A$. Only one of the functions in the
objective is assumed to be differentiable, the other two are assumed to have an
accessible prox operator and a linear minimization oracle. As main results, we
show convergence of the Lagrangian to an optimum and asymptotic feasibility of
the affine constraint as well as weak convergence of the dual variable to a
solution of the dual problem, all in an almost sure sense. Almost sure
convergence rates, both pointwise and ergodic, are given for the Lagrangian
values and the feasibility gap. Numerical experiments verifying the predicted
rates of convergence are shown as well.
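To make the problem template concrete, the following is a minimal Python sketch of one CGALP/ICGALP-style iteration for $\min_x f(x) + g(x) + h(x)$ subject to $Ax = b$, with $f$ smooth, $g$ prox-friendly, and $h$ accessed via a linear minimization oracle. The oracle handles, the penalty parameter, and the step-size and smoothing schedules below are illustrative assumptions only; the paper specifies precise conditions on these sequences, and in ICGALP every oracle call may additionally be inexact or stochastic.

```python
import numpy as np

def icgalp_sketch(grad_f, prox_g, lmo_h, A, b, x0, mu0, iters=1000, rho=1.0):
    """Schematic generalized conditional gradient with augmented Lagrangian
    and proximal step. grad_f(x) returns a (possibly inexact) gradient of f,
    prox_g(x, beta) a (possibly inexact) prox of beta*g, and lmo_h(z) a
    (possibly inexact) minimizer of <z, s> + h(s) over the compact domain of h.
    Step-size/smoothing schedules are placeholders, not the paper's choices."""
    x, mu = x0.copy(), mu0.copy()
    for k in range(iters):
        gamma = 1.0 / (k + 1.0) ** 0.75   # primal step size (illustrative)
        beta = 1.0 / (k + 1.0) ** 0.25    # smoothing parameter for g (illustrative)
        y = prox_g(x, beta)
        # gradient of the smooth surrogate: f + Moreau envelope of g
        # + augmented Lagrangian terms for the constraint Ax = b
        z = grad_f(x) + (x - y) / beta + A.T @ (mu + rho * (A @ x - b))
        s = lmo_h(z)                      # generalized linear minimization oracle
        x = x + gamma * (s - x)           # conditional gradient (convex combination) step
        mu = mu + gamma * (A @ x - b)     # dual ascent on the affine constraint
    return x, mu
```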
Related papers
- Obtaining Lower Query Complexities through Lightweight Zeroth-Order Proximal Gradient Algorithms [65.42376001308064]
We propose two variance reduced ZO estimators for complex gradient problems.
We improve the state-of-the-art function complexities from $\mathcal{O}\left(\min\left\{\frac{d n^{1/2}}{\epsilon^2}, \frac{d}{\epsilon^3}\right\}\right)$ to $\tilde{\mathcal{O}}\left(\frac{d}{\epsilon^2}\right)$.
arXiv Detail & Related papers (2024-10-03T15:04:01Z) - Stochastic Optimization for Non-convex Problem with Inexact Hessian
Matrix, Gradient, and Function [99.31457740916815]
Trust-region (TR) and adaptive regularization using cubics (ARC) have proven to have some very appealing theoretical properties.
We show that TR and ARC methods can simultaneously accommodate inexact computations of the Hessian, gradient, and function values.
arXiv Detail & Related papers (2023-10-18T10:29:58Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - High-Probability Bounds for Stochastic Optimization and Variational
Inequalities: the Case of Unbounded Variance [59.211456992422136]
We propose algorithms with high-probability convergence results under less restrictive assumptions.
These results justify the usage of the considered methods for solving problems that do not fit standard functional classes in optimization.
arXiv Detail & Related papers (2023-02-02T10:37:23Z) - On efficient algorithms for computing near-best polynomial
approximations to high-dimensional, Hilbert-valued functions from limited
samples [1.0650780147044159]
Sparse approximation has become indispensable for approximating smooth, high-dimensional functions from limited samples.
This paper introduces algorithms and theoretical guarantees that assert exponential or algebraic rates of convergence, along with robustness to sampling, algorithmic and physical discretization errors.
arXiv Detail & Related papers (2022-03-25T20:56:07Z) - Sharper Bounds for Proximal Gradient Algorithms with Errors [6.901159341430919]
We analyse the convergence of the proximal gradient algorithm for convex composite problems in the presence of gradient and proximal computational inaccuracies.
We derive new, tighter deterministic and probabilistic bounds, which we verify on a simulated model predictive control (MPC) problem and a synthetic (LASSO) problem, both solved on a reduced-precision machine.
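As a hedged illustration of the kind of inaccuracy studied there, the sketch below runs the proximal gradient (ISTA) iteration for a LASSO problem with the gradient evaluated in reduced precision; the precision, step size, and problem setup are assumptions for the example, not the paper's experiments.

```python
import numpy as np

def soft_threshold(v, t):
    # prox of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def inexact_prox_grad_lasso(A, y, lam, iters=500, low=np.float16):
    """Proximal gradient for 0.5*||Ax - y||^2 + lam*||x||_1 where the gradient
    is computed in reduced precision to mimic computational inaccuracies."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    A_lp, y_lp = A.astype(low), y.astype(low)
    for _ in range(iters):
        grad = (A_lp.T @ (A_lp @ x.astype(low) - y_lp)).astype(np.float64)
        x = soft_threshold(x - grad / L, lam / L)
    return x
```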
arXiv Detail & Related papers (2022-03-04T09:27:08Z) - A Stochastic Composite Augmented Lagrangian Method For Reinforcement
Learning [9.204659134755795]
We consider the linear programming (LP) formulation for deep reinforcement learning.
The augmented Lagrangian method suffers from the double-sampling obstacle when solving this LP.
A deep parameterized augmented Lagrangian method is proposed.
arXiv Detail & Related papers (2021-05-20T13:08:06Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
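For the policy-evaluation application mentioned above, a minimal sketch of TD(0) with linear function approximation is given below; the feature arrays, discount, and step size are assumed inputs for illustration, and this is not the estimator the paper constructs.

```python
import numpy as np

def td0_linear(phi, rewards, phi_next, gamma=0.99, alpha=0.05, passes=10):
    """TD(0) policy evaluation with V(s) ~ phi(s)^T w, i.e. searching over the
    low-dimensional subspace spanned by the features (the projected
    fixed-point setting). phi and phi_next are (T, d) arrays of per-transition
    features; rewards has length T."""
    w = np.zeros(phi.shape[1])
    for _ in range(passes):
        for f, r, f_next in zip(phi, rewards, phi_next):
            td_error = r + gamma * f_next @ w - f @ w
            w += alpha * td_error * f      # stochastic approximation update
    return w
```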
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Conditional gradient methods for stochastically constrained convex
minimization [54.53786593679331]
We propose two novel conditional gradient-based methods for solving structured convex optimization problems.
The most important feature of our framework is that only a subset of the constraints is processed at each iteration.
Our algorithms rely on variance reduction and smoothing used in conjunction with conditional gradient steps, and are accompanied by rigorous convergence guarantees.
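A rough illustration of processing one sampled constraint per conditional gradient step follows (here over the probability simplex, with a smoothed squared-hinge penalty; the feasible set, penalty, and step sizes are assumptions, not the algorithms of the paper).

```python
import numpy as np

def lmo_simplex(grad):
    # linear minimization over the probability simplex: pick the best vertex
    s = np.zeros_like(grad)
    s[np.argmin(grad)] = 1.0
    return s

def sampled_constraint_cg(grad_f, constraints, x0, iters=1000, beta=10.0, seed=0):
    """Each iteration samples one linear constraint a^T x <= b, adds the
    gradient of the smooth penalty beta/2 * max(0, a^T x - b)^2, and takes a
    Frank-Wolfe (conditional gradient) step."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for k in range(iters):
        a, b = constraints[rng.integers(len(constraints))]
        pen_grad = beta * max(0.0, a @ x - b) * a
        s = lmo_simplex(grad_f(x) + pen_grad)
        step = 2.0 / (k + 2.0)
        x = (1.0 - step) * x + step * s
    return x
```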
arXiv Detail & Related papers (2020-07-07T21:26:35Z) - Provably Convergent Working Set Algorithm for Non-Convex Regularized
Regression [0.0]
This paper proposes a working set algorithm for non-convex regularizers with convergence guarantees.
Our results demonstrate a high computational gain compared to the full-problem solver, for both block-coordinate and gradient solvers.
arXiv Detail & Related papers (2020-06-24T07:40:31Z) - Efficient algorithms for multivariate shape-constrained convex
regression problems [9.281671380673306]
We prove that the least squares estimator is computable via solving a constrained convex quadratic programming (QP) problem with $(n+1)d$ variables and at least $n(n-1)$ linear inequality constraints.
For solving the generally very large-scale convex QP, we design two efficient algorithms: one is the symmetric Gauss-Seidel based alternating direction method of multipliers (sGS-ADMM), and the other is the proximal augmented Lagrangian method (pALM) with the subproblems solved by the semismooth Newton method.
arXiv Detail & Related papers (2020-02-26T11:18:43Z)
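For reference, a minimal sketch of the least-squares convex regression QP behind that constraint count, with fitted values $\theta_i$ and subgradients $\xi_i$ as variables (shape constraints add further inequalities):

$$
\min_{\theta \in \mathbb{R}^{n},\; \xi_1,\dots,\xi_n \in \mathbb{R}^{d}} \ \frac{1}{2}\sum_{i=1}^{n}\left(\theta_i - y_i\right)^2
\quad \text{s.t.} \quad
\theta_j \ \ge\ \theta_i + \langle \xi_i,\, x_j - x_i \rangle \quad \text{for all } i \neq j,
$$

which already contains $n(n-1)$ linear inequality constraints.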