A Retrospective Approximation Approach for Smooth Stochastic
Optimization
- URL: http://arxiv.org/abs/2103.04392v3
- Date: Wed, 6 Mar 2024 23:17:04 GMT
- Title: A Retrospective Approximation Approach for Smooth Stochastic
Optimization
- Authors: David Newton, Raghu Bollapragada, Raghu Pasupathy, Nung Kwan Yip
- Abstract summary: Stochastic Gradient (SG) is the de facto iterative technique to solve stochastic optimization (SO) problems with a smooth (non-convex) objective $f$ and a stochastic first-order oracle.
- Score: 0.2867517731896504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Stochastic Gradient (SG) is the de facto iterative technique to solve
stochastic optimization (SO) problems with a smooth (non-convex) objective $f$
and a stochastic first-order oracle. SG's attractiveness is due in part to its
simplicity of executing a single step along the negative subsampled gradient
direction to update the incumbent iterate. In this paper, we question SG's
choice of executing a single step as opposed to multiple steps between
subsample updates. Our investigation leads naturally to generalizing SG into
Retrospective Approximation (RA) where, during each iteration, a "deterministic
solver" executes possibly multiple steps on a subsampled deterministic problem
and stops when further solving is deemed unnecessary from the standpoint of
statistical efficiency. RA thus rigorizes what is appealing for implementation
-- during each iteration, "plug in" a solver, e.g., L-BFGS line search or
Newton-CG, as is, and solve only to the extent necessary. We develop a complete
theory using relative error of the observed gradients as the principal object,
demonstrating that almost sure and $L_1$ consistency of RA are preserved under
especially weak conditions when sample sizes are increased at appropriate
rates. We also characterize the iteration and oracle complexity (for linear and
sub-linear solvers) of RA, and identify a practical termination criterion
leading to optimal complexity rates. To subsume non-convex $f$, we present a
certain "random central limit theorem" that incorporates the effect of
curvature across all first-order critical points, demonstrating that the
asymptotic behavior is described by a certain mixture of normals. The message
from our numerical experiments is that the ability of RA to incorporate
existing second-order deterministic solvers in a strategic manner might be
important from the standpoint of dispensing with hyper-parameter tuning.
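
To make the RA template concrete, here is a minimal Python sketch of the outer loop described in the abstract. The oracle name make_subsampled_problem, the geometric sample-size schedule, the 1/sqrt(n) inner tolerance, and the choice of L-BFGS-B as the plug-in deterministic solver are all illustrative assumptions, not the paper's exact specification.

    import numpy as np
    from scipy.optimize import minimize

    def retrospective_approximation(x0, make_subsampled_problem,
                                    n0=32, growth=2.0, outer_iters=10):
        """Sketch of the RA outer loop: freeze a subsample, hand the resulting
        deterministic problem to a plug-in solver, stop the inner solve early,
        grow the sample size, and repeat."""
        x = np.asarray(x0, dtype=float)
        n = n0
        for _ in range(outer_iters):
            # make_subsampled_problem(n) is assumed to return callables
            # (f_k, g_k): the objective and gradient built from n fresh samples.
            f_k, g_k = make_subsampled_problem(n)
            # Inner tolerance tied to the sampling error (~ 1/sqrt(n)): solving
            # the subsampled problem more accurately than that is statistically
            # wasteful -- the "solve only to the extent necessary" idea.
            gtol_k = 1.0 / np.sqrt(n)
            res = minimize(f_k, x, jac=g_k, method="L-BFGS-B",
                           options={"gtol": gtol_k, "maxiter": 50})
            x = res.x
            n = int(np.ceil(growth * n))  # grow the sample size geometrically
        return x

If the sample size is held fixed and the inner solver takes a single gradient step, this collapses back to plain SG, which is the sense in which RA generalizes SG. The paper's actual termination rule is driven by the relative error of the observed gradients rather than the fixed 1/sqrt(n) heuristic used above.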
Related papers
- A Sample Efficient Alternating Minimization-based Algorithm For Robust Phase Retrieval [56.67706781191521]
In this work, we study a robust phase retrieval problem where the task is to recover an unknown signal.
Our proposed method avoids the need for a computationally expensive spectral initialization, using a simple gradient step while handling outliers.
arXiv Detail & Related papers (2024-09-07T06:37:23Z) - Accelerated stochastic approximation with state-dependent noise [7.4648480208501455]
We consider a class of smooth convex optimization problems under general assumptions on the quadratic noise in the gradient observation.
Such problems naturally arise in a variety of applications, in particular, in the well-known generalized linear regression problem in statistics.
We show that both SAGD and SGE, under appropriate conditions, achieve the optimal convergence rate.
arXiv Detail & Related papers (2023-07-04T06:06:10Z) - Constrained Optimization via Exact Augmented Lagrangian and Randomized
Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z) - Stochastic Mirror Descent for Large-Scale Sparse Recovery [13.500750042707407]
We discuss an application of Stochastic Approximation to statistical estimation of high-dimensional sparse parameters.
We show that the proposed algorithm attains the optimal convergence of the estimation error under weak assumptions on the regressor distribution.
arXiv Detail & Related papers (2022-10-23T23:23:23Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with
Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the Stochastic ExtraGradient (SEG) method with constant step size, along with variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - High Probability Complexity Bounds for Non-Smooth Stochastic Optimization with Heavy-Tailed Noise [51.31435087414348]
It is essential to theoretically guarantee that algorithms provide small objective residual with high probability.
Existing methods for non-smooth stochastic convex optimization have complexity bounds with unfavorable dependence on the confidence level.
We propose novel stepsize rules for two methods with gradient clipping.
arXiv Detail & Related papers (2021-06-10T17:54:21Z) - Stochastic Compositional Gradient Descent under Compositional
constraints [13.170519806372075]
We study constrained optimization problems where the objective and constraint functions are convex and expressed as compositions of functions.
The problem arises in fair classification/regression and in the design of queuing systems.
We prove that the proposed algorithm is guaranteed to find the optimal and feasible solution almost surely.
arXiv Detail & Related papers (2020-12-17T05:38:37Z) - Stochastic optimization with momentum: convergence, fluctuations, and
traps avoidance [0.0]
In this paper, a general stochastic optimization procedure is studied, unifying several variants of stochastic gradient descent such as, among others, the heavy ball method, the Stochastic Nesterov Accelerated Gradient (S-NAG), and the widely used Adam method.
The avoidance of traps is studied by viewing the algorithm as a noisy discretization of a non-autonomous ordinary differential equation.
arXiv Detail & Related papers (2020-12-07T19:14:49Z) - ROOT-SGD: Sharp Nonasymptotics and Near-Optimal Asymptotics in a Single Algorithm [71.13558000599839]
We study the problem of solving strongly convex and smooth unconstrained optimization problems using first-order algorithms.
We devise a novel algorithm, referred to as Recursive One-Over-T SGD (ROOT-SGD), based on an easily implementable recursive averaging of past stochastic gradients.
We prove that it simultaneously achieves state-of-the-art performance in both a finite-sample, nonasymptotic sense and an asymptotic sense.
arXiv Detail & Related papers (2020-08-28T14:46:56Z) - Balancing Rates and Variance via Adaptive Batch-Size for Stochastic
Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence with the fact that a constant step-size learns faster in finite time, albeit only up to an error.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively; a generic sketch of such an adaptive batch-size rule appears after this list.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
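
As a follow-up to the last related paper above, here is a generic sketch of an adaptive batch-size rule of the kind alluded to there: grow the minibatch when the gradient estimate is dominated by sampling noise, in the spirit of a norm test. The function name per_sample_grad, the doubling rule, and the thresholds are illustrative assumptions and are not taken from the cited paper.

    import numpy as np

    def adaptive_batch_sgd(x0, per_sample_grad, num_samples,
                           step=0.1, batch0=8, max_batch=1024, iters=200,
                           seed=0):
        """SGD whose minibatch size grows when the gradient signal is weak
        relative to its sampling variance."""
        rng = np.random.default_rng(seed)
        x = np.asarray(x0, dtype=float)
        batch = batch0
        for _ in range(iters):
            idx = rng.choice(num_samples, size=min(batch, num_samples),
                             replace=False)
            grads = np.stack([per_sample_grad(x, i) for i in idx])
            g = grads.mean(axis=0)
            # Norm-test-style check: if the estimated variance of the averaged
            # gradient exceeds its squared norm, grow the batch rather than
            # shrinking the step-size.
            noise = grads.var(axis=0).sum() / len(idx)
            if noise > g @ g and batch < max_batch:
                batch = min(2 * batch, max_batch)
            x = x - step * g
        return x

This mirrors the RA theme above: instead of damping the step-size to control noise, the sampling effort is increased so that a constant step-size remains productive.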