SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates
- URL: http://arxiv.org/abs/2211.08597v5
- Date: Tue, 20 Feb 2024 21:06:07 GMT
- Title: SketchySGD: Reliable Stochastic Optimization via Randomized Curvature Estimates
- Authors: Zachary Frangella, Pratik Rathore, Shipu Zhao, Madeleine Udell
- Abstract summary: SketchySGD improves upon existing stochastic gradient methods in machine learning by using randomized low-rank approximations to the subsampled Hessian.
We show theoretically that SketchySGD with a fixed stepsize converges linearly to a small ball around the optimum.
In the ill-conditioned setting we show SketchySGD converges at a faster rate than SGD for least-squares problems.
- Score: 19.420605210427635
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: SketchySGD improves upon existing stochastic gradient methods in machine
learning by using randomized low-rank approximations to the subsampled Hessian
and by introducing an automated stepsize that works well across a wide range of
convex machine learning problems. We show theoretically that SketchySGD with a
fixed stepsize converges linearly to a small ball around the optimum. Further,
in the ill-conditioned setting we show SketchySGD converges at a faster rate
than SGD for least-squares problems. We validate this improvement empirically
with ridge regression experiments on real data. Numerical experiments on both
ridge and logistic regression problems with dense and sparse data show that
SketchySGD equipped with its default hyperparameters can achieve comparable or
better results than popular stochastic gradient methods, even when they have
been tuned to yield their best performance. In particular, SketchySGD is able
to solve an ill-conditioned logistic regression problem with a data matrix that
takes more than $840$ GB of RAM to store, while its competitors, even when tuned,
are unable to make any progress. SketchySGD's ability to work out of the box
with its default hyperparameters and excel on ill-conditioned problems is an
advantage over other stochastic gradient methods, most of which require careful
hyperparameter tuning (especially of the learning rate) to obtain good
performance and degrade in the presence of ill-conditioning.
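To make the idea above concrete, here is a minimal NumPy sketch for ridge regression: build a randomized Nyström low-rank approximation of a subsampled Hessian, use it as a preconditioner for SGD, and choose the step size from an estimate of the preconditioned curvature. The rank, $\rho$, batch sizes, and the power-iteration step-size estimate below are illustrative stand-ins, not the paper's reference implementation or its default rules.

```python
import numpy as np

def nystrom_approx(hvp, d, rank, rng):
    """Randomized Nystrom approximation of a PSD matrix available only through
    matrix products: returns U, lam with H ~= U @ diag(lam) @ U.T."""
    Omega, _ = np.linalg.qr(rng.standard_normal((d, rank)))
    Y = hvp(Omega)                                  # H @ Omega in one pass
    nu = 1e-7 * np.linalg.norm(Y)                   # small shift for stability
    Yn = Y + nu * Omega
    C = np.linalg.cholesky(Omega.T @ Yn)
    B = np.linalg.solve(C, Yn.T).T                  # B = Yn @ C^{-T}
    U, s, _ = np.linalg.svd(B, full_matrices=False)
    return U, np.maximum(s**2 - nu, 0.0)

def apply_inv_precond(U, lam, rho, g):
    """Apply (U diag(lam) U^T + rho I)^{-1} to g without forming the matrix."""
    return U @ ((U.T @ g) * (1.0 / (lam + rho) - 1.0 / rho)) + g / rho

def sketchy_sgd_ridge(X, y, reg=1e-3, rank=10, batch=64, hess_batch=256,
                      update_every=200, iters=3000, rho=1e-3, seed=0):
    """Preconditioned SGD for ridge regression, in the spirit of SketchySGD."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for t in range(iters):
        if t % update_every == 0:
            # Low-rank Nystrom approximation of a subsampled Hessian.
            S = rng.choice(n, size=min(hess_batch, n), replace=False)
            hvp = lambda V, XS=X[S]: XS.T @ (XS @ V) / len(S) + reg * V
            U, lam = nystrom_approx(hvp, d, rank, rng)
            # Step size from the top eigenvalue of the preconditioned subsampled
            # Hessian, estimated by power iteration (a simplified stand-in for
            # the paper's automated step-size rule).
            v = rng.standard_normal(d)
            for _ in range(10):
                v = apply_inv_precond(U, lam, rho, hvp(v))
                v /= np.linalg.norm(v)
            eta = 0.5 / (v @ apply_inv_precond(U, lam, rho, hvp(v)))
        # Minibatch stochastic gradient of the ridge objective.
        B = rng.choice(n, size=batch, replace=False)
        g = X[B].T @ (X[B] @ w - y[B]) / batch + reg * w
        w -= eta * apply_inv_precond(U, lam, rho, g)   # preconditioned step
    return w

# Synthetic ill-conditioned example.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 100)) * np.logspace(0, -3, 100)
w_true = rng.standard_normal(100)
y = X @ w_true + 0.01 * rng.standard_normal(2000)
w_hat = sketchy_sgd_ridge(X, y)
print("relative error:", np.linalg.norm(w_hat - w_true) / np.linalg.norm(w_true))
```

When the Hessian spectrum decays quickly, a small-rank sketch captures most of the curvature, which is what makes a fixed, automatically chosen step size workable even on ill-conditioned problems.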
Related papers
- Max-affine regression via first-order methods [7.12511675782289]
The max-affine model ubiquitously arises in applications in signal processing and statistics.
We present a non-asymptotic convergence analysis of gradient descent (GD) and mini-batch stochastic gradient descent (SGD) for max-affine regression.
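As a toy illustration of the model and of mini-batch SGD on it (a generic sketch, not the paper's algorithm or analysis), one can fit a max-affine predictor $\max_k(a_k^\top x + b_k)$ by routing each sample's gradient to its active affine piece; the hyperparameters below are arbitrary.

```python
import numpy as np

def fit_max_affine(X, y, K=4, lr=0.1, batch=32, epochs=200, seed=0):
    """Fit y ~ max_k (A[k] @ x + b[k]) by mini-batch SGD on squared loss.
    Only the active (argmax) affine piece of each sample receives a gradient."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    A = 0.1 * rng.standard_normal((K, d))
    b = np.zeros(K)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(1, n // batch)):
            scores = X[idx] @ A.T + b              # (batch, K)
            k_star = scores.argmax(axis=1)         # active piece per sample
            resid = scores[np.arange(len(idx)), k_star] - y[idx]
            for k in range(K):
                mask = k_star == k
                if mask.any():
                    A[k] -= lr * (resid[mask][:, None] * X[idx][mask]).sum(0) / len(idx)
                    b[k] -= lr * resid[mask].sum() / len(idx)
    return A, b

# Toy example: data generated from a 3-piece max-affine function.
rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(1000, 2))
A_true = rng.standard_normal((3, 2)); b_true = rng.standard_normal(3)
y = (X @ A_true.T + b_true).max(axis=1) + 0.01 * rng.standard_normal(1000)
A_hat, b_hat = fit_max_affine(X, y, K=3)
pred = (X @ A_hat.T + b_hat).max(axis=1)
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```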
arXiv Detail & Related papers (2023-08-15T23:46:44Z)
- Constrained Optimization via Exact Augmented Lagrangian and Randomized Iterative Sketching [55.28394191394675]
We develop an adaptive inexact Newton method for equality-constrained nonlinear, nonconvex optimization problems.
We demonstrate the superior performance of our method on benchmark nonlinear problems, constrained logistic regression with data from LIBSVM, and a PDE-constrained problem.
arXiv Detail & Related papers (2023-05-28T06:33:37Z)
- Stochastic Inexact Augmented Lagrangian Method for Nonconvex Expectation Constrained Optimization [88.0031283949404]
Many real-world problems have complicated nonconvex functional constraints and use a large number of data points.
Our proposed method outperforms an existing method with the previously best-known result.
arXiv Detail & Related papers (2022-12-19T14:48:54Z)
- An adaptive Hessian approximated stochastic gradient MCMC method [12.93317525451798]
We present an adaptive Hessian approximated stochastic gradient MCMC method to incorporate local geometric information while sampling from the posterior.
We adopt a magnitude-based weight pruning method to enforce the sparsity of the network.
arXiv Detail & Related papers (2020-10-03T16:22:15Z)
- Balancing Rates and Variance via Adaptive Batch-Size for Stochastic Optimization Problems [120.21685755278509]
In this work, we seek to balance the fact that an attenuating step-size is required for exact convergence against the fact that a constant step-size learns faster, but only up to an error neighborhood.
Rather than fixing the minibatch size and the step-size at the outset, we propose to allow these parameters to evolve adaptively.
arXiv Detail & Related papers (2020-07-02T16:02:02Z)
- Bayesian Sparse learning with preconditioned stochastic gradient MCMC and its applications [5.660384137948734]
We show that the proposed algorithm can asymptotically converge to the correct distribution with a controllable bias under mild conditions.
arXiv Detail & Related papers (2020-06-29T20:57:20Z)
- Least Squares Regression with Markovian Data: Fundamental Limits and Algorithms [69.45237691598774]
We study the problem of least squares linear regression where the data-points are dependent and are sampled from a Markov chain.
We establish sharp information theoretic minimax lower bounds for this problem in terms of $\tau_{\mathsf{mix}}$.
We propose an algorithm based on experience replay--a popular reinforcement learning technique--that achieves a significantly better error rate.
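As a rough illustration of the experience-replay idea for correlated data (the buffer mechanics below are illustrative and do not reproduce the paper's exact algorithm or rates): store incoming pairs and take SGD steps on uniformly sampled past pairs, which decorrelates consecutive updates.

```python
import numpy as np

def sgd_er_least_squares(stream, d, lr=0.05, buffer_size=2000,
                         replay_batch=16, steps=20000, seed=0):
    """SGD with experience replay for streaming least squares: store incoming
    (x, y) pairs in a buffer and take gradient steps on uniformly sampled past
    pairs, which breaks the temporal correlation of a Markovian data stream."""
    rng = np.random.default_rng(seed)
    buf_x = np.zeros((buffer_size, d))
    buf_y = np.zeros(buffer_size)
    w = np.zeros(d)
    filled = 0
    for t in range(steps):
        x, y = next(stream)
        buf_x[t % buffer_size], buf_y[t % buffer_size] = x, y
        filled = min(filled + 1, buffer_size)
        idx = rng.integers(0, filled, size=min(replay_batch, filled))
        grad = buf_x[idx].T @ (buf_x[idx] @ w - buf_y[idx]) / len(idx)
        w -= lr * grad
    return w

def markov_stream(w_true, rho=0.95, noise=0.1, seed=1):
    """AR(1) covariates: consecutive x's are strongly correlated."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(w_true))
    while True:
        x = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(len(w_true))
        yield x, x @ w_true + noise * rng.standard_normal()

w_true = np.ones(10)
w_hat = sgd_er_least_squares(markov_stream(w_true), d=10)
print("parameter error:", np.linalg.norm(w_hat - w_true))
```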
arXiv Detail & Related papers (2020-06-16T04:26:50Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
- Stochastic Polyak Step-size for SGD: An Adaptive Learning Rate for Fast Convergence [30.393999722555154]
We propose a variant of the classical Polyak step-size (Polyak, 1987) commonly used in the subgradient method.
The proposed stochastic Polyak step-size (SPS) is an attractive choice for setting the learning rate for stochastic gradient descent.
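Roughly, the stochastic Polyak step-size scales each SGD step by the current minibatch loss divided by the squared minibatch gradient norm, with a cap. A minimal sketch for logistic regression follows; taking $0$ as the batch-loss lower bound and the constants $c$ and $\gamma_{\max}$ are illustrative choices, not the paper's recommendations.

```python
import numpy as np

def logistic_loss_grad(w, X, y):
    """Mean logistic loss and gradient on a batch; labels y in {-1, +1}."""
    z = y * (X @ w)
    loss = np.mean(np.logaddexp(0.0, -z))
    p = np.exp(-np.logaddexp(0.0, z))        # = 1 / (1 + exp(z)), computed stably
    grad = X.T @ (-y * p) / len(y)
    return loss, grad

def sgd_sps(X, y, batch=32, iters=5000, c=0.5, gamma_max=10.0, seed=0):
    """SGD with a stochastic Polyak step-size (SPS_max-style):
    step = min(batch_loss / (c * ||batch_grad||^2), gamma_max),
    using 0 as the lower bound on the batch loss."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        loss, grad = logistic_loss_grad(w, X[idx], y[idx])
        gnorm2 = grad @ grad
        if gnorm2 > 0:
            w -= min(loss / (c * gnorm2), gamma_max) * grad
    return w

# Toy nearly separable data.
rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 20))
w_star = rng.standard_normal(20)
y = np.sign(X @ w_star + 0.1 * rng.standard_normal(2000))
w_hat = sgd_sps(X, y)
print("train accuracy:", np.mean(np.sign(X @ w_hat) == y))
```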
arXiv Detail & Related papers (2020-02-24T20:57:23Z)
- Implicit differentiation of Lasso-type models for hyperparameter optimization [82.73138686390514]
We introduce an efficient implicit differentiation algorithm, without matrix inversion, tailored for Lasso-type problems.
Our approach scales to high-dimensional data by leveraging the sparsity of the solutions.
arXiv Detail & Related papers (2020-02-20T18:43:42Z)
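For the Lasso entry above, support-restricted implicit differentiation reduces the hypergradient of a validation loss with respect to the regularization strength to a small linear solve on the active set. The sketch below uses scikit-learn's Lasso and a held-out MSE criterion with a direct solve on the support; it illustrates the formula rather than the paper's matrix-inversion-free solver, and the function name and constants are my own.

```python
import numpy as np
from sklearn.linear_model import Lasso

def lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha):
    """Gradient of validation MSE w.r.t. the Lasso regularization alpha,
    via implicit differentiation restricted to the support of the solution.
    Lasso objective (sklearn): (1/(2n)) * ||y - X b||^2 + alpha * ||b||_1."""
    n = X_tr.shape[0]
    beta = Lasso(alpha=alpha, fit_intercept=False, max_iter=50_000).fit(X_tr, y_tr).coef_
    S = np.flatnonzero(beta)                   # active set
    if S.size == 0:
        return 0.0, beta
    Xs = X_tr[:, S]
    # d beta_S / d alpha = -n (Xs^T Xs)^{-1} sign(beta_S), with support and signs fixed.
    dbeta_S = -n * np.linalg.solve(Xs.T @ Xs, np.sign(beta[S]))
    # Chain rule through the validation MSE.
    resid_val = X_val @ beta - y_val
    grad_alpha = (2.0 / len(y_val)) * (resid_val @ X_val[:, S]) @ dbeta_S
    return grad_alpha, beta

# Toy usage: hypergradient of held-out MSE at alpha = 0.1.
rng = np.random.default_rng(0)
X = rng.standard_normal((300, 50))
beta_true = np.zeros(50); beta_true[:5] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(300)
X_tr, X_val, y_tr, y_val = X[:200], X[200:], y[:200], y[200:]
g, _ = lasso_hypergradient(X_tr, y_tr, X_val, y_val, alpha=0.1)
print("dC/dalpha at alpha=0.1:", g)
```

In a full hyperparameter-optimization loop, this hypergradient would drive gradient steps on (typically the logarithm of) alpha.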
This list is automatically generated from the titles and abstracts of the papers on this site.