Online stochastic gradient descent on non-convex losses from
high-dimensional inference
- URL: http://arxiv.org/abs/2003.10409v4
- Date: Mon, 10 May 2021 17:56:25 GMT
- Title: Online stochastic gradient descent on non-convex losses from
high-dimensional inference
- Authors: Gerard Ben Arous, Reza Gheissari, Aukosh Jagannath
- Abstract summary: Stochastic gradient descent (SGD) is a popular algorithm for optimization problems arising in high-dimensional inference tasks.
In this paper we study online SGD from a random start and obtain nearly sharp thresholds on the number of samples needed to attain non-trivial correlation with the ground truth.
We illustrate our approach by applying it to a set of tasks such as phase retrieval and parameter estimation for generalized linear models.
- Score: 2.2344764434954256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stochastic gradient descent (SGD) is a popular algorithm for optimization
problems arising in high-dimensional inference tasks. Here one produces an
estimator of an unknown parameter from independent samples of data by
iteratively optimizing a loss function. This loss function is random and often
non-convex. We study the performance of the simplest version of SGD, namely
online SGD, from a random start in the setting where the parameter space is
high-dimensional.
We develop nearly sharp thresholds for the number of samples needed for
consistent estimation as one varies the dimension. Our thresholds depend only
on an intrinsic property of the population loss which we call the information
exponent. In particular, our results do not assume uniform control on the loss
itself, such as convexity or uniform derivative bounds. The thresholds we
obtain are polynomial in the dimension and the precise exponent depends
explicitly on the information exponent. As a consequence of our results, we
find that except for the simplest tasks, almost all of the data is used simply
in the initial search phase to obtain non-trivial correlation with the ground
truth. Upon attaining non-trivial correlation, the descent is rapid and
exhibits law of large numbers type behavior.
We illustrate our approach by applying it to a wide set of inference tasks
such as phase retrieval, and parameter estimation for generalized linear
models, online PCA, and spiked tensor models, as well as to supervised learning
for single-layer networks with general activation functions.
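To make the setup concrete, here is a minimal Python sketch of online SGD from a random start on a spherical phase-retrieval loss, one of the tasks named in the abstract. It is not the authors' code: the function name, step size, and sample budget are illustrative assumptions, and the run is not a calibrated experiment.

```python
import numpy as np

# Minimal sketch (not the authors' code): online SGD from a random start on a
# phase-retrieval loss, with the estimate constrained to the unit sphere. Each
# step consumes one fresh sample, matching the online setting; the step size
# and sample budget are illustrative.

def online_sgd_phase_retrieval(theta_star, n_samples, step, seed=0):
    rng = np.random.default_rng(seed)
    d = theta_star.shape[0]
    theta = rng.standard_normal(d)
    theta /= np.linalg.norm(theta)            # random (uninformative) start
    for _ in range(n_samples):
        a = rng.standard_normal(d)            # fresh sample at every step
        y = (a @ theta_star) ** 2             # noiseless phase-retrieval response
        resid = (a @ theta) ** 2 - y
        grad = 2.0 * resid * (a @ theta) * a  # gradient of 0.5 * resid**2 in theta
        theta -= step * grad
        theta /= np.linalg.norm(theta)        # retract to the unit sphere
    return theta

d = 200
rng = np.random.default_rng(1)
theta_star = rng.standard_normal(d)
theta_star /= np.linalg.norm(theta_star)
theta_hat = online_sgd_phase_retrieval(theta_star, n_samples=100 * d, step=0.5 / d)
# |<theta_hat, theta_star>| tracks correlation with the ground truth: values near
# zero mean SGD is still in the search phase, values near one mean it has escaped
# the search phase and descended.
print(abs(theta_hat @ theta_star))
```

Consistent with the abstract, one would expect most of the sample budget in such a run to be spent while the correlation is still near its initial value of order 1/sqrt(d), with rapid, law-of-large-numbers-type descent once non-trivial correlation is attained.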
Related papers
- Privacy of the last iterate in cyclically-sampled DP-SGD on nonconvex composite losses [2.532202013576547]
Differentially-private stochastic gradient descent (DP-SGD) is a family of iterative machine learning algorithms that generate a sequence of differentially-private (DP) model parameters.
Privacy accounting for the last iterate is challenging, and existing works require strong assumptions not satisfied by most implementations.
We provide new Rényi differential privacy (RDP) upper bounds for the last iterate under realistic assumptions of small stepsize and Lipschitz smoothness of the loss function.
arXiv Detail & Related papers (2024-07-07T02:35:55Z) - Gradient-Based Feature Learning under Structured Data [57.76552698981579]
In the anisotropic setting, the commonly used spherical gradient dynamics may fail to recover the true direction.
We show that an appropriate weight normalization, reminiscent of batch normalization, can alleviate this issue.
In particular, under the spiked model with a suitably large spike, the sample complexity of gradient-based training can be made independent of the information exponent.
arXiv Detail & Related papers (2023-09-07T16:55:50Z) - Fast Convergence in Learning Two-Layer Neural Networks with Separable
Data [37.908159361149835]
We study normalized gradient descent on two-layer neural nets.
We prove for exponentially-tailed losses that normalized GD leads to a linear rate of convergence of the training loss to the global optimum.
arXiv Detail & Related papers (2023-05-22T20:30:10Z) - On High dimensional Poisson models with measurement error: hypothesis
testing for nonlinear nonconvex optimization [13.369004892264146]
We study estimation and testing of high-dimensional Poisson regression models with measurement error, which have wide applications in data analysis.
We propose to estimate the regression parameter by minimizing a penalized loss function.
The proposed method is applied to data from the Alzheimer's Disease Neuroimaging Initiative.
arXiv Detail & Related papers (2022-12-31T06:58:42Z) - Instance-Dependent Generalization Bounds via Optimal Transport [51.71650746285469]
Existing generalization bounds fail to explain crucial factors that drive the generalization of modern neural networks.
We derive instance-dependent generalization bounds that depend on the local Lipschitz regularity of the learned prediction function in the data space.
We empirically analyze our generalization bounds for neural networks, showing that the bound values are meaningful and capture the effect of popular regularization methods during training.
arXiv Detail & Related papers (2022-11-02T16:39:42Z) - Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and just making a single pass over the data.
arXiv Detail & Related papers (2022-07-16T03:09:30Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD settings.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - Benign Overfitting of Constant-Stepsize SGD for Linear Regression [122.70478935214128]
Inductive biases are central in preventing overfitting empirically.
This work considers this issue in arguably the most basic setting: constant-stepsize SGD for linear regression.
We reflect on a number of notable differences between the algorithmic regularization afforded by (unregularized) SGD in comparison to ordinary least squares.
arXiv Detail & Related papers (2021-03-23T17:15:53Z) - Direction Matters: On the Implicit Bias of Stochastic Gradient Descent
with Moderate Learning Rate [105.62979485062756]
This paper attempts to characterize the particular regularization effect of SGD in the moderate learning rate regime.
We show that SGD converges along the large eigenvalue directions of the data matrix, while GD goes after the small eigenvalue directions.
arXiv Detail & Related papers (2020-11-04T21:07:52Z) - Fundamental Limits of Ridge-Regularized Empirical Risk Minimization in
High Dimensions [41.7567932118769]
Empirical Risk Minimization algorithms are widely used in a variety of estimation and prediction tasks.
In this paper, we characterize for the first time the fundamental limits on the statistical accuracy of convex ERM for inference.
arXiv Detail & Related papers (2020-06-16T04:27:38Z) - Statistical Inference for Model Parameters in Stochastic Gradient
Descent [45.29532403359099]
Stochastic gradient descent (SGD) has been widely used in statistical estimation for large-scale data due to its computational and memory efficiency.
We investigate the problem of statistical inference of true model parameters based on SGD when the population loss function is strongly convex and satisfies certain conditions; a toy illustration of this setting appears in the sketch after this list.
arXiv Detail & Related papers (2016-10-27T07:04:21Z)
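Tied to the last entry above (Statistical Inference for Model Parameters in Stochastic Gradient Descent), the following hedged Python sketch runs Polyak-Ruppert averaged SGD on a strongly convex linear-regression loss. It is not that paper's inference procedure: the Gaussian design, unit noise, function name averaged_sgd, and all hyperparameters are illustrative assumptions; it only shows the concentration of the averaged iterate that underlies such inference.

```python
import numpy as np

# Hedged sketch (not the paper's own procedure): Polyak-Ruppert averaged SGD on
# a strongly convex linear-regression loss. Repeating the run shows the averaged
# iterate concentrating around the true parameter at roughly the 1/sqrt(n) rate
# that underlies asymptotic normality. All hyperparameters are illustrative.

def averaged_sgd(theta_star, n, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    d = theta_star.shape[0]
    theta = np.zeros(d)
    theta_bar = np.zeros(d)
    for _ in range(n):
        x = rng.standard_normal(d)
        y = x @ theta_star + rng.standard_normal()   # unit-variance noise
        theta -= step * (x @ theta - y) * x          # gradient of 0.5*(x @ theta - y)**2
        theta_bar += theta
    return theta_bar / n                             # Polyak-Ruppert average

d, n, reps = 5, 5000, 200
rng = np.random.default_rng(0)
theta_star = rng.standard_normal(d)
errors = [np.linalg.norm(averaged_sgd(theta_star, n, seed=r) - theta_star)
          for r in range(reps)]
# For isotropic Gaussian design with unit noise, classical asymptotics suggest
# errors of order sqrt(d/n).
print("mean error over replications:", np.mean(errors))
print("sqrt(d/n) reference scale:   ", np.sqrt(d / n))
```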