Fast Convergence of Random Reshuffling under Over-Parameterization and
the Polyak-Łojasiewicz Condition
- URL: http://arxiv.org/abs/2304.00459v1
- Date: Sun, 2 Apr 2023 06:00:29 GMT
- Title: Fast Convergence of Random Reshuffling under Over-Parameterization and
the Polyak-Łojasiewicz Condition
- Authors: Chen Fan, Christos Thrampoulidis, Mark Schmidt
- Abstract summary: We study the convergence properties of a sampling-without-replacement variant of stochastic gradient descent (SGD) known as random reshuffling (RR).
RR chooses a random permutation of the data at the beginning of each epoch, and each iteration takes the next sample from that permutation.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern machine learning models are often over-parameterized and as a result
they can interpolate the training data. Under such a scenario, we study the
convergence properties of a sampling-without-replacement variant of stochastic
gradient descent (SGD) known as random reshuffling (RR). Unlike SGD that
samples data with replacement at every iteration, RR chooses a random
permutation of data at the beginning of each epoch and each iteration chooses
the next sample from the permutation. For under-parameterized models, it has
been shown RR can converge faster than SGD under certain assumptions. However,
previous works do not show that RR outperforms SGD in over-parameterized
settings except in some highly-restrictive scenarios. For the class of
Polyak-Łojasiewicz (PL) functions, we show that RR can outperform SGD in
over-parameterized settings when either one of the following holds: (i) the
number of samples ($n$) is less than the product of the condition number
($\kappa$) and the parameter ($\alpha$) of a weak growth condition (WGC), or
(ii) $n$ is less than the parameter ($\rho$) of a strong growth condition
(SGC).
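The sampling difference between SGD and RR described in the abstract can be sketched in a few lines. This is a minimal illustration on a hypothetical one-dimensional least-squares problem; the loss, step size, and epoch count are placeholders chosen for the sketch, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, lr, epochs = 8, 0.1, 5
data = rng.normal(size=n)          # toy 1-D targets
w0 = 0.0                           # scalar parameter, per-sample loss 0.5 * (w - x)^2

def grad(w, x):
    # gradient of 0.5 * (w - x)^2 with respect to w
    return w - x

# SGD: sample WITH replacement at every iteration
w_sgd = w0
for _ in range(epochs * n):
    i = rng.integers(n)
    w_sgd -= lr * grad(w_sgd, data[i])

# RR: draw a fresh permutation each epoch, then sweep through it in order
w_rr = w0
for _ in range(epochs):
    for i in rng.permutation(n):   # sampling WITHOUT replacement within the epoch
        w_rr -= lr * grad(w_rr, data[i])

print(w_sgd, w_rr)                 # both approach the mean of `data`
```

The key structural point is that RR visits every sample exactly once per epoch, whereas with-replacement SGD may visit some samples several times and others not at all.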
Related papers
- Estimation of Toeplitz Covariance Matrices using Overparameterized Gradient Descent [1.7188280334580195]
We revisit Toeplitz covariance estimation through the lens of simple gradient descent (GD). We show that when $K = P$, GD may converge to suboptimal solutions. We propose an accelerated GD variant with separate learning rates for amplitudes and frequencies.
arXiv Detail & Related papers (2025-11-03T14:07:53Z) - New Bounds for Sparse Variational Gaussian Processes [8.122270502556374]
Sparse variational Gaussian processes (GPs) construct tractable posterior approximations to GP models.
At the core of these methods is the assumption that the true posterior distribution over training function values $\mathbf{f}$ and inducing variables $\mathbf{u}$ is approximated by a variational distribution that incorporates the conditional GP prior $p(\mathbf{f} \mid \mathbf{u})$ in its factorization.
We show that for model training we can relax it through the use of a more general variational distribution $q(\mathbf{f} \mid \mathbf{u})$.
arXiv Detail & Related papers (2025-02-12T19:04:26Z) - Effect of Random Learning Rate: Theoretical Analysis of SGD Dynamics in Non-Convex Optimization via Stationary Distribution [6.144680854063938]
We consider a variant of stochastic gradient descent (SGD) with a random learning rate to reveal its convergence properties.
We demonstrate that a distribution of a parameter updated by Poisson SGD converges to a stationary distribution under weak assumptions.
arXiv Detail & Related papers (2024-06-23T06:52:33Z) - Asymptotics of Random Feature Regression Beyond the Linear Scaling
Regime [22.666759017118796]
Recent advances in machine learning have been achieved by using overparametrized models trained until they nearly interpolate the training data.
How do model complexity and generalization depend on the number of parameters $p$?
In particular, random features ridge regression (RFRR) exhibits an intuitive trade-off between approximation and generalization power.
arXiv Detail & Related papers (2024-03-13T00:59:25Z) - On Convergence of Incremental Gradient for Non-Convex Smooth Functions [63.51187646914962]
In machine learning and network optimization, algorithms like shuffle SGD are popular because they minimize the number of cache misses and exhibit good cache locality.
This paper delves into the convergence properties of SGD algorithms with arbitrary data ordering.
arXiv Detail & Related papers (2023-05-30T17:47:27Z) - Preconditioned Score-based Generative Models [49.88840603798831]
An intuitive acceleration method is to reduce the sampling iterations which however causes severe performance degradation.
We propose a model-agnostic preconditioned diffusion sampling (PDS) method that leverages matrix preconditioning to alleviate the aforementioned problem.
PDS alters the sampling process of a vanilla SGM at marginal extra computation cost, and without model retraining.
arXiv Detail & Related papers (2023-02-13T16:30:53Z) - Adaptive Sketches for Robust Regression with Importance Sampling [64.75899469557272]
We introduce data structures for solving robust regression through stochastic gradient descent (SGD).
Our algorithm effectively runs $T$ steps of SGD with importance sampling while using sublinear space and just making a single pass over the data.
arXiv Detail & Related papers (2022-07-16T03:09:30Z) - Characterizing & Finding Good Data Orderings for Fast Convergence of
Sequential Gradient Methods [0.0]
We quantify the effect of order on convergence speed, obtaining convergence bounds based on the chosen sequence of permutations.
We develop a greedy algorithm for choosing good orders during training, achieving superior performance (by more than 14 percent in accuracy) over RR.
arXiv Detail & Related papers (2022-02-03T20:38:42Z) - Faster Convergence of Local SGD for Over-Parameterized Models [1.5504102675587357]
Modern machine learning architectures are often highly expressive.
We analyze the convergence of Local SGD (or FedAvg) for such over-parameterized functions in heterogeneous data setting.
For general convex loss functions, we establish an error bound of $O(K/T)$.
For non-convex loss functions, we prove an error bound of $O(K/T)$ in both cases.
We complete our results by providing problem instances in which our established convergence rates are tight to a constant factor with a reasonably small stepsize.
arXiv Detail & Related papers (2022-01-30T04:05:56Z) - Gaussian Process Inference Using Mini-batch Stochastic Gradient Descent:
Convergence Guarantees and Empirical Benefits [21.353189917487512]
Stochastic gradient descent (SGD) and its variants have established themselves as the go-to algorithms for machine learning problems.
We take a step forward by proving minibatch SGD converges to a critical point of the full log-likelihood loss function.
Our theoretical guarantees hold provided that the kernel functions exhibit exponential or eigendecay.
arXiv Detail & Related papers (2021-11-19T22:28:47Z) - Minibatch vs Local SGD with Shuffling: Tight Convergence Bounds and
Beyond [63.59034509960994]
We study shuffling-based variants: minibatch and local Random Reshuffling, which draw gradients without replacement.
For smooth functions satisfying the Polyak-Łojasiewicz condition, we obtain convergence bounds which show that these shuffling-based variants converge faster than their with-replacement counterparts.
We propose an algorithmic modification called synchronized shuffling that leads to convergence rates faster than our lower bounds in near-homogeneous settings.
arXiv Detail & Related papers (2021-10-20T02:25:25Z) - On the Double Descent of Random Features Models Trained with SGD [78.0918823643911]
We study properties of random features (RF) regression in high dimensions optimized by stochastic gradient descent (SGD).
We derive precise non-asymptotic error bounds of RF regression under both constant and adaptive step-size SGD setting.
We observe the double descent phenomenon both theoretically and empirically.
arXiv Detail & Related papers (2021-10-13T17:47:39Z) - The Benefits of Implicit Regularization from SGD in Least Squares
Problems [116.85246178212616]
Stochastic gradient descent (SGD) exhibits strong algorithmic regularization effects in practice.
We make comparisons of the implicit regularization afforded by (unregularized) average SGD with the explicit regularization of ridge regression.
arXiv Detail & Related papers (2021-08-10T09:56:47Z) - Optimal Robust Linear Regression in Nearly Linear Time [97.11565882347772]
We study the problem of high-dimensional robust linear regression where a learner is given access to $n$ samples from the generative model $Y = \langle X, w^* \rangle + \epsilon$.
We propose estimators for this problem under two settings: (i) $X$ is L4-L2 hypercontractive, $\mathbb{E}[XX^\top]$ has bounded condition number and $\epsilon$ has bounded variance and (ii) $X$ is sub-Gaussian with identity second moment and $\epsilon$ is
arXiv Detail & Related papers (2020-07-16T06:44:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.