Revisiting Step-Size Assumptions in Stochastic Approximation
- URL: http://arxiv.org/abs/2405.17834v3
- Date: Fri, 01 Aug 2025 23:52:01 GMT
- Title: Revisiting Step-Size Assumptions in Stochastic Approximation
- Authors: Caio Kalil Lauand, Sean Meyn
- Abstract summary: It is shown for the first time that this assumption is not required for convergence or for finer results. Rates of convergence are obtained for the standard algorithm and for estimates obtained via the averaging technique of Polyak and Ruppert. Results from numerical experiments illustrate that $\beta_\theta$ may be large due to a combination of multiplicative noise and Markovian memory.
- Score: 1.3654846342364308
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Many machine learning and optimization algorithms are built upon the framework of stochastic approximation (SA), for which the selection of step-size (or learning rate) $\{\alpha_n\}$ is crucial for success. An essential condition for convergence is the assumption that $\sum_n \alpha_n = \infty$. Moreover, in all theory to date it is assumed that $\sum_n \alpha_n^2 < \infty$ (the sequence is square summable). In this paper it is shown for the first time that this assumption is not required for convergence or for finer results. The main results are restricted to the special case $\alpha_n = \alpha_0 n^{-\rho}$ with $\rho \in (0,1)$. The theory allows for parameter-dependent Markovian noise as found in many applications of interest to the machine learning and optimization research communities. Rates of convergence are obtained for the standard algorithm, and for estimates obtained via the averaging technique of Polyak and Ruppert.
$\bullet$ Parameter estimates converge with probability one, and in $L_p$ for any $p\ge 1$. Moreover, the rate of convergence of the mean-squared error (MSE) is $O(\alpha_n)$, which is improved to $O(\max\{ \alpha_n^2,1/n \})$ with averaging.
Finer results are obtained for linear SA:
$\bullet$ The covariance of the estimates is optimal in the sense of prior work of Polyak and Ruppert.
$\bullet$ Conditions are identified under which the bias decays faster than $O(1/n)$. When these conditions are violated, the bias at iteration $n$ is approximately $\beta_\theta\alpha_n$ for a vector $\beta_\theta$ identified in the paper. Results from numerical experiments illustrate that $\beta_\theta$ may be large due to a combination of multiplicative noise and Markovian memory.
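To make the setting concrete, the sketch below runs linear stochastic approximation with the polynomial step-size $\alpha_n = \alpha_0 n^{-\rho}$, $\rho \in (0,1)$, together with Polyak-Ruppert averaging of the iterates. This is a minimal illustration, not the paper's code: the matrix $A$, vector $b$, i.i.d. Gaussian noise (standing in for the Markovian noise treated in the paper), and all constants are assumptions chosen only to show the recursion.

```python
# Minimal sketch (illustrative assumptions, not the paper's code):
# linear SA  theta_{n+1} = theta_n + alpha_{n+1} (b - A theta_n + noise),
# step-size alpha_n = alpha_0 * n^{-rho} with rho in (0, 1),
# plus Polyak-Ruppert averaging of the iterates.
import numpy as np

rng = np.random.default_rng(0)

d = 2
A = np.array([[2.0, 0.5],
              [0.0, 1.0]])          # eigenvalues 2 and 1, so -A is Hurwitz
b = np.array([1.0, -1.0])
theta_star = np.linalg.solve(A, b)  # target: A theta* = b

alpha0, rho = 0.5, 0.7              # rho <= 1/2 would violate the classical
N = 100_000                         # square-summability assumption sum alpha_n^2 < inf

theta = np.zeros(d)
theta_avg = np.zeros(d)             # running Polyak-Ruppert average

for n in range(1, N + 1):
    alpha_n = alpha0 * n ** (-rho)
    noise = 0.2 * rng.standard_normal(d)      # i.i.d. stand-in for Markovian noise
    theta = theta + alpha_n * (b - A @ theta + noise)
    theta_avg += (theta - theta_avg) / n      # incremental mean of theta_1..theta_n

print("||theta_N - theta*||  =", np.linalg.norm(theta - theta_star))
print("||average - theta*||  =", np.linalg.norm(theta_avg - theta_star))
```

Under the paper's results, the MSE of the raw iterates decays as $O(\alpha_n)$ while the averaged estimate improves to $O(\max\{\alpha_n^2, 1/n\})$; setting $\rho \le 1/2$ in the sketch exercises the regime where $\sum_n \alpha_n^2 = \infty$, which the classical theory excludes but this paper covers.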
Related papers
- Allocating Variance to Maximize Expectation [2.25491649634702]
We design efficient approximation algorithms for maximizing the expectation of the supremum of families of Gaussian random variables. Such expectation problems occur in diverse applications, ranging from utility auctions to learning mixture models in quantitative genetics.
arXiv Detail & Related papers (2025-02-25T18:59:46Z) - Optimal Sketching for Residual Error Estimation for Matrix and Vector Norms [50.15964512954274]
We study the problem of residual error estimation for matrix and vector norms using a linear sketch.
We demonstrate that this gives a substantial advantage empirically, for roughly the same sketch size and accuracy as in previous work.
We also show an $\Omega(k^{2/p} n^{1-2/p})$ lower bound for the sparse recovery problem, which is tight up to a $\mathrm{poly}(\log n)$ factor.
arXiv Detail & Related papers (2024-08-16T02:33:07Z) - Robust Distribution Learning with Local and Global Adversarial Corruptions [17.22168727622332]
We develop an efficient finite-sample algorithm with error bounded by $\sqrt{\varepsilon k} + \rho + \tilde{O}(d\sqrt{k}\,n^{-1/(k \lor 2)})$ when $P$ has bounded covariance.
Our efficient procedure relies on a novel trace norm approximation of an ideal yet intractable 2-Wasserstein projection estimator.
arXiv Detail & Related papers (2024-06-10T17:48:36Z) - On the $O(\frac{\sqrt{d}}{T^{1/4}})$ Convergence Rate of RMSProp and Its Momentum Extension Measured by $\ell_1$ Norm [59.65871549878937]
This paper considers RMSProp and its momentum extension and establishes a convergence rate of $O(\frac{\sqrt{d}}{T^{1/4}})$ for the averaged gradient measured by the $\ell_1$ norm, $\frac{1}{T}\sum_{k=1}^{T} \mathbb{E}[\|\nabla f(x_k)\|_1]$.
Our convergence rate matches the lower bound with respect to all the coefficients except the dimension $d$.
This rate can be considered analogous to the classical $\frac{1}{T}\sum_{k=1}^{T} \mathbb{E}[\|\nabla f(x_k)\|_2^2] \le O(\frac{1}{\sqrt{T}})$ rate of SGD measured by the $\ell_2$ norm.
arXiv Detail & Related papers (2024-02-01T07:21:32Z) - Solving Dense Linear Systems Faster Than via Preconditioning [1.8854491183340518]
We show that our algorithm runs in $\tilde{O}(n^2)$ time when $k = O(n^{0.729})$.
Our main algorithm can be viewed as a randomized block coordinate descent method.
arXiv Detail & Related papers (2023-12-14T12:53:34Z) - Finite-Sample Symmetric Mean Estimation with Fisher Information Rate [15.802475232604667]
The mean of an unknown variance-$\sigma^2$ distribution can be estimated from $n$ samples with variance $\frac{\sigma^2}{n}$ and nearly corresponding subgaussian rate.
When $f$ is known up to translation, this can be improved to $\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of the distribution.
arXiv Detail & Related papers (2023-06-28T21:31:46Z) - High-dimensional Location Estimation via Norm Concentration for Subgamma
Vectors [15.802475232604667]
In location estimation, we are given $n$ samples from a known distribution $f$ shifted by an unknown translation $\lambda$.
Asymptotically, the maximum likelihood estimate achieves the Cramér-Rao bound of error $\mathcal{N}(0, \frac{1}{n\mathcal{I}})$.
We build on the theory using smoothed estimators to bound the error for finite $n$ in terms of $\mathcal{I}_r$, the Fisher information of the $r$-smoothed distribution.
arXiv Detail & Related papers (2023-02-05T22:17:04Z) - A spectral least-squares-type method for heavy-tailed corrupted
regression with unknown covariance \& heterogeneous noise [2.019622939313173]
We revisit heavy-tailed corrupted least-squares linear regression, assuming a corrupted $n$-sized label-feature sample with at most $\epsilon n$ arbitrary outliers.
We propose a near-optimal computationally tractable estimator, based on the power method, assuming no knowledge of $(\Sigma, \Xi)$ nor of the operator norm of $\Xi$.
arXiv Detail & Related papers (2022-09-06T23:37:31Z) - Finite-Sample Maximum Likelihood Estimation of Location [16.44999338864628]
For fixed $f$ the maximum-likelihood estimate (MLE) is well-known to be optimal in the limit as $n \to \infty$.
We show for arbitrary $f$ and $n$ that one can recover a similar theory based on the Fisher information of a smoothed version of $f$, where the smoothing radius decays with $n$.
arXiv Detail & Related papers (2022-06-06T04:33:41Z) - TURF: A Two-factor, Universal, Robust, Fast Distribution Learning
Algorithm [64.13217062232874]
One of its most powerful and successful modalities approximates every distribution to an $\ell$ distance essentially at most a constant times larger than its closest $t$-piece degree-$d$ polynomial.
We provide a method that estimates this number near-optimally, hence helps approach the best possible approximation.
arXiv Detail & Related papers (2022-02-15T03:49:28Z) - Active Sampling for Linear Regression Beyond the $\ell_2$ Norm [70.49273459706546]
We study active sampling algorithms for linear regression, which aim to query only a small number of entries of a target vector.
We show that this dependence on $d$ is optimal, up to logarithmic factors.
We also provide the first total sensitivity upper bound of $O(d^{\max\{1, p/2\}} \log^2 n)$ for loss functions with at most degree-$p$ growth.
arXiv Detail & Related papers (2021-11-09T00:20:01Z) - Sparse sketches with small inversion bias [79.77110958547695]
Inversion bias arises when averaging estimates of quantities that depend on the inverse covariance.
We develop a framework for analyzing inversion bias, based on our proposed concept of an $(\epsilon,\delta)$-unbiased estimator for random matrices.
We show that when the sketching matrix $S$ is dense and has i.i.d. sub-gaussian entries, the estimator is $(\epsilon,\delta)$-unbiased for $(A^\top A)^{-1}$ with a sketch of size $m = O(d + \sqrt{d}/\epsilon)$.
arXiv Detail & Related papers (2020-11-21T01:33:15Z) - Accelerating Optimization and Reinforcement Learning with
Quasi-Stochastic Approximation [2.294014185517203]
This paper sets out to extend convergence theory to quasi-stochastic approximations.
It is illustrated with applications to gradient-free optimization and policy gradient algorithms for reinforcement learning.
arXiv Detail & Related papers (2020-09-30T04:44:45Z) - Model-Free Reinforcement Learning: from Clipped Pseudo-Regret to Sample
Complexity [59.34067736545355]
Given an MDP with $S$ states, $A$ actions, the discount factor $\gamma \in (0,1)$, and an approximation threshold $\epsilon > 0$, we provide a model-free algorithm to learn an $\epsilon$-optimal policy.
For small enough $\epsilon$, we give an improved algorithm with lower sample complexity.
arXiv Detail & Related papers (2020-06-06T13:34:41Z) - Maximizing Determinants under Matroid Constraints [69.25768526213689]
We study the problem of finding a basis $S$ of $M$ such that $\det(\sum_{i \in S} v_i v_i^\top)$ is maximized.
This problem appears in a diverse set of areas such as experimental design, fair allocation of goods, network design, and machine learning.
arXiv Detail & Related papers (2020-04-16T19:16:38Z) - Agnostic Q-learning with Function Approximation in Deterministic
Systems: Tight Bounds on Approximation Error and Sample Complexity [94.37110094442136]
We study the problem of agnostic $Q$-learning with function approximation in deterministic systems.
We show that if $\delta = O\left(\rho/\sqrt{\dim_E}\right)$, then one can find the optimal policy using $O\left(\dim_E\right)$ trajectories.
arXiv Detail & Related papers (2020-02-17T18:41:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.