Statistical Inference for Polyak-Ruppert Averaged Zeroth-order
Stochastic Gradient Algorithm
- URL: http://arxiv.org/abs/2102.05198v2
- Date: Thu, 11 Feb 2021 21:22:39 GMT
- Title: Statistical Inference for Polyak-Ruppert Averaged Zeroth-order
Stochastic Gradient Algorithm
- Authors: Yanhao Jin, Tesi Xiao, Krishnakumar Balasubramanian
- Abstract summary: In the last decade, estimating or training in several machine learning models has become synonymous with running stochastic gradient algorithms.
We first establish a central limit theorem for the Polyak-Ruppert averaged stochastic gradient algorithm in the zeroth-order setting.
- Score: 10.936043362876651
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning models are deployed in critical applications, it becomes
important to not just provide point estimators of the model parameters (or
subsequent predictions), but also quantify the uncertainty associated with
estimating the model parameters via confidence sets. In the last decade,
estimating or training in several machine learning models has become synonymous
with running stochastic gradient algorithms. However, computing the stochastic
gradients in several settings is highly expensive or even impossible at times.
An important question which has thus far not been addressed sufficiently in the
statistical machine learning literature is that of equipping zeroth-order
stochastic gradient algorithms with practical yet rigorous inferential
capabilities. Towards this, in this work, we first establish a central limit
theorem for Polyak-Ruppert averaged stochastic gradient algorithm in the
zeroth-order setting. We then provide online estimators of the asymptotic
covariance matrix appearing in the central limit theorem, thereby providing a
practical procedure for constructing asymptotically valid confidence sets (or
intervals) for parameter estimation (or prediction) in the zeroth-order
setting.
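As a rough illustration of the pipeline the abstract describes (a sketch under stated assumptions, not the authors' exact algorithm), the following combines a two-point zeroth-order gradient estimate with Polyak-Ruppert iterate averaging on a toy noisy quadratic. The smoothing radius, step-size schedule, and loss function are illustrative choices:

```python
import numpy as np

def two_point_grad(f, x, mu=0.1, rng=None):
    """Two-point zeroth-order gradient estimate via Gaussian smoothing.
    Uses only function evaluations, never analytic gradients."""
    rng = np.random.default_rng() if rng is None else rng
    u = rng.standard_normal(x.shape)
    return (f(x + mu * u) - f(x - mu * u)) / (2.0 * mu) * u

def zo_sgd_polyak_ruppert(f, x0, steps=20000, lr=0.1, rng=None):
    """Zeroth-order SGD with a diminishing step size and a running
    (Polyak-Ruppert) average of the iterates."""
    rng = np.random.default_rng(0) if rng is None else rng
    x = np.asarray(x0, dtype=float).copy()
    x_bar = x.copy()
    for t in range(1, steps + 1):
        g = two_point_grad(f, x, rng=rng)
        x -= (lr / np.sqrt(t)) * g       # step size decaying like t^{-1/2}
        x_bar += (x - x_bar) / t         # online average of the iterates
    return x_bar

# Toy strongly convex objective with noisy evaluations; minimum at (1, -2).
rng = np.random.default_rng(0)
target = np.array([1.0, -2.0])
def noisy_loss(x):
    return np.sum((x - target) ** 2) + 0.01 * rng.standard_normal()

x_hat = zo_sgd_polyak_ruppert(noisy_loss, np.zeros(2), rng=rng)
```

The averaged iterate `x_hat` is the quantity around which the paper's central limit theorem is stated; the online covariance estimation needed for confidence sets is not sketched here.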
Related papers
- Accelerated zero-order SGD under high-order smoothness and overparameterized regime [79.85163929026146]
We present a novel gradient-free algorithm to solve convex optimization problems.
Such problems are encountered in medicine, physics, and machine learning.
We provide convergence guarantees for the proposed algorithm under both types of noise.
arXiv Detail & Related papers (2024-11-21T10:26:17Z) - Eliminating Ratio Bias for Gradient-based Simulated Parameter Estimation [0.7673339435080445]
This article addresses the challenge of parameter calibration in models where the likelihood function is not analytically available.
We propose a gradient-based simulated parameter estimation framework, leveraging a multi-time-scale algorithm that tackles the issue of ratio bias in both maximum likelihood estimation and posterior density estimation problems.
arXiv Detail & Related papers (2024-11-20T02:46:15Z) - Model-Based Reparameterization Policy Gradient Methods: Theory and
Practical Algorithms [88.74308282658133]
Reparameterization (RP) Policy Gradient Methods (PGMs) have been widely adopted for continuous control tasks in robotics and computer graphics.
Recent studies have revealed that, when applied to long-term reinforcement learning problems, model-based RP PGMs may experience chaotic and non-smooth optimization landscapes.
We propose a spectral normalization method to mitigate the exploding variance issue caused by long model unrolls.
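Spectral normalization, as mentioned above, rescales a weight matrix by its largest singular value. The following is a generic power-iteration sketch of that idea, not the paper's exact procedure; the matrix size and iteration count are illustrative:

```python
import numpy as np

def spectral_normalize(W, n_iter=200, rng=None):
    """Rescale W to (approximately) unit spectral norm, estimating the
    top singular value by power iteration."""
    rng = np.random.default_rng(0) if rng is None else rng
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v        # estimate of the largest singular value
    return W / sigma

W = np.random.default_rng(3).standard_normal((8, 8))
W_sn = spectral_normalize(W)   # spectral norm of W_sn is approximately 1
```

Capping the spectral norm of each unrolled transition bounds how much a perturbation can grow per step, which is the mechanism behind taming exploding variance in long model unrolls.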
arXiv Detail & Related papers (2023-10-30T18:43:21Z) - Stochastic Marginal Likelihood Gradients using Neural Tangent Kernels [78.6096486885658]
We introduce lower bounds to the linearized Laplace approximation of the marginal likelihood.
These bounds are amenable to gradient-based optimization and allow trading off estimation accuracy against computational complexity.
arXiv Detail & Related papers (2023-06-06T19:02:57Z) - Online Learning Under A Separable Stochastic Approximation Framework [20.26530917721778]
We propose an online learning algorithm for a class of machine learning models under a separable stochastic approximation framework.
We show that the proposed algorithm delivers more robust training and better test performance when compared to other popular learning algorithms.
arXiv Detail & Related papers (2023-05-12T13:53:03Z) - Learning to Estimate Without Bias [57.82628598276623]
The Gauss-Markov theorem states that the weighted least squares estimator is the linear minimum variance unbiased estimator (MVUE) in linear models.
In this paper, we take a first step towards extending this result to nonlinear settings via deep learning with bias constraints.
A second motivation for the bias-constrained estimator (BCE) is in applications where multiple estimates of the same unknown are averaged for improved performance.
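To make the Gauss-Markov claim above concrete (a self-contained sketch with made-up data and parameters, not the paper's method), here is the weighted least squares estimator under known heteroscedastic noise:

```python
import numpy as np

# Weighted least squares: with independent noise of known per-observation
# variance sigma_i^2, beta_hat = (X^T W X)^{-1} X^T W y with W = diag(1/sigma_i^2)
# is the minimum-variance linear unbiased estimator in the linear model.
rng = np.random.default_rng(1)
n, d = 500, 3
X = rng.standard_normal((n, d))
beta_true = np.array([2.0, -1.0, 0.5])
sigma = rng.uniform(0.1, 1.0, size=n)               # known noise std per row
y = X @ beta_true + sigma * rng.standard_normal(n)

w = 1.0 / sigma**2                                  # diagonal of W
Xw = X * w[:, None]                                 # row-scaled X, i.e. W X
beta_wls = np.linalg.solve(Xw.T @ X, Xw.T @ y)
```

Down-weighting noisy observations is exactly what makes WLS minimum-variance among linear unbiased estimators; the paper's contribution is extending this unbiasedness constraint beyond the linear setting.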
arXiv Detail & Related papers (2021-10-24T10:23:51Z) - Fast and Robust Online Inference with Stochastic Gradient Descent via
Random Scaling [0.9806910643086042]
We develop a new method of online inference for a vector of parameters estimated by the Polyak-Ruppert averaging procedure of stochastic gradient descent algorithms.
Our approach is fully operational with online data and is rigorously underpinned by a functional central limit theorem.
arXiv Detail & Related papers (2021-06-06T15:38:37Z) - Storchastic: A Framework for General Stochastic Automatic
Differentiation [9.34612743192798]
We introduce Storchastic, a new framework for automatic differentiation of stochastic computation graphs.
Storchastic allows the modeler to choose from a wide variety of gradient estimation methods at each sampling step.
Storchastic is provably unbiased for estimation of any-order gradients, and generalizes variance reduction techniques to higher-order gradient estimates.
arXiv Detail & Related papers (2021-04-01T12:19:54Z) - Path Sample-Analytic Gradient Estimators for Stochastic Binary Networks [78.76880041670904]
In neural networks with binary activations and/or binary weights, training by gradient descent is complicated.
We propose a new method for this estimation problem combining sampling and analytic approximation steps.
We experimentally show higher accuracy in gradient estimation and demonstrate a more stable and better performing training in deep convolutional models.
arXiv Detail & Related papers (2020-06-04T21:51:21Z) - Instability, Computational Efficiency and Statistical Accuracy [101.32305022521024]
We develop a framework that characterizes statistical accuracy based on the interplay between the deterministic convergence rate of the algorithm at the population level and its degree of (in)stability when applied to an empirical object based on $n$ samples.
We provide applications of our general results to several concrete classes of models, including Gaussian mixture estimation, non-linear regression models, and informative non-response models.
arXiv Detail & Related papers (2020-05-22T22:30:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.