Deviation inequalities for stochastic approximation by averaging
- URL: http://arxiv.org/abs/2102.08685v1
- Date: Wed, 17 Feb 2021 10:57:37 GMT
- Title: Deviation inequalities for stochastic approximation by averaging
- Authors: Xiequan Fan, Pierre Alquier, Paul Doukhan
- Abstract summary: We introduce a class of Markov chains that contains the models of stochastic approximation by averaging and non-averaging.
We establish various deviation inequalities for separately Lipschitz functions of such a chain, under different moment conditions on dominating random variables of the martingale differences.
- Score: 0.5586191108738562
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We introduce a class of Markov chains that contains the models of
stochastic approximation by averaging and non-averaging. Using a martingale
approximation method, we establish various deviation inequalities for
separately Lipschitz functions of such a chain, under different moment
conditions on dominating random variables of the martingale differences.
Finally, we apply these inequalities to the stochastic approximation by averaging.
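The averaging scheme the abstract refers to is the classical Polyak-Ruppert procedure: run a Robbins-Monro iteration with slowly decaying step sizes and report the running average of the iterates. A minimal scalar sketch (the affine target, step-size exponent, and noise level below are illustrative assumptions, not taken from the paper):

```python
import random

def polyak_ruppert(theta=2.0, n_steps=5000, seed=0):
    """Robbins-Monro iteration x_{k+1} = x_k - a_k * H(x_k, xi_k),
    where H(x, xi) = (x - theta) + xi, with Polyak-Ruppert averaging."""
    rng = random.Random(seed)
    x = 0.0       # raw (non-averaged) iterate
    xbar = 0.0    # running average of the iterates
    for k in range(1, n_steps + 1):
        a_k = k ** -0.7                           # slowly decaying step size
        noisy_field = (x - theta) + rng.gauss(0.0, 1.0)
        x -= a_k * noisy_field                    # non-averaged update
        xbar += (x - xbar) / k                    # averaged estimate
    return x, xbar
```

The averaged iterate `xbar` typically fluctuates much less than the raw iterate `x`; deviation inequalities of the kind established in the paper quantify how sharply such averaged estimates concentrate around the target.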
Related papers
- Generalizing Stochastic Smoothing for Differentiation and Gradient Estimation [59.86921150579892]
We deal with the problem of gradient estimation for differentiable relaxations of algorithms, operators, simulators, and other non-differentiable functions.
We develop variance reduction strategies for differentiable sorting and ranking, differentiable shortest-paths on graphs, differentiable rendering for pose estimation, as well as differentiable cryo-ET simulations.
arXiv Detail & Related papers (2024-10-10T17:10:00Z) - Markov Chain Variance Estimation: A Stochastic Approximation Approach [14.883782513177094]
We consider the problem of estimating the variance of a function defined on a Markov chain, an important step for statistical inference of the stationary mean.
We design a novel recursive estimator that requires $O(1)$ computation at each step, does not require any historical samples or any prior knowledge of the run-length, and has an optimal $O(\frac{1}{n})$ rate of convergence for the mean-squared error (MSE) with provable finite-sample guarantees.
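This is not the paper's estimator, but the flavor of an O(1)-per-step recursion can be seen in Welford's classical online variance update, which likewise keeps only a constant-size state and needs no historical samples (a Markov-chain version must additionally correct for autocorrelation to target the asymptotic variance):

```python
def welford_update(state, x):
    # One O(1)-time, O(1)-memory update. state = (count, mean, M2),
    # where M2 is the running sum of squared deviations from the mean.
    n, mean, m2 = state
    n += 1
    delta = x - mean
    mean += delta / n
    m2 += delta * (x - mean)
    return (n, mean, m2)

def mean_and_variance(state):
    n, mean, m2 = state
    return mean, (m2 / n if n > 0 else 0.0)  # population variance
```

Each sample is folded into the state once and discarded, which is exactly the property that makes such recursions attractive for long Markov chain runs.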
arXiv Detail & Related papers (2024-09-09T15:42:28Z) - Covariate shift in nonparametric regression with Markovian design [0.0]
We show that convergence rates for a smoothness risk of a Nadaraya-Watson kernel estimator are determined by the similarity between the invariant distributions associated to source and target Markov chains.
We extend the notion of a distribution exponent from Kpotufe and Martinet to kernel transfer exponents of uniformly ergodic Markov chains.
arXiv Detail & Related papers (2023-07-17T14:24:27Z) - Rosenthal-type inequalities for linear statistics of Markov chains [20.606986885851573]
We establish novel deviation bounds for additive functionals of geometrically ergodic Markov chains.
We pay special attention to the dependence of our bounds on the mixing time of the corresponding chain.
arXiv Detail & Related papers (2023-03-10T10:24:46Z) - SARAH-based Variance-reduced Algorithm for Stochastic Finite-sum Cocoercive Variational Inequalities [137.6408511310322]
We consider the problem of finite-sum cocoercive variational inequalities.
For strongly monotone problems it is possible to achieve linear convergence to a solution using this method.
arXiv Detail & Related papers (2022-10-12T08:04:48Z) - A Concentration Bound for Distributed Stochastic Approximation [0.0]
We revisit the classical model of Tsitsiklis, Bertsekas and Athans for distributed stochastic approximation with consensus.
The main result is an analysis of this scheme using the ODE approach, leading to a high probability bound for the tracking error of the iterates.
arXiv Detail & Related papers (2022-10-09T13:00:32Z) - Smooth Monotone Stochastic Variational Inequalities and Saddle Point Problems: A Survey [119.11852898082967]
This paper is a survey of methods for solving smooth (strongly) monotone variational inequalities.
To begin with, we give the foundation from which the methods eventually evolved.
Then we review methods for the general formulation, and look at the finite sum setup.
arXiv Detail & Related papers (2022-08-29T13:39:30Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
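The basic object here is stochastic fixed-point iteration: for a contractive operator F, run x_{k+1} = (1 - a_k) x_k + a_k (F(x_k) + noise). A scalar sketch with an illustrative affine contraction (the operator, contraction factor, and step sizes are assumptions for the example, not the paper's setting):

```python
import random

def stochastic_fixed_point(n_steps=5000, seed=0):
    # F(x) = 0.5 * x + 1 is a 0.5-contraction with fixed point x* = 2.
    F = lambda x: 0.5 * x + 1.0
    rng = random.Random(seed)
    x = 0.0
    for k in range(1, n_steps + 1):
        a_k = 1.0 / k                           # Robbins-Monro step sizes
        noisy_F = F(x) + rng.gauss(0.0, 1.0)    # unbiased noisy evaluation of F
        x = (1.0 - a_k) * x + a_k * noisy_F     # damped fixed-point update
    return x
```

The iterate settles near the fixed point x* = 2 despite only ever seeing noisy evaluations of F; non-asymptotic bounds of the cited kind control both how far x is from x* and how accurately F is tracked along the way.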
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Stochastic Variance Reduction for Variational Inequality Methods [19.061953585686986]
We propose variance reduced algorithms for solving convex-concave saddle point problems, monotone variational inequalities, and monotone inclusions.
Our framework applies to extragradient, forward-backward-forward, and forward-reflected-backward methods, in both the Euclidean and the Bregman setups.
arXiv Detail & Related papers (2021-02-16T18:39:16Z) - The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivatives reconstruction and local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Robust, Accurate Stochastic Optimization for Variational Inference [68.83746081733464]
We show that common optimization methods lead to poor variational approximations if the problem is moderately large.
Motivated by these findings, we develop a more robust and accurate optimization framework by viewing the underlying algorithm as producing a Markov chain.
arXiv Detail & Related papers (2020-09-01T19:12:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.