On the strong stability of ergodic iterations
- URL: http://arxiv.org/abs/2304.04657v4
- Date: Mon, 5 Feb 2024 11:54:01 GMT
- Title: On the strong stability of ergodic iterations
- Authors: László Györfi, Attila Lovas, Miklós Rásonyi
- Abstract summary: We revisit processes generated by iterated random functions driven by a stationary and ergodic sequence.
New results are deduced for Langevin-type iterations with dependent noise and for multitype branching processes.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit processes generated by iterated random functions driven by a
stationary and ergodic sequence. Such a process is called strongly stable if
there exists a random initialization for which the process is stationary and
ergodic, and if, for any other initialization, the difference between the two
processes converges to zero almost surely. Under some mild conditions on the
corresponding recursive map, without any condition on the driving sequence, we
show the strong stability of iterations. Several applications are surveyed such
as generalized autoregression and queuing. Furthermore, new results are deduced
for Langevin-type iterations with dependent noise and for multitype branching
processes.
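
In symbols, this is the classical iterated-random-function scheme; a minimal rendering of the recursion and of strong stability, with notation ($F$, $(Z_n)$, $X_n^*$) chosen here for illustration rather than taken from the paper:

```latex
% Iterated random function driven by a stationary, ergodic sequence (Z_n):
\[
  X_{n+1} = F(X_n, Z_{n+1}), \qquad n = 0, 1, 2, \dots
\]
% Strong stability: there is a random initialization X_0^* making (X_n^*)
% stationary and ergodic, and for every other initialization X_0,
\[
  \lim_{n \to \infty} \bigl\| X_n - X_n^{*} \bigr\| = 0
  \quad \text{almost surely.}
\]
```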
Related papers
- On the asymptotic behaviour of stochastic processes, with applications to supermartingale convergence, Dvoretzky's approximation theorem, and stochastic quasi-Fejér monotonicity [0.0]
We prove a novel and general result on the behavior of processes which conform to a certain relaxed supermartingale condition.
We derive new quantitative versions of well-known concepts and theorems from stochastic approximation.
We conclude by illustrating the breadth of potential applications with a brief discussion of a variety of other well-known iterative procedures from stochastic approximation.
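
For orientation, the prototype of such a relaxed supermartingale condition is the classical Robbins–Siegmund lemma, stated here as standard background rather than as the paper's exact hypothesis:

```latex
% Robbins--Siegmund: for nonnegative adapted (X_n), (a_n), (b_n) with
% sum a_n < infinity and sum b_n < infinity almost surely,
\[
  \mathbb{E}\bigl[ X_{n+1} \mid \mathcal{F}_n \bigr]
  \le (1 + a_n)\, X_n + b_n
  \quad \Longrightarrow \quad
  X_n \to X_\infty < \infty \ \text{a.s.}
\]
```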
arXiv Detail & Related papers (2025-04-17T13:11:26Z)
- Restoration-Degradation Beyond Linear Diffusions: A Non-Asymptotic Analysis For DDIM-Type Samplers [90.45898746733397]
We develop a framework for non-asymptotic analysis of deterministic samplers used for diffusion generative modeling.
We show that one step along the probability flow ODE can be expressed as two steps: 1) a restoration step that runs ascent on the conditional log-likelihood at some infinitesimally previous time, and 2) a degradation step that runs the forward process using noise pointing back towards the current gradient.
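
For context, a common form of the probability flow ODE whose one-step discretization is being decomposed (textbook notation from the score-based SDE literature; the restoration/degradation reading is the paper's contribution):

```latex
% Forward (noising) SDE:  dx = f(x, t) dt + g(t) dW_t.
% Associated probability flow ODE, whose one-step discretization the paper
% reads as restoration (score ascent) followed by degradation (re-noising):
\[
  \frac{\mathrm{d}x}{\mathrm{d}t}
  = f(x, t) - \tfrac{1}{2}\, g(t)^{2}\, \nabla_{x} \log p_{t}(x).
\]
```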
arXiv Detail & Related papers (2023-03-06T18:59:19Z)
- Numerically Stable Sparse Gaussian Processes via Minimum Separation using Cover Trees [57.67528738886731]
We study the numerical stability of scalable sparse approximations based on inducing points.
For low-dimensional tasks such as geospatial modeling, we propose an automated method for computing inducing points satisfying these conditions.
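
A minimal sketch of selecting inducing points with a guaranteed minimum separation; this brute-force greedy pass is a stand-in for the paper's cover-tree construction, and the function name and threshold rule are illustrative assumptions:

```python
import numpy as np

def greedy_min_separation(X, min_dist):
    """Pick inducing points from data X so that every kept pair is at
    least min_dist apart. Brute-force stand-in for a cover-tree-based
    selection: O(n * m) instead of tree-accelerated."""
    inducing = [X[0]]
    for x in X[1:]:
        dists = np.linalg.norm(np.asarray(inducing) - x, axis=1)
        if dists.min() >= min_dist:  # keep only well-separated points
            inducing.append(x)
    return np.asarray(inducing)

# Usage: spread inducing points over a toy 2-D dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 2))
Z = greedy_min_separation(X, min_dist=0.5)
print(Z.shape)  # m << 500 points, pairwise distance >= 0.5
```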
arXiv Detail & Related papers (2022-10-14T15:20:17Z)
- Boosting the Confidence of Generalization for $L_2$-Stable Randomized Learning Algorithms [41.082982732100696]
We show that a properly designed subbagging process leads to near-tight exponential generalization bounds over both data and algorithm.
We further derive generic results to improve high-probability generalization bounds for convex or non-convex problems with naturally decaying learning rates.
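
A hedged sketch of the subbagging scheme itself (subsample without replacement, train, aggregate); the least-squares base learner and mean aggregation are toy choices of ours, not the paper's construction:

```python
import numpy as np

def subbagging_predict(X_train, y_train, X_test, fit, k, m, seed=0):
    """Subbagging: train m base learners on random size-k subsamples
    drawn without replacement, then average their predictions.
    `fit` is any learner returning a predict(X) callable."""
    rng = np.random.default_rng(seed)
    preds = []
    for _ in range(m):
        idx = rng.choice(len(X_train), size=k, replace=False)
        model = fit(X_train[idx], y_train[idx])
        preds.append(model(X_test))
    return np.mean(preds, axis=0)

# Toy usage with a least-squares base learner.
fit_ls = lambda X, y: (lambda Xt, w=np.linalg.lstsq(X, y, rcond=None)[0]: Xt @ w)
X = np.random.default_rng(1).normal(size=(200, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + 0.1 * np.random.default_rng(2).normal(size=200)
print(subbagging_predict(X, y, X[:5], fit_ls, k=50, m=25))
```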
arXiv Detail & Related papers (2022-06-08T12:14:01Z)
- End-to-end symbolic regression with transformers [20.172752966322214]
Symbolic regression is a difficult task which usually involves a two-step procedure: predicting the skeleton of the expression, then fitting its numerical constants.
We show that an end-to-end model, in which a Transformer predicts the full expression with constants included, approaches the performance of skeleton-based methods while being much faster.
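
To make the two-step baseline concrete, here is a toy version of its second step, fitting skeleton constants by non-convex least squares; the skeleton, data, and optimizer choice are assumptions of ours:

```python
import numpy as np
from scipy.optimize import minimize

# Step 1 (not shown): a model predicts a skeleton, e.g. f(x) = c0 * sin(c1 * x).
# Step 2: fit the numerical constants by non-convex least squares.
def fit_constants(x, y):
    skeleton = lambda c, x: c[0] * np.sin(c[1] * x)
    loss = lambda c: np.mean((skeleton(c, x) - y) ** 2)
    return minimize(loss, x0=np.ones(2), method="Nelder-Mead").x

x = np.linspace(0, 3, 100)
y = 2.0 * np.sin(1.5 * x)
print(fit_constants(x, y))  # ideally ~ [2.0, 1.5]; the loss is non-convex,
                            # so a local minimum is possible. An end-to-end
                            # Transformer would emit the constants directly.
```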
arXiv Detail & Related papers (2022-04-22T06:55:43Z)
- Improved Convergence Rate of Stochastic Gradient Langevin Dynamics with Variance Reduction and its Application to Optimization [50.83356836818667]
Stochastic gradient Langevin dynamics is one of the most fundamental algorithms for solving non-convex optimization problems.
In this paper, we study two variants of this kind, namely the Variance Reduced Langevin Dynamics and the Recursive Gradient Langevin Dynamics.
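
A minimal sketch of a variance-reduced Langevin step in the SVRG style; the step size, epoch length, and toy target below are illustrative, not the paper's exact algorithm or schedule:

```python
import numpy as np

rng = np.random.default_rng(0)

def svrg_langevin(grads, x0, eta, n_epochs, inner):
    """SVRG-style variance-reduced Langevin dynamics on a finite sum
    U(x) = (1/n) * sum_i U_i(x), where `grads` holds the per-component
    gradient functions grad U_i."""
    x = x0
    n = len(grads)
    for _ in range(n_epochs):
        x_ref = x.copy()
        full = np.mean([g(x_ref) for g in grads], axis=0)   # anchor gradient
        for _ in range(inner):
            i = rng.integers(n)
            v = grads[i](x) - grads[i](x_ref) + full        # VR estimator
            x = x - eta * v + np.sqrt(2 * eta) * rng.normal(size=x.shape)
    return x

# Toy target: Gaussian potential U_i(x) = 0.5 * (x - mu_i)^2.
mus = [np.array([m]) for m in (-1.0, 0.0, 1.0)]
grads = [lambda x, m=m: x - m for m in mus]
print(svrg_langevin(grads, np.zeros(1), eta=0.05, n_epochs=50, inner=10))
```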
arXiv Detail & Related papers (2022-03-30T11:39:00Z)
- Polynomial convergence of iterations of certain random operators in Hilbert space [2.732936573198251]
We study the convergence of random iterative sequences of a family of operators on infinite-dimensional spaces, inspired by the SGD algorithm.
We demonstrate that the convergence rate depends on the initial state, while randomness plays a role only in the choice of the best constant factor.
arXiv Detail & Related papers (2022-02-04T17:48:29Z)
- Better Regularization for Sequential Decision Spaces: Fast Convergence Rates for Nash, Correlated, and Team Equilibria [121.36609493711292]
We study the application of iterative first-order methods to the problem of computing equilibria of large-scale two-player extensive-form games.
By instantiating first-order methods with our regularizers, we develop the first accelerated first-order methods for computing correlated equilibria and ex-ante coordinated team equilibria.
arXiv Detail & Related papers (2021-05-27T06:10:24Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Adaptive Sequential SAA for Solving Two-stage Stochastic Linear Programs [1.6181085766811525]
We present adaptive sequential SAA (sample average approximation) algorithms to solve large-scale two-stage linear programs.
The proposed algorithm can be stopped in finite time to return a solution endowed with a probabilistic guarantee on quality.
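
A hedged sketch of the sequential-SAA control flow, with a one-stage toy problem standing in for a two-stage linear program; the stopping rule, growth factor, and candidate grid below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)

def sequential_saa(sample_cost, candidates, eps, n0=50, growth=2, max_n=100_000):
    """Sequential SAA sketch: solve sample-average problems of growing
    size until a crude confidence half-width at the incumbent solution
    falls below eps, then stop and return that solution."""
    n = n0
    while True:
        xi = rng.exponential(scale=1.0, size=n)   # demand scenarios
        costs = np.array([sample_cost(x, xi) for x in candidates])  # (k, n)
        means = costs.mean(axis=1)
        best = candidates[means.argmin()]
        halfwidth = 1.96 * costs[means.argmin()].std(ddof=1) / np.sqrt(n)
        if halfwidth <= eps or n >= max_n:
            return best, n
        n *= growth                               # resample at a larger size

# Toy newsvendor: order x at unit cost 1, sell at 3, demand ~ Exp(1).
cost = lambda x, d: 1.0 * x - 3.0 * np.minimum(x, d)
xs = np.linspace(0.0, 5.0, 51)
print(sequential_saa(cost, xs, eps=0.05))
```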
arXiv Detail & Related papers (2020-12-07T14:58:16Z)
- ROOT-SGD: Sharp Nonasymptotics and Near-Optimal Asymptotics in a Single Algorithm [71.13558000599839]
We study the problem of solving strongly convex and smooth unconstrained optimization problems using first-order algorithms.
We devise a novel algorithm, referred to as Recursive One-Over-T SGD (ROOT-SGD), based on an easily implementable recursive averaging of past stochastic gradients.
We prove that it simultaneously achieves state-of-the-art performance in both a finite-sample, nonasymptotic sense and an asymptotic sense.
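
A sketch of recursive one-over-t averaging of past stochastic gradients in the spirit of ROOT-SGD; the estimator below is one natural form and may differ from the paper's exact weighting or bias correction:

```python
import numpy as np

def root_sgd(grad, x0, eta, T, rng):
    """One plausible recursive 1/t gradient average:
      v_t = g(x_{t-1}; xi_t) + (1 - 1/t) * (v_{t-1} - g(x_{t-2}; xi_t))
      x_t = x_{t-1} - eta * v_t
    The same noise sample xi_t is used at both query points."""
    x_prev, x = x0.copy(), x0.copy()
    v = np.zeros_like(x0)
    for t in range(1, T + 1):
        xi = rng.normal(size=x0.shape)
        g_new, g_old = grad(x, xi), grad(x_prev, xi)
        v = g_new + (1 - 1 / t) * (v - g_old)   # recursive 1/t averaging
        x_prev, x = x, x - eta * v
    return x

# Toy strongly convex objective f(x) = 0.5 * ||x||^2 with additive noise.
rng = np.random.default_rng(0)
grad = lambda x, xi: x + 0.1 * xi
print(root_sgd(grad, np.ones(3), eta=0.1, T=2000, rng=rng))  # near zero
```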
arXiv Detail & Related papers (2020-08-28T14:46:56Z)
- Stochastic Approximation with Markov Noise: Analysis and applications in reinforcement learning [0.0]
We present for the first time an asymptotic convergence analysis of two time-scale stochastic approximation driven by "controlled" Markov noise.
We analyze the behavior of our framework by relating it to limiting differential inclusions in both time scales.
We obtain the first informative error bounds on function approximation for the policy evaluation algorithm.
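
A minimal two time-scale stochastic approximation loop; i.i.d. noise stands in for the paper's "controlled" Markov noise, and the step-size exponents and toy fixed point are illustrative:

```python
import numpy as np

def two_timescale_sa(h, g, w0, theta0, T, rng):
    """Two time-scale stochastic approximation: the fast iterate w
    tracks the slow iterate theta because the slow step size b_n decays
    faster than the fast one a_n (b_n / a_n -> 0)."""
    w, theta = w0.copy(), theta0.copy()
    for n in range(1, T + 1):
        a_n = 1.0 / n ** 0.6                      # fast step size
        b_n = 1.0 / n                             # slow step size
        xi = rng.normal(size=w.shape)
        w = w + a_n * (h(w, theta) + xi)          # fast loop
        theta = theta + b_n * (g(w, theta) + xi)  # slow loop
    return w, theta

# Toy coupled fixed point: w* = theta and theta* = 0.
rng = np.random.default_rng(0)
h = lambda w, th: th - w
g = lambda w, th: -w
print(two_timescale_sa(h, g, np.ones(1), np.ones(1), 50_000, rng))
```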
arXiv Detail & Related papers (2020-04-08T03:59:21Z)