A Concentration Bound for Distributed Stochastic Approximation
- URL: http://arxiv.org/abs/2210.04253v1
- Date: Sun, 9 Oct 2022 13:00:32 GMT
- Title: A Concentration Bound for Distributed Stochastic Approximation
- Authors: Harsh Dolhare and Vivek Borkar
- Abstract summary: We revisit the classical model of Tsitsiklis, Bertsekas and Athans for distributed stochastic approximation with consensus.
The main result is an analysis of this scheme using the ODE approach, leading to a high-probability bound for the tracking error between suitably interpolated iterates and the limiting differential equation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We revisit the classical model of Tsitsiklis, Bertsekas and Athans for
distributed stochastic approximation with consensus. The main result is an
analysis of this scheme using the ODE approach to stochastic approximation,
leading to a high probability bound for the tracking error between suitably
interpolated iterates and the limiting differential equation. Several future
directions will also be highlighted.
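As background, the Tsitsiklis-Bertsekas-Athans scheme interleaves a consensus averaging step with a local stochastic approximation step. In generic notation from the distributed stochastic approximation literature (the symbols below are illustrative, not necessarily the paper's exact conventions), agent $i$ updates

$$x_i(n+1) = \sum_{j=1}^{N} p_{ij}\, x_j(n) + a(n)\big(h_i(x_i(n)) + M_i(n+1)\big),$$

where $P = [p_{ij}]$ is a stochastic matrix encoding the communication graph, $a(n)$ is a decreasing stepsize sequence, and $M_i(n+1)$ is martingale difference noise. The ODE approach compares suitably interpolated iterates against the trajectory of a limiting equation of the form $\dot{x}(t) = \bar{h}(x(t))$, which is the tracking error the abstract refers to.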
Related papers
- Improving Probabilistic Diffusion Models With Optimal Covariance Matching [27.2761325416843]
We introduce a novel method for learning the diagonal covariances.
We show how our method can substantially enhance the sampling efficiency, recall rate and likelihood of both diffusion models and latent diffusion models.
arXiv Detail & Related papers (2024-06-16T05:47:12Z) - Analytical Approximation of the ELBO Gradient in the Context of the Clutter Problem [0.0]
We propose an analytical solution for approximating the gradient of the Evidence Lower Bound (ELBO) in variational inference problems.
The proposed method demonstrates good accuracy and rate of convergence together with linear computational complexity.
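For orientation (standard definitions, not specific to this paper): for a model $p(x, z)$ with variational family $q_\phi(z)$, the quantity whose gradient is being approximated is

$$\mathcal{L}(\phi) = \mathbb{E}_{q_\phi(z)}\big[\log p(x, z) - \log q_\phi(z)\big],$$

and variational inference ascends $\nabla_\phi \mathcal{L}(\phi)$, which is generally intractable and must be approximated.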
arXiv Detail & Related papers (2024-04-16T13:19:46Z) - Differentiating Metropolis-Hastings to Optimize Intractable Densities [51.16801956665228]
We develop an algorithm for automatic differentiation of Metropolis-Hastings samplers.
We apply gradient-based optimization to objectives expressed as expectations over intractable target densities.
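For context, the object being differentiated is the standard Metropolis-Hastings chain. Below is a minimal random-walk sketch of that chain in Python; it illustrates only the basic accept/reject loop, not the paper's differentiable estimator, and names such as `log_target` are illustrative assumptions.

```python
import numpy as np

def metropolis_hastings(log_target, x0, n_steps=5000, step=0.5, rng=None):
    """Random-walk Metropolis-Hastings targeting exp(log_target) (unnormalized)."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.asarray(x0, dtype=float)
    lp = log_target(x)
    samples = np.empty((n_steps,) + x.shape)
    for i in range(n_steps):
        prop = x + step * rng.standard_normal(x.shape)   # symmetric proposal
        lp_prop = log_target(prop)
        # Accept with probability min(1, p(prop)/p(x)); proposal terms cancel.
        if np.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples[i] = x
    return samples

# Monte Carlo estimate of an expectation over an intractable target.
log_target = lambda x: -0.5 * np.sum(x**2)    # stand-in unnormalized log-density
draws = metropolis_hastings(log_target, x0=np.zeros(2))
print(draws.mean(axis=0))                      # estimate of E_p[x]
```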
arXiv Detail & Related papers (2023-06-13T17:56:02Z) - A Geometric Perspective on Diffusion Models [57.27857591493788]
We inspect the ODE-based sampling of a popular variance-exploding SDE.
We establish a theoretical relationship between the optimal ODE-based sampling and the classic mean-shift (mode-seeking) algorithm.
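For reference, for a variance-exploding SDE with noise scale $\sigma(t)$, the deterministic sampler follows the probability flow ODE (standard form from the score-based SDE literature, with $\nabla_x \log p_t$ the score of the marginal $p_t$):

$$\frac{dx}{dt} = -\frac{1}{2}\,\frac{d\,\sigma^2(t)}{dt}\,\nabla_x \log p_t(x).$$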
arXiv Detail & Related papers (2023-05-31T15:33:16Z) - Error Bounds for Flow Matching Methods [38.9898500163582]
Flow matching methods approximate a flow between two arbitrary probability distributions.
We present error bounds for the flow matching procedure using fully deterministic sampling, assuming an $L^2$ bound on the approximation error and a certain regularity on the data distributions.
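In generic notation (not necessarily the paper's): the learned field $\hat v_t$ defines a flow $\psi_t$ through the ODE below, and an $L^2$ assumption of the kind mentioned bounds its deviation from the true field $v_t$ along the path:

$$\frac{d}{dt}\psi_t(x) = \hat v_t(\psi_t(x)), \qquad \psi_0(x) = x, \qquad \int_0^1 \mathbb{E}_{p_t}\!\big[\|v_t - \hat v_t\|^2\big]\,dt \le \varepsilon^2.$$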
arXiv Detail & Related papers (2023-05-26T12:13:53Z) - The Past Does Matter: Correlation of Subsequent States in Trajectory Predictions of Gaussian Process Models [0.7734726150561089]
We consider approximations of the model's output and trajectory distribution.
We show that previous work on uncertainty propagation incorrectly included an independence assumption between subsequent states of the predicted trajectories.
arXiv Detail & Related papers (2022-11-20T22:19:39Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
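As a generic sketch of the setting (not the paper's variance-reduced scheme): given a contractive operator $F$ with fixed point $\theta^\star = F(\theta^\star)$ and only noisy evaluations of $F$, the basic stochastic approximation iterate is

$$\theta_{n+1} = (1 - \lambda_n)\,\theta_n + \lambda_n\big(F(\theta_n) + \varepsilon_{n+1}\big),$$

with stepsizes $\lambda_n \in (0,1]$ and zero-mean noise $\varepsilon_{n+1}$; variance reduction replaces the single noisy evaluation with a recentered average.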
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Nonconvex Stochastic Scaled-Gradient Descent and Generalized Eigenvector Problems [98.34292831923335]
Motivated by the problem of online correlation analysis, we propose the Stochastic Scaled-Gradient Descent (SSD) algorithm.
We bring these ideas together in an application to online correlation analysis, deriving for the first time an optimal one-time-scale algorithm with an explicit rate of local convergence to normality.
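For intuition only (a generic scaled-gradient sketch, not the paper's exact recursion): the top generalized eigenvector of a pencil $(A, B)$ maximizes the Rayleigh quotient $\rho(x) = x^\top A x / x^\top B x$, whose gradient $\tfrac{2}{x^\top B x}\big(Ax - \rho(x)\,Bx\big)$ suggests stochastic updates of the form

$$x_{n+1} = x_n + a_n\big(A_n x_n - \rho(x_n)\, B_n x_n\big),$$

where $A_n$ and $B_n$ are noisy samples of $A$ and $B$.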
arXiv Detail & Related papers (2021-12-29T18:46:52Z) - Scalable Variational Gaussian Processes via Harmonic Kernel Decomposition [54.07797071198249]
We introduce a new scalable variational Gaussian process approximation which provides a high fidelity approximation while retaining general applicability.
We demonstrate that, on a range of regression and classification problems, our approach can exploit input space symmetries such as translations and reflections.
Notably, our approach achieves state-of-the-art results on CIFAR-10 among pure GP models.
arXiv Detail & Related papers (2021-06-10T18:17:57Z) - Stochastic Normalizing Flows [52.92110730286403]
We introduce stochastic normalizing flows for maximum likelihood estimation and variational inference (VI) using stochastic differential equations (SDEs).
Using the theory of rough paths, the underlying Brownian motion is treated as a latent variable and approximated, enabling efficient training of neural SDEs.
These SDEs can be used for constructing efficient chains to sample from the underlying distribution of a given dataset.
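To make the sampling idea concrete, here is a minimal Euler-Maruyama simulator in Python; the drift and diffusion functions stand in for learned networks, so this is an illustrative sketch rather than the paper's rough-path construction.

```python
import numpy as np

def euler_maruyama(drift, diffusion, x0, t1=1.0, n_steps=200, rng=None):
    """Simulate dX = drift(X, t) dt + diffusion(X, t) dW with Euler-Maruyama."""
    rng = np.random.default_rng() if rng is None else rng
    dt = t1 / n_steps
    x = np.array(x0, dtype=float)
    for k in range(n_steps):
        t = k * dt
        dw = np.sqrt(dt) * rng.standard_normal(x.shape)  # Brownian increment
        x = x + drift(x, t) * dt + diffusion(x, t) * dw
    return x

# Toy "neural" SDE: fixed functions standing in for learned drift/diffusion nets.
mu = lambda x, t: -x          # drift (would be a neural network)
sigma = lambda x, t: 0.5      # diffusion (would be a neural network)
sample = euler_maruyama(mu, sigma, x0=np.zeros(3))
print(sample)
```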
arXiv Detail & Related papers (2020-02-21T20:47:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.