Random Inverse Problems Over Graphs: Decentralized Online Learning
- URL: http://arxiv.org/abs/2303.11789v6
- Date: Fri, 12 Jul 2024 00:17:04 GMT
- Title: Random Inverse Problems Over Graphs: Decentralized Online Learning
- Authors: Tao Li, Xiwei Zhang
- Abstract summary: We establish a framework of distributed random inverse problems over network graphs with online measurements.
We develop the L2-asymptotic stability theory in infinite-dimensional Hilbert spaces.
We propose a decentralized online learning algorithm in RKHS based on non-stationary and non-independent online data.
- Score: 4.5692679976952215
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We establish a framework of distributed random inverse problems over network graphs with online measurements, and propose a decentralized online learning algorithm. This unifies the distributed parameter estimation in Hilbert spaces and the least mean square problem in reproducing kernel Hilbert spaces (RKHS-LMS). We transform the convergence of the algorithm into the asymptotic stability of a class of inhomogeneous random difference equations in Hilbert spaces with L2-bounded martingale difference terms and develop the L2-asymptotic stability theory in Hilbert spaces. It is shown that if the network graph is connected and the sequence of forward operators satisfies the infinite-dimensional spatio-temporal persistence of excitation condition, then the estimates of all nodes are mean square and almost surely strongly consistent. Moreover, we propose a decentralized online learning algorithm in RKHS based on non-stationary and non-independent online data streams, and prove that the algorithm is mean square and almost surely strongly consistent if the operators induced by the random input data satisfy the infinite-dimensional spatio-temporal persistence of excitation condition.
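The abstract's consensus-plus-innovation structure can be illustrated with a toy sketch: each node keeps its estimate as a kernel expansion, averages with its graph neighbours, then takes an LMS-style gradient step on its own online measurement. All names and the merging scheme here are illustrative assumptions, not the authors' algorithm.

```python
import math

def gaussian_kernel(x, y, bw=1.0):
    """Gaussian RBF kernel on scalars (an illustrative kernel choice)."""
    return math.exp(-(x - y) ** 2 / (2 * bw * bw))

def decentralized_rkhs_lms(adjacency, data, steps, step_size=0.5, bw=1.0):
    """Schematic decentralized online learning in an RKHS.

    Each node's estimate is a kernel expansion sum_j c_j k(x_j, .).
    Per step: (1) consensus, averaging coefficients with neighbours;
    (2) innovation, an LMS gradient step on the node's own measurement.
    This is only a sketch of the structure described in the abstract.
    """
    n = len(adjacency)
    expansions = [[] for _ in range(n)]  # (center, coefficient) pairs per node

    def predict(node, x):
        return sum(c * gaussian_kernel(xc, x, bw) for xc, c in expansions[node])

    for t in range(steps):
        # consensus: each node averages the expansions of itself and its neighbours
        new_expansions = []
        for i in range(n):
            neigh = [j for j in range(n) if adjacency[i][j]] + [i]
            merged = []
            for j in neigh:
                merged += [(xc, c / len(neigh)) for xc, c in expansions[j]]
            new_expansions.append(merged)
        expansions = new_expansions
        # innovation: kernel gradient step on the node's online measurement
        for i in range(n):
            x_t, y_t = data[i][t]
            err = y_t - predict(i, x_t)
            expansions[i].append((x_t, step_size * err))
    return predict
```

With identical noiseless measurements at every node, each node's estimate converges geometrically to the measured value at the sampled input.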
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions by neural networks.
In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
arXiv Detail & Related papers (2024-04-29T15:04:07Z) - Convergence Conditions of Online Regularized Statistical Learning in Reproducing Kernel Hilbert Space With Non-Stationary Data [4.5692679976952215]
We study the convergence of regularized learning algorithms in the reproducing kernel Hilbert space (RKHS) with dependent and non-stationary online data streams.
For independent and non-identically distributed data streams, the algorithm achieves the mean square consistency.
arXiv Detail & Related papers (2024-04-04T05:35:59Z) - Kullback-Leibler and Rényi divergences in reproducing kernel Hilbert
space and Gaussian process settings [0.0]
We present formulations for regularized Kullback-Leibler and Rényi divergences via the Alpha Log-Determinant (Log-Det) divergences.
For characteristic kernels, the first setting leads to divergences between arbitrary Borel probability measures on a complete, separable metric space.
We show that the Alpha Log-Det divergences are continuous in the Hilbert-Schmidt norm, which enables us to apply laws of large numbers for Hilbert space-valued random variables.
arXiv Detail & Related papers (2022-07-18T06:40:46Z) - Decentralized Online Regularized Learning Over Random Time-Varying Graphs [4.065547532041163]
We develop the online regularized linear regression algorithm over random time-varying graphs.
We prove that the regret upper bound is $O(T^{1-\tau}\ln T)$, where $\tau \in (0.5, 1)$ is a constant depending on the algorithm gains.
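Sublinearity of the stated $O(T^{1-\tau}\ln T)$ bound is easy to check numerically: for any $\tau \in (0.5, 1)$ the bound divided by the horizon $T$ vanishes as $T$ grows. The constant factor below is an arbitrary illustrative choice.

```python
import math

def regret_bound(T, tau, c=1.0):
    """c * T^(1-tau) * ln T, the regret upper bound quoted above;
    the constant c is a hypothetical placeholder."""
    return c * T ** (1 - tau) * math.log(T)

# Average regret (bound / T) shrinks with the horizon for tau in (0.5, 1),
# so the bound is sublinear in T.
for tau in (0.6, 0.75, 0.9):
    print([regret_bound(T, tau) / T for T in (10**3, 10**5, 10**7)])
```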
arXiv Detail & Related papers (2022-06-07T12:55:08Z) - First-Order Algorithms for Nonlinear Generalized Nash Equilibrium
Problems [88.58409977434269]
We consider the problem of computing an equilibrium in a class of nonlinear generalized Nash equilibrium problems (NGNEPs)
Our contribution is to provide two simple first-order algorithmic frameworks based on the quadratic penalty method and the augmented Lagrangian method.
We provide nonasymptotic theoretical guarantees for these algorithms.
arXiv Detail & Related papers (2022-04-07T00:11:05Z) - Polynomial convergence of iterations of certain random operators in
Hilbert space [2.732936573198251]
We study the convergence of a random iterative sequence of a family of operators on infinite-dimensional spaces, inspired by the SGD algorithm.
We demonstrate that its convergence rate depends on the initial state, while randomness plays a role only in the choice of the best constant factor.
arXiv Detail & Related papers (2022-02-04T17:48:29Z) - Global convergence of ResNets: From finite to infinite width using
linear parameterization [0.0]
We study Residual Networks (ResNets) in which the residual block has linear parametrization while still being nonlinear.
In this limit, we prove a local Polyak-Łojasiewicz inequality, retrieving the lazy regime.
Our analysis leads to a practical and quantified recipe.
arXiv Detail & Related papers (2021-12-10T13:38:08Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
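The kernel-based policy-evaluation idea summarized above can be sketched as a toy TD(0) update whose value estimate lives in an RKHS; the names and the unregularized update are assumptions for illustration, not the paper's analysed estimator.

```python
def kernel_td0(transitions, kernel, alpha=0.1, gamma=0.9):
    """Toy kernel temporal-difference (TD(0)) policy evaluation.

    The value estimate is a kernel expansion V(s) = sum_i c_i k(s_i, s).
    Each observed transition (s, r, s') adds the scaled TD error as a
    new coefficient on the feature k(s, .). Schematic only: the paper
    studies a regularized variant with explicit spectral error bounds.
    """
    centers, coefs = [], []

    def value(s):
        return sum(c * kernel(sc, s) for sc, c in zip(centers, coefs))

    for s, r, s_next in transitions:
        td_err = r + gamma * value(s_next) - value(s)  # TD(0) innovation
        centers.append(s)
        coefs.append(alpha * td_err)
    return value
```

On a one-state reward process with reward 1 and discount 0.9, the estimate approaches the true value 1/(1-0.9) = 10 as transitions accumulate.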
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Lifting the Convex Conjugate in Lagrangian Relaxations: A Tractable
Approach for Continuous Markov Random Fields [53.31927549039624]
We show that a piecewise discretization preserves contrast better than existing discretizations.
We apply this theory to the problem of matching two images.
arXiv Detail & Related papers (2021-07-13T12:31:06Z) - On the Convergence of Stochastic Extragradient for Bilinear Games with
Restarted Iteration Averaging [96.13485146617322]
We present an analysis of the stochastic ExtraGradient (SEG) method with constant step size, and present variations of the method that yield favorable convergence.
We prove that when augmented with averaging, SEG provably converges to the Nash equilibrium, and such a rate is provably accelerated by incorporating a scheduled restarting procedure.
arXiv Detail & Related papers (2021-06-30T17:51:36Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.