Online Regularized Learning Algorithms in RKHS with $β$- and $φ$-Mixing Sequences
- URL: http://arxiv.org/abs/2507.05929v1
- Date: Tue, 08 Jul 2025 12:25:04 GMT
- Title: Online Regularized Learning Algorithms in RKHS with $β$- and $φ$-Mixing Sequences
- Authors: Priyanka Roy, Susanne Saminger-Platz
- Abstract summary: We study an online regularized learning algorithm in a reproducing kernel Hilbert space (RKHS). We analyze a strictly stationary Markov chain, where the dependence structure is characterized by the \(\phi\)- and \(\beta\)-mixing coefficients.
- Score: 1.1510009152620668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we study an online regularized learning algorithm in a reproducing kernel Hilbert space (RKHS) based on a class of dependent processes. We choose such a process where the degree of dependence is measured by mixing coefficients. As a representative example, we analyze a strictly stationary Markov chain, where the dependence structure is characterized by the \(\phi\)- and \(\beta\)-mixing coefficients. Under these assumptions, we derive probabilistic upper bounds as well as convergence rates for both the exponential and polynomial decay of the mixing coefficients.
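To make the setting concrete, here is a minimal sketch (not the authors' code) of an online regularized update in an RKHS, \(f_{t+1} = f_t - \eta_t[(f_t(x_t) - y_t)K(x_t,\cdot) + \lambda f_t]\), with an illustrative Gaussian kernel, step size \(\eta_t = \eta_0/\sqrt{t}\), and a synthetic AR(1) input chain standing in for a mixing process:

```python
import numpy as np

def gaussian_kernel(x, y, sigma=0.5):
    """Gaussian (RBF) kernel on R^d."""
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def online_regularized_learning(stream, lam=0.1, eta0=0.5, kernel=gaussian_kernel):
    """One pass of the online regularized update
        f_{t+1} = (1 - eta_t * lam) f_t - eta_t (f_t(x_t) - y_t) K(x_t, .),
    keeping f_t as a kernel expansion over the points seen so far."""
    points, coefs = [], []
    for t, (x, y) in enumerate(stream, start=1):
        eta = eta0 / np.sqrt(t)                         # decaying step size
        fx = sum(c * kernel(p, x) for p, c in zip(points, coefs))
        coefs = [(1.0 - eta * lam) * c for c in coefs]  # Tikhonov shrinkage
        points.append(x)
        coefs.append(-eta * (fx - y))                   # new expansion term
    return points, coefs

# Toy stream whose inputs form an AR(1) chain (geometrically mixing):
rng = np.random.default_rng(0)
xs = [np.zeros(1)]
for _ in range(300):
    xs.append(0.7 * xs[-1] + 0.3 * rng.normal(size=1))
stream = [(x, np.sin(3 * x[0]) + 0.1 * rng.normal()) for x in xs]
points, coefs = online_regularized_learning(stream)
f_hat = lambda z: sum(c * gaussian_kernel(p, z) for p, c in zip(points, coefs))
print(f_hat(np.array([0.2])), np.sin(0.6))  # estimate vs. target value
```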
Related papers
- Gaussian Processes and Reproducing Kernels: Connections and Equivalences [38.51130427120958]
This monograph studies the relations between two approaches using positive definite kernels: probabilistic methods using Gaussian processes, and non-probabilistic methods using reproducing kernel Hilbert spaces (RKHS). Both are widely studied and used in machine learning, statistics, and numerical analysis.
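One equivalence the monograph covers can be checked numerically: the Gaussian-process posterior mean with noise variance \(\sigma^2\) coincides with the kernel ridge regressor whose ridge parameter is \(\sigma^2\). A minimal sketch, assuming an RBF kernel and synthetic data (all choices ours):

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(40, 1))
y = np.sin(4 * X[:, 0]) + 0.1 * rng.normal(size=40)
Xs = np.linspace(-1, 1, 9)[:, None]
s2, sigma = 0.01, 0.3                 # noise variance = ridge parameter

def k(A, B):
    d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

# GP route: posterior mean k(x*, X) (K + s2 I)^{-1} y, written out by hand.
gp_mean = k(Xs, X) @ np.linalg.solve(k(X, X) + s2 * np.eye(len(X)), y)

# KRR route: scikit-learn's kernel ridge regression with ridge parameter s2.
krr = KernelRidge(alpha=s2, kernel="rbf", gamma=1 / (2 * sigma ** 2)).fit(X, y)

print(np.max(np.abs(gp_mean - krr.predict(Xs))))  # ~ 0: the two coincide
```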
arXiv Detail & Related papers (2025-06-18T12:08:18Z)
- Uncertainty quantification for Markov chains with application to temporal difference learning [63.49764856675643]
We develop novel high-dimensional concentration inequalities and Berry-Esseen bounds for vector- and matrix-valued functions of Markov chains. We analyze the TD learning algorithm, a widely used method for policy evaluation in reinforcement learning.
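For context, a minimal TD(0) sketch on a toy three-state Markov reward process (our own example; the paper's contribution is the concentration analysis of such iterates, not the algorithm itself):

```python
import numpy as np

rng = np.random.default_rng(2)
P = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.5, 0.3],
              [0.1, 0.3, 0.6]])          # transition matrix
r = np.array([1.0, 0.0, -1.0])           # reward per state
gamma = 0.9

V = np.zeros(3)
s = 0
for t in range(1, 50_001):
    s_next = rng.choice(3, p=P[s])
    td_error = r[s] + gamma * V[s_next] - V[s]
    V[s] += (1.0 / t ** 0.6) * td_error  # Robbins-Monro step size
    s = s_next

V_true = np.linalg.solve(np.eye(3) - gamma * P, r)  # exact value function
print(V, V_true)                         # TD iterate vs. exact solution
```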
arXiv Detail & Related papers (2025-02-19T15:33:55Z)
- Gradient Descent Algorithm in Hilbert Spaces under Stationary Markov Chains with $φ$- and $β$-Mixing [1.1510009152620668]
In this paper, we study a gradient descent algorithm operating in general Hilbert spaces under a strictly stationary Markov chain. Our analysis focuses on the mixing coefficients of the underlying process, specifically the $\phi$- and $\beta$-mixing coefficients.
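As an illustration of the central quantity (ours, using the standard reduction for finite-state stationary chains, \(\beta(n) = \sum_i \pi_i \, \|P^n(i,\cdot) - \pi\|_{TV}\)), the \(\beta\)-mixing coefficients of a two-state chain can be computed directly and decay geometrically at the rate of the second eigenvalue of \(P\):

```python
import numpy as np

# beta(n) for a two-state stationary Markov chain, computed as the
# pi-averaged total-variation distance between P^n(i, .) and pi.
P = np.array([[0.9, 0.1],
              [0.2, 0.8]])
evals, evecs = np.linalg.eig(P.T)
pi = np.real(evecs[:, np.isclose(evals, 1)][:, 0])
pi /= pi.sum()                           # stationary distribution (2/3, 1/3)

Pn = np.eye(2)
for n in range(1, 6):
    Pn = Pn @ P
    beta_n = float(pi @ (0.5 * np.abs(Pn - pi).sum(axis=1)))
    print(n, beta_n)                     # ~ C * 0.7**n (second eigenvalue of P)
```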
arXiv Detail & Related papers (2025-02-05T19:09:07Z)
- Adaptive Annealed Importance Sampling with Constant Rate Progress [68.8204255655161]
Annealed Importance Sampling (AIS) synthesizes weighted samples from an intractable distribution.
We propose the Constant Rate AIS algorithm and its efficient implementation for $\alpha$-divergences.
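For background, a minimal plain-AIS sketch (a fixed geometric schedule with random-walk Metropolis transitions, not the paper's constant-rate variant; target and tuning are ours):

```python
import numpy as np

rng = np.random.default_rng(3)

def log_p0(x):  return -0.5 * x ** 2          # N(0,1), up to a constant
def log_p1(x):  return np.logaddexp(-0.5 * (x - 3) ** 2, -0.5 * (x + 3) ** 2)
def log_pk(x, b):  return (1 - b) * log_p0(x) + b * log_p1(x)

betas = np.linspace(0.0, 1.0, 101)            # geometric annealing schedule
n = 2000
x = rng.normal(size=n)                        # exact draws from pi_0
logw = np.zeros(n)
for b_prev, b in zip(betas[:-1], betas[1:]):
    logw += log_pk(x, b) - log_pk(x, b_prev)  # incremental importance weights
    # one random-walk Metropolis step targeting pi_b
    prop = x + 0.5 * rng.normal(size=n)
    accept = np.log(rng.uniform(size=n)) < log_pk(prop, b) - log_pk(x, b)
    x = np.where(accept, prop, x)

# Normalizing-constant ratio Z_1/Z_0 (here ~2) and a weighted mean (~0):
print(np.exp(logw).mean(), np.average(x, weights=np.exp(logw - logw.max())))
```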
arXiv Detail & Related papers (2023-06-27T08:15:28Z)
- Non-Parametric Learning of Stochastic Differential Equations with Non-asymptotic Fast Rates of Convergence [65.63201894457404]
We propose a novel non-parametric learning paradigm for the identification of drift and diffusion coefficients of non-linear stochastic differential equations. The key idea essentially consists of fitting an RKHS-based approximation of the corresponding Fokker-Planck equation to such observations.
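As a much simpler stand-in for the paper's Fokker-Planck fit (not their method), one can estimate the drift of \(dX = b(X)\,dt + s\,dW\) by Nadaraya-Watson regression of the increments, \(b(x) \approx \mathbb{E}[X_{t+\Delta} - X_t \mid X_t = x]/\Delta\):

```python
import numpy as np

rng = np.random.default_rng(4)
dt, n, s = 0.01, 20_000, 0.5
b = lambda x: -x ** 3 + x                       # double-well drift (ground truth)

X = np.empty(n); X[0] = 0.0
for i in range(n - 1):                          # Euler-Maruyama simulation
    X[i + 1] = X[i] + b(X[i]) * dt + s * np.sqrt(dt) * rng.normal()

def drift_hat(x, h=0.2):
    """Kernel-weighted average of increments around x, divided by dt."""
    w = np.exp(-0.5 * ((X[:-1] - x) / h) ** 2)  # Gaussian weights
    return np.sum(w * (X[1:] - X[:-1])) / (dt * np.sum(w))

for x in (-0.5, 0.0, 0.5):
    print(x, drift_hat(x), b(x))                # estimate vs. true drift
```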
arXiv Detail & Related papers (2023-05-24T20:43:47Z)
- Ensemble Recognition in Reproducing Kernel Hilbert Spaces through Aggregated Measurements [0.0]
We study the problem of learning dynamical properties of ensemble systems from their collective behaviors using statistical approaches in reproducing kernel Hilbert space (RKHS).
We provide a framework to identify and cluster multiple ensemble systems through computing the maximum mean discrepancy (MMD) between their aggregated measurements in an RKHS.
We demonstrate that the proposed approaches can be extended to cluster multiple unknown ensembles in RKHS using their aggregated measurements.
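The MMD computation at the core of this framework is easy to sketch; below is the standard unbiased estimator of squared MMD between two sample sets, with an illustrative Gaussian kernel and synthetic "ensembles" (our choices):

```python
import numpy as np

def mmd2(X, Y, sigma=1.0):
    """Unbiased estimate of squared MMD with a Gaussian kernel."""
    def k(A, B):
        d2 = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    Kxx, Kyy, Kxy = k(X, X), k(Y, Y), k(X, Y)
    m, n = len(X), len(Y)
    return ((Kxx.sum() - np.trace(Kxx)) / (m * (m - 1))
            + (Kyy.sum() - np.trace(Kyy)) / (n * (n - 1))
            - 2 * Kxy.mean())

rng = np.random.default_rng(5)
A = rng.normal(0.0, 1.0, size=(300, 2))         # "ensemble" 1
B = rng.normal(0.5, 1.0, size=(300, 2))         # "ensemble" 2, shifted mean
A1, A2 = A[:150], A[150:]
print(mmd2(A1, A2), mmd2(A, B))                 # ~0 vs. clearly positive
```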
arXiv Detail & Related papers (2021-12-28T22:04:03Z)
- Nonlinear Independent Component Analysis for Continuous-Time Signals [85.59763606620938]
We study the classical problem of recovering a multidimensional source process from observations of mixtures of this process.
We show that this recovery is possible for many popular models of processes (up to order and monotone scaling of their coordinates) if the mixture is given by a sufficiently differentiable, invertible function.
arXiv Detail & Related papers (2021-02-04T20:28:44Z)
- Sampling in Combinatorial Spaces with SurVAE Flow Augmented MCMC [83.48593305367523]
Hybrid Monte Carlo is a powerful Markov Chain Monte Carlo method for sampling from complex continuous distributions.
We introduce a new approach based on augmenting Monte Carlo methods with SurVAE Flows to sample from discrete distributions.
We demonstrate the efficacy of our algorithm on a range of examples from statistics, computational physics and machine learning, and observe improvements compared to alternative algorithms.
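For reference, a minimal Hamiltonian Monte Carlo step with leapfrog integration, i.e. the continuous baseline the abstract mentions, not the SurVAE-augmented discrete sampler (target and tuning are ours):

```python
import numpy as np

rng = np.random.default_rng(6)

def grad_neg_logp(x):  return x                    # -grad log N(0, I)

def hmc_step(x, eps=0.1, L=20):
    """One HMC proposal targeting a standard normal, with accept/reject."""
    p = rng.normal(size=x.shape)
    x_new, p_new = x.copy(), p.copy()
    for _ in range(L):                             # leapfrog integration
        p_new -= 0.5 * eps * grad_neg_logp(x_new)
        x_new += eps * p_new
        p_new -= 0.5 * eps * grad_neg_logp(x_new)
    h_old = 0.5 * (x @ x + p @ p)                  # Hamiltonian before
    h_new = 0.5 * (x_new @ x_new + p_new @ p_new)  # Hamiltonian after
    return x_new if np.log(rng.uniform()) < h_old - h_new else x

x = np.zeros(2)
samples = []
for _ in range(5000):
    x = hmc_step(x)
    samples.append(x)
print(np.mean(samples, axis=0), np.cov(np.array(samples).T))  # ~0, ~I
```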
arXiv Detail & Related papers (2021-02-04T02:21:08Z)
- The Connection between Discrete- and Continuous-Time Descriptions of Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of 'invariance under coarse-graining'.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second or higher order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z)
- Convergence of Recursive Stochastic Algorithms using Wasserstein Divergence [4.688616907736838]
We show that convergence of a large family of constant stepsize RSAs can be understood using this framework.
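A toy instance (ours) of a constant-stepsize recursive stochastic algorithm: with a fixed stepsize the iterates converge not to a point but to a stationary distribution around it, which is exactly the kind of limit that Wasserstein-distance arguments capture:

```python
import numpy as np

# x_{k+1} = x_k - a*(x_k - b) + a*noise, run as many independent chains.
rng = np.random.default_rng(7)
a, b = 0.05, 2.0
x = np.zeros(10_000)                       # 10k independent chains
for _ in range(500):
    x = x - a * (x - b) + a * rng.normal(size=x.size)
print(x.mean(), x.std())                   # mean ~ b, spread ~ sqrt(a/(2-a))
```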
arXiv Detail & Related papers (2020-03-25T13:45:16Z)
- Distributed Learning with Dependent Samples [17.075804626858748]
We derive optimal learning rates for distributed kernel ridge regression under strong mixing sequences.
Our results extend the applicable range of distributed learning from i.i.d. samples to non-i.i.d. sequences.
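A minimal divide-and-conquer sketch of the scheme (standard distributed KRR; kernel, regularizer, and the AR(1) data are our illustrative choices): fit kernel ridge regression on disjoint blocks of the sequence and average the local predictors.

```python
import numpy as np

rng = np.random.default_rng(8)
n, m, lam, sigma = 600, 4, 1e-3, 0.3

x = np.zeros(n)
for i in range(n - 1):                            # AR(1): a mixing sequence
    x[i + 1] = 0.6 * x[i] + 0.4 * rng.normal()
y = np.sin(3 * x) + 0.1 * rng.normal(size=n)

def kmat(a, b):
    return np.exp(-(a[:, None] - b[None, :]) ** 2 / (2 * sigma ** 2))

def local_fit(xb, yb):
    """KRR on one block: alpha = (K + n_b * lam * I)^{-1} y_b."""
    alpha = np.linalg.solve(kmat(xb, xb) + lam * len(xb) * np.eye(len(xb)), yb)
    return lambda t: kmat(np.atleast_1d(np.asarray(t, dtype=float)), xb) @ alpha

blocks = zip(np.array_split(x, m), np.array_split(y, m))
fits = [local_fit(xb, yb) for xb, yb in blocks]   # m local estimators
f_bar = lambda t: np.mean([f(t) for f in fits], axis=0)  # averaged predictor
print(f_bar(0.3), np.sin(0.9))                    # estimate vs. truth
```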
arXiv Detail & Related papers (2020-02-10T14:03:45Z)