Estimation of Riemannian distances between covariance operators and
Gaussian processes
- URL: http://arxiv.org/abs/2108.11683v1
- Date: Thu, 26 Aug 2021 09:57:47 GMT
- Title: Estimation of Riemannian distances between covariance operators and
Gaussian processes
- Authors: Ha Quang Minh
- Abstract summary: We study two Riemannian distances between infinite-dimensional positive definite Hilbert-Schmidt operators.
Results show that both distances converge in the Hilbert-Schmidt norm.
- Score: 0.7360807642941712
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work we study two Riemannian distances between infinite-dimensional
positive definite Hilbert-Schmidt operators, namely affine-invariant Riemannian
and Log-Hilbert-Schmidt distances, in the context of covariance operators
associated with functional stochastic processes, in particular Gaussian
processes. Our first main results show that both distances converge in the
Hilbert-Schmidt norm. Using concentration results for Hilbert space-valued
random variables, we then show that both distances can be consistently and
efficiently estimated from (i) sample covariance operators, (ii) finite,
normalized covariance matrices, and (iii) finite samples generated by the given
processes, all with dimension-independent convergence. Our theoretical analysis
exploits extensively the methodology of reproducing kernel Hilbert space (RKHS)
covariance and cross-covariance operators. The theoretical formulation is
illustrated with numerical experiments on covariance operators of Gaussian
processes.
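
To make the finite-sample estimation routes (ii) and (iii) above concrete, here is a minimal NumPy sketch that computes the affine-invariant distance and a log-Euclidean distance between regularized sample covariance matrices of two Gaussian processes observed on a common grid. The log-Euclidean distance is used here only as a finite-dimensional stand-in for the Log-Hilbert-Schmidt distance; the kernel choices, grid, sample size, and regularization parameter gamma are illustrative assumptions, not the paper's experimental setup.

```python
import numpy as np

def _spd_apply(A, fun):
    """Apply a scalar function to the spectrum of a symmetric
    positive definite matrix A (spectral calculus)."""
    w, V = np.linalg.eigh(A)
    return (V * fun(w)) @ V.T

def affine_invariant_distance(A, B):
    """d_AI(A, B) = || log(A^{-1/2} B A^{-1/2}) ||_F for SPD matrices A, B."""
    A_isqrt = _spd_apply(A, lambda w: 1.0 / np.sqrt(w))
    M = A_isqrt @ B @ A_isqrt  # SPD, congruent to B
    return np.linalg.norm(_spd_apply(M, np.log), "fro")

def log_euclidean_distance(A, B):
    """d_LE(A, B) = || log(A) - log(B) ||_F, a finite-dimensional
    stand-in for the Log-Hilbert-Schmidt distance."""
    return np.linalg.norm(_spd_apply(A, np.log) - _spd_apply(B, np.log), "fro")

# Illustrative setup (assumed, not from the paper): two Gaussian processes
# on a common grid, one with an RBF kernel and one with a Laplacian kernel.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
K1 = np.exp(-0.5 * (t[:, None] - t[None, :]) ** 2 / 0.1 ** 2) + 1e-8 * np.eye(t.size)
K2 = np.exp(-np.abs(t[:, None] - t[None, :]) / 0.2) + 1e-8 * np.eye(t.size)

# Finite samples from each process; gamma keeps the sample covariances
# strictly positive definite so the matrix logarithms are well defined.
n, gamma = 500, 1e-3
X1 = rng.multivariate_normal(np.zeros(t.size), K1, size=n)
X2 = rng.multivariate_normal(np.zeros(t.size), K2, size=n)
C1 = np.cov(X1, rowvar=False) + gamma * np.eye(t.size)
C2 = np.cov(X2, rowvar=False) + gamma * np.eye(t.size)

print("affine-invariant distance:", affine_invariant_distance(C1, C2))
print("log-Euclidean distance:   ", log_euclidean_distance(C1, C2))
```

The gamma * I term mirrors, in a rough finite-dimensional way, the regularization needed in the infinite-dimensional setting, where covariance operators are not invertible.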
Related papers
- Conditioning of Banach Space Valued Gaussian Random Variables: An Approximation Approach Based on Martingales [8.81121308982678]
We investigate the conditional distributions of two Banach space-valued, jointly Gaussian random variables.
We show that their means and covariances are determined by a general finite-dimensional approximation scheme based upon a martingale approach.
arXiv Detail & Related papers (2024-04-04T13:57:44Z) - Sampling and estimation on manifolds using the Langevin diffusion [45.57801520690309]
Two estimators of linear functionals of $\mu_\phi$ based on the discretized Markov process are considered.
Error bounds are derived for sampling and estimation using a discretization of an intrinsically defined Langevin diffusion.
arXiv Detail & Related papers (2023-12-22T18:01:11Z) - Posterior Contraction Rates for Matérn Gaussian Processes on
Riemannian Manifolds [51.68005047958965]
We show that intrinsic Gaussian processes can achieve better performance in practice.
Our work shows that finer-grained analyses are needed to distinguish between different levels of data-efficiency.
arXiv Detail & Related papers (2023-09-19T20:30:58Z) - Gaussian Processes on Distributions based on Regularized Optimal
Transport [2.905751301655124]
We present a novel kernel over the space of probability measures based on the dual formulation of optimal regularized transport.
We prove that this construction yields a valid kernel, by using Hilbert space norms.
We provide theoretical guarantees on the behaviour of a Gaussian process based on this kernel.
arXiv Detail & Related papers (2022-10-12T20:30:23Z) - Kullback-Leibler and Rényi divergences in reproducing kernel Hilbert
space and Gaussian process settings [0.0]
We present formulations for regularized Kullback-Leibler and Rényi divergences via the Alpha Log-Determinant (Log-Det) divergences.
For characteristic kernels, the first setting leads to divergences between arbitrary Borel probability measures on a complete, separable metric space.
We show that the Alpha Log-Det divergences are continuous in the Hilbert-Schmidt norm, which enables us to apply laws of large numbers for Hilbert space-valued random variables.
arXiv Detail & Related papers (2022-07-18T06:40:46Z) - Optimal variance-reduced stochastic approximation in Banach spaces [114.8734960258221]
We study the problem of estimating the fixed point of a contractive operator defined on a separable Banach space.
We establish non-asymptotic bounds for both the operator defect and the estimation error.
arXiv Detail & Related papers (2022-01-21T02:46:57Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces to estimate the value function of an infinite-horizon discounted Markov reward process.
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Finite sample approximations of exact and entropic Wasserstein distances
between covariance operators and Gaussian processes [0.0]
We show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated.
For a fixed regularization parameter, the convergence rates are dimension-independent and of the same order as those for the Hilbert-Schmidt distance.
If at least one of the RKHSs is finite-dimensional, we obtain a dimension-dependent sample complexity for the exact Wasserstein distance between the Gaussian processes.
arXiv Detail & Related papers (2021-04-26T06:57:14Z) - The Connection between Discrete- and Continuous-Time Descriptions of
Gaussian Continuous Processes [60.35125735474386]
We show that discretizations yielding consistent estimators have the property of invariance under coarse-graining.
This result explains why combining differencing schemes for derivative reconstruction with local-in-time inference approaches does not work for time series analysis of second- or higher-order differential equations.
arXiv Detail & Related papers (2021-01-16T17:11:02Z) - Optimal oracle inequalities for solving projected fixed-point equations [53.31620399640334]
We study methods that use a collection of random observations to compute approximate solutions by searching over a known low-dimensional subspace of the Hilbert space.
We show how our results precisely characterize the error of a class of temporal difference learning methods for the policy evaluation problem with linear function approximation.
arXiv Detail & Related papers (2020-12-09T20:19:32Z) - Kernel Autocovariance Operators of Stationary Processes: Estimation and
Convergence [0.5505634045241288]
We consider autocovariance operators of a stationary process on a Polish space embedded into a reproducing kernel Hilbert space.
We investigate how empirical estimates of these operators converge along realizations of the process under various conditions.
We provide applications of our theory in terms of consistency results for kernel PCA with dependent data and the conditional mean embedding of transition probabilities.
arXiv Detail & Related papers (2020-04-02T09:17:32Z)
This list is automatically generated from the titles and abstracts of the papers on this site.