Gaussian Processes on Distributions based on Regularized Optimal
Transport
- URL: http://arxiv.org/abs/2210.06574v1
- Date: Wed, 12 Oct 2022 20:30:23 GMT
- Title: Gaussian Processes on Distributions based on Regularized Optimal
Transport
- Authors: François Bachoc, Louis Béthune, Alberto Gonzalez-Sanz, Jean-Michel Loubes
- Abstract summary: We present a novel kernel over the space of probability measures based on the dual formulation of optimal regularized transport.
We prove that this construction yields a valid kernel by using the Hilbert norm.
We provide theoretical guarantees on the behaviour of a Gaussian process based on this kernel.
- Score: 2.905751301655124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We present a novel kernel over the space of probability measures based on the
dual formulation of optimal regularized transport. We propose a Hilbertian
embedding of the space of probabilities using their Sinkhorn potentials, which
are solutions of the dual entropic relaxed optimal transport problem between the
probabilities and a reference measure $\mathcal{U}$. We prove that this
construction yields a valid kernel by using the Hilbert norm. We prove that the
kernel enjoys theoretical properties such as universality and some invariances,
while still being computationally feasible. Moreover, we provide theoretical
guarantees on the behaviour of a Gaussian process based on this kernel. The
empirical performance is compared with other traditional choices of kernels for
processes indexed on distributions.
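To make the construction concrete, the sketch below illustrates the pipeline described in the abstract: each discrete measure is embedded through its Sinkhorn potential against a fixed reference measure $\mathcal{U}$, and a Gaussian-type kernel is built from the $L^2(\mathcal{U})$ norm between potentials. This is a minimal NumPy sketch under stated assumptions (squared-Euclidean cost, uniform reference measure, centred potentials, exponential kernel form); the function names, bandwidths, and the exact kernel expression are illustrative choices, not the paper's precise specification.

```python
# Minimal sketch (assumptions flagged above): embed discrete measures by their
# Sinkhorn potentials against a reference measure U, then build a Gaussian-type
# kernel from the L2(U) distance between potentials.
import numpy as np
from scipy.special import logsumexp

def sinkhorn_potential(x, a, u_support, u_weights, eps=0.1, n_iter=300):
    """Dual potential (on the support of U) of entropic OT between (x, a) and U.

    Uses a squared-Euclidean ground cost and log-domain Sinkhorn updates.
    """
    C = np.sum((x[:, None, :] - u_support[None, :, :]) ** 2, axis=-1)  # cost matrix
    f = np.zeros(len(a))              # potential on the input measure
    g = np.zeros(len(u_weights))      # potential on the reference measure
    for _ in range(n_iter):
        f = -eps * logsumexp(np.log(u_weights)[None, :] + (g[None, :] - C) / eps, axis=1)
        g = -eps * logsumexp(np.log(a)[:, None] + (f[:, None] - C) / eps, axis=0)
    # Potentials are defined up to an additive constant; centre with respect to U
    # so the embedding is well defined (an illustrative normalisation choice).
    return g - np.sum(u_weights * g)

def ot_kernel(mu, nu, u_support, u_weights, sigma=1.0, **kw):
    """Gaussian-type kernel built from the L2(U) norm between Sinkhorn potentials."""
    x_mu, a_mu = mu
    x_nu, a_nu = nu
    g_mu = sinkhorn_potential(x_mu, a_mu, u_support, u_weights, **kw)
    g_nu = sinkhorn_potential(x_nu, a_nu, u_support, u_weights, **kw)
    dist2 = np.sum(u_weights * (g_mu - g_nu) ** 2)   # squared L2(U) distance
    return np.exp(-dist2 / (2 * sigma ** 2))

# Toy usage: covariance matrix of a Gaussian process indexed by three empirical
# measures on R^2.
rng = np.random.default_rng(0)
u_support = rng.uniform(size=(50, 2))
u_weights = np.full(50, 1 / 50)
clouds = [(rng.normal(loc=m, scale=0.3, size=(40, 2)), np.full(40, 1 / 40))
          for m in (0.0, 0.5, 1.0)]
K = np.array([[ot_kernel(p, q, u_support, u_weights) for q in clouds] for p in clouds])
```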
Related papers
- Distributed Markov Chain Monte Carlo Sampling based on the Alternating
Direction Method of Multipliers [143.6249073384419]
In this paper, we propose a distributed sampling scheme based on the alternating direction method of multipliers.
We provide both theoretical guarantees of our algorithm's convergence and experimental evidence of its superiority to the state-of-the-art.
In simulation, we deploy our algorithm on linear and logistic regression tasks and illustrate its fast convergence compared to existing gradient-based methods.
arXiv Detail & Related papers (2024-01-29T02:08:40Z) - Recursive Estimation of Conditional Kernel Mean Embeddings [0.0]
Kernel mean embeddings map probability distributions to elements of a reproducing kernel Hilbert space (RKHS); a minimal finite-sample sketch of such an embedding appears after this list.
We present a new algorithm to estimate the conditional kernel mean map in a Hilbert-space-valued $L^2$ space, that is, in a Bochner space.
arXiv Detail & Related papers (2023-02-12T16:55:58Z) - Fast Computation of Optimal Transport via Entropy-Regularized Extragradient Methods [75.34939761152587]
Efficient computation of the optimal transport distance between two distributions serves as an algorithmic subroutine that empowers various applications.
This paper develops a scalable first-order optimization-based method that computes optimal transport to within $\varepsilon$ additive accuracy.
arXiv Detail & Related papers (2023-01-30T15:46:39Z) - Kullback-Leibler and Rényi divergences in reproducing kernel Hilbert
space and Gaussian process settings [0.0]
We present formulations for regularized Kullback-Leibler and Rényi divergences via the Alpha Log-Determinant (Log-Det) divergences.
For characteristic kernels, the first setting leads to divergences between arbitrary Borel probability measures on a complete, separable metric space.
We show that the Alpha Log-Det divergences are continuous in the Hilbert-Schmidt norm, which enables us to apply laws of large numbers for Hilbert space-valued random variables.
arXiv Detail & Related papers (2022-07-18T06:40:46Z) - Experimental Design for Linear Functionals in Reproducing Kernel Hilbert
Spaces [102.08678737900541]
We provide algorithms for constructing bias-aware designs for linear functionals.
We derive non-asymptotic confidence sets for fixed and adaptive designs under sub-Gaussian noise.
arXiv Detail & Related papers (2022-05-26T20:56:25Z) - Meta-Learning Hypothesis Spaces for Sequential Decision-making [79.73213540203389]
We propose to meta-learn a kernel from offline data (Meta-KeL)
Under mild conditions, we guarantee that our estimated RKHS yields valid confidence sets.
We also empirically evaluate the effectiveness of our approach on a Bayesian optimization task.
arXiv Detail & Related papers (2022-02-01T17:46:51Z) - Optimal policy evaluation using kernel-based temporal difference methods [78.83926562536791]
We use reproducing kernel Hilbert spaces for estimating the value function of an infinite-horizon discounted Markov reward process (MRP).
We derive a non-asymptotic upper bound on the error with explicit dependence on the eigenvalues of the associated kernel operator.
We prove minimax lower bounds over sub-classes of MRPs.
arXiv Detail & Related papers (2021-09-24T14:48:20Z) - Estimation of Riemannian distances between covariance operators and
Gaussian processes [0.7360807642941712]
We study two distances between infinite-dimensional positive definite Hilbert-Schmidt operators.
Results show that sample-based estimates of both distances converge in the Hilbert-Schmidt norm.
arXiv Detail & Related papers (2021-08-26T09:57:47Z) - A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting.
arXiv Detail & Related papers (2021-06-18T08:33:45Z) - Strong Uniform Consistency with Rates for Kernel Density Estimators with
General Kernels on Manifolds [11.927892660941643]
We show how to handle kernel density estimation with intricate kernels not designed by the user.
The isotropic kernels considered in this paper are different from the kernels in the Vapnik-Chervonenkis class that are frequently considered in the statistics literature.
arXiv Detail & Related papers (2020-07-13T14:36:06Z)
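For contrast with the optimal-transport construction above, here is a minimal sketch of one traditional family of kernels for processes indexed on distributions, the kernel-mean-embedding approach that several related papers above build on: each empirical measure is mapped to its mean embedding in the RKHS of a base kernel and distributions are compared through the resulting (MMD) distance. The Gaussian base kernel, the bandwidths, and the function names are illustrative assumptions, and this baseline is not claimed to match the paper's exact experimental comparisons.

```python
# Minimal sketch of a kernel on distributions via empirical kernel mean
# embeddings: distributions are compared through the RKHS (MMD) distance
# between their mean embeddings under a Gaussian base kernel.
import numpy as np

def gaussian_gram(x, y, bandwidth=1.0):
    """Gram matrix k(x_i, y_j) of a Gaussian base kernel on R^d."""
    d2 = np.sum((x[:, None, :] - y[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2 * bandwidth ** 2))

def mmd2(x, y, bandwidth=1.0):
    """Squared RKHS distance between the empirical mean embeddings of two samples."""
    kxx = gaussian_gram(x, x, bandwidth).mean()
    kyy = gaussian_gram(y, y, bandwidth).mean()
    kxy = gaussian_gram(x, y, bandwidth).mean()
    return kxx + kyy - 2 * kxy

def distribution_kernel(x, y, outer_bandwidth=1.0, **kw):
    """Gaussian kernel on distributions built on top of the MMD."""
    return np.exp(-mmd2(x, y, **kw) / (2 * outer_bandwidth ** 2))

# Toy usage: covariance matrix of a GP indexed by three empirical measures on R^2.
rng = np.random.default_rng(1)
samples = [rng.normal(loc=m, size=(60, 2)) for m in (0.0, 0.5, 1.0)]
K = np.array([[distribution_kernel(p, q) for q in samples] for p in samples])
```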
This list is automatically generated from the titles and abstracts of the papers on this site.