Convergence and finite sample approximations of entropic regularized
Wasserstein distances in Gaussian and RKHS settings
- URL: http://arxiv.org/abs/2101.01429v2
- Date: Mon, 15 Feb 2021 09:44:14 GMT
- Title: Convergence and finite sample approximations of entropic regularized
Wasserstein distances in Gaussian and RKHS settings
- Authors: Minh Ha Quang
- Abstract summary: We study the convergence and finite sample approximations of entropic regularized Wasserstein distances in the Hilbert space setting.
For Gaussian measures on an infinite-dimensional Hilbert space, convergence in the 2-Sinkhorn divergence is weaker than convergence in the exact 2-Wasserstein distance.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work studies the convergence and finite sample approximations of
entropic regularized Wasserstein distances in the Hilbert space setting. Our
first main result is that for Gaussian measures on an infinite-dimensional
Hilbert space, convergence in the 2-Sinkhorn divergence is strictly weaker
than convergence in the exact 2-Wasserstein distance. Specifically, a
sequence of centered Gaussian measures converges in the 2-Sinkhorn divergence
if the corresponding covariance operators converge in the Hilbert-Schmidt norm.
This contrasts with the previously known result that a sequence of centered
Gaussian measures converges in the exact 2-Wasserstein distance if and only if
the covariance operators converge in the trace class norm. In the reproducing
kernel Hilbert space (RKHS) setting, the kernel Gaussian-Sinkhorn divergence,
which is the Sinkhorn divergence between Gaussian measures defined
on an RKHS, defines a semi-metric on the set of Borel probability measures on a
Polish space, given a characteristic kernel on that space. With the
Hilbert-Schmidt norm convergence, we obtain dimension-independent
convergence rates for finite sample approximations of the kernel
Gaussian-Sinkhorn divergence, with the same order as the Maximum Mean
Discrepancy. These convergence rates apply in particular to Sinkhorn divergence
between Gaussian measures on Euclidean and infinite-dimensional Hilbert spaces.
The sample complexity for the exact 2-Wasserstein distance between Gaussian
measures on Euclidean space, while dimension-dependent and larger than that of
the Sinkhorn divergence, is still exponentially better than the worst-case
bound in the literature.
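For reference, the objects discussed in the abstract can be written out as follows. This is a minimal sketch using one standard convention for the entropic optimal transport cost, the Sinkhorn divergence, and the Maximum Mean Discrepancy; the paper's normalization of the regularization parameter may differ.

```latex
% Entropic regularized optimal transport with squared-distance ground cost
% (one standard convention; the paper's normalization of \epsilon may differ):
\[
\mathrm{OT}_{\epsilon}(\mu,\nu)
  = \inf_{\pi \in \Pi(\mu,\nu)}
    \left\{ \int \|x-y\|^{2}\, d\pi(x,y)
      + \epsilon\, \mathrm{KL}\!\left(\pi \,\|\, \mu \otimes \nu\right) \right\},
  \qquad \epsilon > 0.
\]
% The (debiased) Sinkhorn divergence removes the entropic bias on the diagonal:
\[
S_{\epsilon}(\mu,\nu)
  = \mathrm{OT}_{\epsilon}(\mu,\nu)
    - \tfrac{1}{2}\,\mathrm{OT}_{\epsilon}(\mu,\mu)
    - \tfrac{1}{2}\,\mathrm{OT}_{\epsilon}(\nu,\nu).
\]
% Maximum Mean Discrepancy with kernel k, whose empirical estimator converges at the
% dimension-independent rate O(n^{-1/2}) -- the benchmark order referred to above:
\[
\mathrm{MMD}_{k}^{2}(\mu,\nu)
  = \mathbb{E}_{x,x'\sim\mu}\,k(x,x')
    + \mathbb{E}_{y,y'\sim\nu}\,k(y,y')
    - 2\,\mathbb{E}_{x\sim\mu,\, y\sim\nu}\,k(x,y).
\]
```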
Related papers
- Learning with Norm Constrained, Over-parameterized, Two-layer Neural Networks [54.177130905659155]
  Recent studies show that a reproducing kernel Hilbert space (RKHS) is not a suitable space to model functions learned by neural networks.
  In this paper, we study a suitable function space for over-parameterized two-layer neural networks with bounded norms.
  arXiv Detail & Related papers (2024-04-29T15:04:07Z)
- Continuous percolation in a Hilbert space for a large system of qubits [58.720142291102135]
  The percolation transition is defined through the appearance of the infinite cluster.
  We show that the exponentially increasing dimensionality of the Hilbert space makes its covering by finite-size hyperspheres inefficient.
  Our approach to the percolation transition in compact metric spaces may prove useful for its rigorous treatment in other contexts.
  arXiv Detail & Related papers (2022-10-15T13:53:21Z)
- Kullback-Leibler and Rényi divergences in reproducing kernel Hilbert space and Gaussian process settings [0.0]
  We present formulations for regularized Kullback-Leibler and Rényi divergences via the Alpha Log-Determinant (Log-Det) divergences.
  For characteristic kernels, the first setting leads to divergences between arbitrary Borel probability measures on a complete, separable metric space.
  We show that the Alpha Log-Det divergences are continuous in the Hilbert-Schmidt norm, which enables us to apply laws of large numbers for Hilbert space-valued random variables.
  arXiv Detail & Related papers (2022-07-18T06:40:46Z)
- Estimation of Riemannian distances between covariance operators and Gaussian processes [0.7360807642941712]
  We study two distances between infinite-dimensional positive definite Hilbert-Schmidt operators.
  Results show that both distances converge in the Hilbert-Schmidt norm.
  arXiv Detail & Related papers (2021-08-26T09:57:47Z)
- Spectral clustering under degree heterogeneity: a case for the random walk Laplacian [83.79286663107845]
  This paper shows that graph spectral embedding using the random walk Laplacian produces vector representations which are completely corrected for node degree.
  In the special case of a degree-corrected block model, the embedding concentrates about K distinct points, representing communities.
  arXiv Detail & Related papers (2021-05-03T16:36:27Z)
- Finite sample approximations of exact and entropic Wasserstein distances between covariance operators and Gaussian processes [0.0]
  We show that the Sinkhorn divergence between two centered Gaussian processes can be consistently and efficiently estimated (a minimal empirical estimator sketch is given after this list).
  For a fixed regularization parameter, the convergence rates are dimension-independent and of the same order as those for the Hilbert-Schmidt distance.
  If at least one of the RKHSs is finite-dimensional, we obtain a dimension-dependent sample complexity for the exact Wasserstein distance between the Gaussian processes.
  arXiv Detail & Related papers (2021-04-26T06:57:14Z)
- Entropic regularization of Wasserstein distance between infinite-dimensional Gaussian measures and Gaussian processes [0.0]
  This work studies the entropic regularization formulation of the 2-Wasserstein distance on an infinite-dimensional Hilbert space.
  In the infinite-dimensional setting, both the entropic 2-Wasserstein distance and the Sinkhorn divergence are Fréchet differentiable, in contrast to the exact 2-Wasserstein distance.
  arXiv Detail & Related papers (2020-11-15T10:03:12Z)
- Metrizing Weak Convergence with Maximum Mean Discrepancies [88.54422104669078]
  This paper characterizes the maximum mean discrepancies (MMD) that metrize the weak convergence of probability measures for a wide class of kernels.
  We prove that, on a locally compact, non-compact, Hausdorff space, the MMD of a bounded continuous Borel measurable kernel k metrizes the weak convergence of probability measures if and only if k is continuous.
  arXiv Detail & Related papers (2020-06-16T15:49:33Z)
- The Convergence Indicator: Improved and completely characterized parameter bounds for actual convergence of Particle Swarm Optimization [68.8204255655161]
  We introduce a new convergence indicator that can be used to determine whether the particles will eventually converge to a single point or diverge.
  Using this convergence indicator we provide the actual bounds completely characterizing parameter regions that lead to a converging swarm.
  arXiv Detail & Related papers (2020-06-06T19:08:05Z)
- Entropy-Regularized $2$-Wasserstein Distance between Gaussian Measures [2.320417845168326]
  We study the Gaussian geometry under the entropy-regularized 2-Wasserstein distance.
  We provide a fixed-point characterization of a population barycenter when restricted to the manifold of Gaussians.
  As the geometry changes with the regularization magnitude, we study the limiting cases of vanishing and infinite magnitudes.
  arXiv Detail & Related papers (2020-06-05T13:18:57Z)
- Debiased Sinkhorn barycenters [110.79706180350507]
  Entropy regularization in optimal transport (OT) has driven much of the recent interest in Wasserstein metrics and barycenters in machine learning.
  We show how the entropic bias is tightly linked to the reference measure that defines the entropy regularizer.
  We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing.
  arXiv Detail & Related papers (2020-06-03T23:06:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.