Entropic Gromov-Wasserstein between Gaussian Distributions
- URL: http://arxiv.org/abs/2108.10961v1
- Date: Tue, 24 Aug 2021 21:27:11 GMT
- Title: Entropic Gromov-Wasserstein between Gaussian Distributions
- Authors: Khang Le and Dung Le and Huy Nguyen and Dat Do and Tung Pham and Nhat Ho
- Abstract summary: We study the entropic Gromov-Wasserstein and its unbalanced version between (unbalanced) Gaussian distributions.
When the metric is the inner product, which we refer to as inner product Gromov-Wasserstein (IGW), we demonstrate that the optimal transportation plans of entropic IGW and its unbalanced variant are (unbalanced) Gaussian distributions.
- Score: 9.624666285528612
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study the entropic Gromov-Wasserstein and its unbalanced version between
(unbalanced) Gaussian distributions with different dimensions. When the metric
is the inner product, which we refer to as inner product Gromov-Wasserstein
(IGW), we demonstrate that the optimal transportation plans of entropic IGW and
its unbalanced variant are (unbalanced) Gaussian distributions. Via an
application of von Neumann's trace inequality, we obtain closed-form
expressions for the entropic IGW between these Gaussian distributions. Finally,
we consider an entropic inner product Gromov-Wasserstein barycenter of multiple
Gaussian distributions. We prove that the barycenter is a Gaussian distribution
when the entropic regularization parameter is small. We further derive
closed-form expressions for the covariance matrix of the barycenter.
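The closed forms above are population-level results for Gaussians. As a rough numerical companion, the sketch below estimates entropic Gromov-Wasserstein with the inner-product cost from samples, using the standard alternating linearization-plus-Sinkhorn scheme; this generic estimator, and names such as `entropic_igw`, are illustrative assumptions of this summary, not the paper's closed-form solution.

```python
import numpy as np

def sinkhorn(C, p, q, eps, n_iter=200):
    """Entropic OT plan for cost C and marginals p, q (standard Sinkhorn).
    Shifting the cost by its minimum leaves the plan unchanged but avoids
    underflow in the Gibbs kernel."""
    K = np.exp(-(C - C.min()) / eps)
    u = np.ones_like(p)
    for _ in range(n_iter):
        v = q / (K.T @ u)
        u = p / (K @ v)
    return u[:, None] * K * v[None, :]

def entropic_igw(X, Y, eps=5.0, n_outer=50):
    """Sample-based entropic Gromov-Wasserstein with inner-product cost:
    alternate between linearizing the quadratic GW objective at the current
    plan and solving the resulting entropic OT problem."""
    n, m = len(X), len(Y)
    p, q = np.full(n, 1.0 / n), np.full(m, 1.0 / m)
    C1, C2 = X @ X.T, Y @ Y.T                     # pairwise inner products
    T = np.outer(p, q)                            # independence coupling as init
    const = (C1**2 @ p)[:, None] + (C2**2 @ q)[None, :]
    for _ in range(n_outer):
        cost = const - 2.0 * C1 @ T @ C2.T        # linearized GW cost at T
        T = sinkhorn(cost, p, q, eps)
    gw_val = np.sum((const - 2.0 * C1 @ T @ C2.T) * T)  # GW objective, no entropy term
    return gw_val, T

# Example: zero-mean Gaussian samples living in different dimensions.
rng = np.random.default_rng(0)
X = rng.multivariate_normal(np.zeros(2), np.diag([2.0, 0.5]), size=300)
Y = rng.multivariate_normal(np.zeros(3), np.eye(3), size=300)
val, T = entropic_igw(X, Y)
print(f"estimated entropic IGW objective: {val:.3f}")
```

For samples from Gaussians, the value returned here should be comparable against the paper's closed-form expressions; increase `eps` if the Gibbs kernel underflows.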
Related papers
- Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians [27.20127082606962]
Variational inference (VI) is a popular approach in Bayesian inference.
This work aims to contribute to the theoretical study of VI in the non-Gaussian case.
arXiv Detail & Related papers (2024-06-06T12:38:59Z)
- Score-based generative models are provably robust: an uncertainty quantification perspective [4.396860522241307]
We show that score-based generative models (SGMs) are provably robust to the multiple sources of error in practical implementation.
Our primary tool is the Wasserstein uncertainty propagation (WUP) theorem.
We show how errors due to (a) finite sample approximation, (b) early stopping, (c) the choice of score-matching objective, (d) score function parametrization, and (e) the choice of reference distribution impact the quality of the generative model.
arXiv Detail & Related papers (2024-05-24T17:50:17Z)
- Wiener Chaos in Kernel Regression: Towards Untangling Aleatoric and Epistemic Uncertainty [0.0]
We generalize the setting and consider kernel ridge regression with additive i.i.d. non-Gaussian measurement noise.
We show that our approach allows us to distinguish the uncertainty that stems from the noise in the data samples from the total uncertainty encoded in the GP posterior distribution (see the sketch below).
arXiv Detail & Related papers (2023-12-12T16:02:35Z)
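For orientation, here is a minimal kernel ridge / GP sketch of the aleatoric-versus-epistemic split under a standard Gaussian-likelihood model; the paper's Wiener-chaos treatment of non-Gaussian noise is not reproduced here, and all names below are illustrative.

```python
import numpy as np

def rbf_kernel(A, B, lengthscale=1.0):
    """Squared-exponential kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-0.5 * d2 / lengthscale**2)

# Toy data: a smooth function plus additive uniform (i.e. non-Gaussian) noise.
rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(40, 1))
noise_var = 0.05                                   # approx. variance of U(-0.4, 0.4)
y = np.sin(X[:, 0]) + rng.uniform(-0.4, 0.4, size=40)

Xs = np.linspace(-3, 3, 100)[:, None]
K = rbf_kernel(X, X)
Ks = rbf_kernel(Xs, X)
Kss = rbf_kernel(Xs, Xs)

# Kernel ridge / GP posterior mean and covariance.
alpha = np.linalg.solve(K + noise_var * np.eye(len(X)), y)
mean = Ks @ alpha
cov = Kss - Ks @ np.linalg.solve(K + noise_var * np.eye(len(X)), Ks.T)

epistemic = np.diag(cov)        # uncertainty about the latent function
aleatoric = noise_var           # irreducible measurement-noise variance
total = epistemic + aleatoric   # predictive variance for new observations
```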
- Forward-backward Gaussian variational inference via JKO in the Bures-Wasserstein Space [19.19325201882727]
Variational inference (VI) seeks to approximate a target distribution $\pi$ by an element of a tractable family of distributions.
We develop the Forward-Backward Gaussian Variational Inference (FB-GVI) algorithm to solve Gaussian VI.
For our proposed algorithm, we obtain state-of-the-art convergence guarantees when $\pi$ is log-smooth and log-concave.
arXiv Detail & Related papers (2023-04-10T19:49:50Z)
- Thermal equilibrium in Gaussian dynamical semigroups [77.34726150561087]
We characterize all Gaussian dynamical semigroups in continuous-variable quantum systems of $n$ bosonic modes which have a thermal Gibbs state as a stationary solution.
We also show that Alicki's quantum detailed-balance condition, based on a Gelfand-Naimark-Segal inner product, allows the determination of the temperature dependence of the diffusion and dissipation matrices.
arXiv Detail & Related papers (2022-07-11T19:32:17Z)
- Wrapped Distributions on homogeneous Riemannian manifolds [58.720142291102135]
Control over the distributions' properties, such as their parameters, symmetry, and modality, yields a family of flexible distributions.
We empirically validate our approach by utilizing our proposed distributions within a variational autoencoder and a latent space network model.
arXiv Detail & Related papers (2022-04-20T21:25:21Z)
- Non-Gaussian Component Analysis via Lattice Basis Reduction [56.98280399449707]
Non-Gaussian Component Analysis (NGCA) is a distribution learning problem.
We provide an efficient algorithm for NGCA in the regime where the hidden univariate distribution $A$ is discrete or nearly discrete.
arXiv Detail & Related papers (2021-12-16T18:38:02Z)
- A Note on Optimizing Distributions using Kernel Mean Embeddings [94.96262888797257]
Kernel mean embeddings represent probability measures by their infinite-dimensional mean embeddings in a reproducing kernel Hilbert space.
We show that when the kernel is characteristic, distributions with a kernel sum-of-squares density are dense.
We provide algorithms to optimize such distributions in the finite-sample setting (see the sketch below).
arXiv Detail & Related papers (2021-06-18T08:33:45Z)
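As background for this entry, a minimal sketch of empirical kernel mean embeddings via the (biased) MMD estimate between two samples; the paper's kernel sum-of-squares density optimization is not implemented here, and the helper names are illustrative.

```python
import numpy as np

def rbf(A, B, sigma=1.0):
    """Gaussian kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Squared MMD: the RKHS distance between the empirical mean embeddings
    of X and Y (biased V-statistic estimate that keeps the diagonal terms)."""
    return rbf(X, X, sigma).mean() + rbf(Y, Y, sigma).mean() - 2 * rbf(X, Y, sigma).mean()

rng = np.random.default_rng(0)
X = rng.normal(0.0, 1.0, size=(200, 1))
Y = rng.normal(0.5, 1.0, size=(200, 1))
print(f"MMD^2 between the two samples: {mmd2(X, Y):.4f}")
```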
- $\alpha$-Geodesical Skew Divergence [5.3556221126231085]
The asymmetric skew divergence smooths one of the distributions by mixing it, to a degree determined by the parameter $\lambda$, with the other distribution.
This divergence approximates the KL divergence without requiring the target distribution to be absolutely continuous with respect to the source distribution (see the sketch below).
arXiv Detail & Related papers (2021-03-31T13:27:58Z)
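A small numerical sketch of the classical skew divergence that this paper generalizes, under one common parameterization (assumed here, not taken from the paper):

```python
import numpy as np

def kl(p, q):
    """KL divergence between discrete distributions (over the support of p)."""
    mask = p > 0
    return float(np.sum(p[mask] * np.log(p[mask] / q[mask])))

def skew_divergence(p, q, lam=0.99):
    """Skew divergence: KL from p to the mixture lam*q + (1-lam)*p.
    Mixing a little of p into q keeps the second argument positive wherever
    p is, so the value stays finite even when q has zero-probability bins."""
    return kl(p, lam * q + (1 - lam) * p)

p = np.array([0.5, 0.5, 0.0])
q = np.array([0.4, 0.0, 0.6])   # q is not absolutely continuous w.r.t. p
print(skew_divergence(p, q))    # finite, unlike kl(p, q)
```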
- Linear Optimal Transport Embedding: Provable Wasserstein classification for certain rigid transformations and perturbations [79.23797234241471]
Discriminating between distributions is an important problem in a number of scientific fields.
The Linear Optimal Transportation (LOT) framework embeds the space of distributions into an $L^2$-space.
We demonstrate the benefits of LOT on a number of distribution classification problems (see the sketch below).
arXiv Detail & Related papers (2020-08-20T19:09:33Z)
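Below is a sketch of the LOT embedding in the one-dimensional special case, where the optimal map is the quantile function composed with the reference CDF and the embedding is exactly isometric for $W_2$; higher-dimensional LOT, as studied in the paper, uses Monge maps to a reference measure and is only approximately isometric. Names are illustrative.

```python
import numpy as np

def lot_embedding(sample, ref_levels):
    """1-D LOT embedding: the optimal map from the reference to `sample`,
    i.e. the sample's quantile function evaluated at the reference CDF levels."""
    return np.quantile(sample, ref_levels)

# Reference measure represented by a uniform grid of quantile levels.
levels = (np.arange(100) + 0.5) / 100

rng = np.random.default_rng(0)
mu = rng.normal(0.0, 1.0, size=500)
nu = rng.normal(2.0, 1.5, size=500)

e_mu, e_nu = lot_embedding(mu, levels), lot_embedding(nu, levels)

# In 1-D the L2 distance between embeddings equals the 2-Wasserstein distance.
w2 = np.sqrt(np.mean((e_mu - e_nu) ** 2))
print(f"W2 estimate via LOT embedding: {w2:.3f}")  # theory: sqrt(4 + 0.25) ~ 2.06
```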
- Debiased Sinkhorn barycenters [110.79706180350507]
Entropy regularization in optimal transport (OT) has been the driver of many recent interests for Wasserstein metrics and barycenters in machine learning.
We show how this bias is tightly linked to the reference measure that defines the entropy regularizer.
We propose debiased Wasserstein barycenters that preserve the best of both worlds: fast Sinkhorn-like iterations without entropy smoothing (see the sketch below).
arXiv Detail & Related papers (2020-06-03T23:06:02Z)
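To ground the last entry, here is a sketch of the standard (biased) entropic barycenter via iterative Bregman projections, i.e. the baseline whose entropic blur the debiased method removes; this is not the paper's debiased algorithm, and the function name is illustrative.

```python
import numpy as np

def sinkhorn_barycenter(B, C, eps=0.02, weights=None, n_iter=500):
    """Entropic Wasserstein barycenter of the columns of B (histograms on a
    shared grid with cost matrix C), via iterative Bregman projections."""
    n, k = B.shape
    w = np.full(k, 1.0 / k) if weights is None else weights
    K = np.exp(-C / eps)
    U = np.ones((n, k))
    for _ in range(n_iter):
        V = B / (K.T @ U)                            # match the input marginals
        a = np.exp((np.log(K @ V) * w).sum(axis=1))  # geometric mean = barycenter
        U = a[:, None] / (K @ V)                     # match the barycenter marginal
    return a

# Two Gaussian-like histograms on a 1-D grid.
x = np.linspace(0, 1, 200)
C = (x[:, None] - x[None, :]) ** 2
b1 = np.exp(-((x - 0.3) ** 2) / 0.005); b1 /= b1.sum()
b2 = np.exp(-((x - 0.7) ** 2) / 0.005); b2 /= b2.sum()
bary = sinkhorn_barycenter(np.stack([b1, b2], axis=1), C)
print(f"barycenter mass peaks near x = {x[np.argmax(bary)]:.2f}")  # ~0.5, slightly blurred
```

The residual blur in `bary` relative to the inputs is exactly the entropic bias that the debiased barycenters of this paper are designed to remove.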