Fast Approximation of the Generalized Sliced-Wasserstein Distance
- URL: http://arxiv.org/abs/2210.10268v1
- Date: Wed, 19 Oct 2022 03:15:50 GMT
- Title: Fast Approximation of the Generalized Sliced-Wasserstein Distance
- Authors: Dung Le, Huy Nguyen, Khai Nguyen, Trang Nguyen, Nhat Ho
- Abstract summary: The generalized sliced Wasserstein distance is a variant of the sliced Wasserstein distance.
We propose deterministic and fast approximations of the generalized sliced Wasserstein distance.
- Score: 13.840237470871406
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generalized sliced Wasserstein distance is a variant of sliced Wasserstein
distance that exploits the power of non-linear projection through a given
defining function to better capture the complex structures of the probability
distributions. Like the sliced Wasserstein distance, the generalized sliced
Wasserstein distance is defined as an expectation over random projections, which can be
approximated by the Monte Carlo method. However, the complexity of that
approximation can be expensive in high-dimensional settings. To that end, we
propose to form deterministic and fast approximations of the generalized sliced
Wasserstein distance by using the concentration of random projections when the
defining functions are polynomial, circular, or neural-network-type functions.
Our approximations hinge on the key result that
one-dimensional projections of a high-dimensional random vector are
approximately Gaussian.
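For concreteness, here is a minimal NumPy sketch of the Monte Carlo approximation the abstract describes: sample random directions, project both sample sets through a defining function, and average the resulting one-dimensional Wasserstein distances. Function names and the cubic defining function are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def sample_unit_sphere(num_projections, dim, rng):
    """Directions drawn uniformly from the unit sphere S^{dim-1}."""
    theta = rng.standard_normal((num_projections, dim))
    return theta / np.linalg.norm(theta, axis=1, keepdims=True)

def gsw_monte_carlo(x, y, defining_fn, num_projections=100, p=2, seed=0):
    """Monte Carlo estimate of the generalized sliced Wasserstein
    distance between two empirical measures with equal sample sizes.
    defining_fn maps (samples, directions) to (n, L) 1-D projections."""
    rng = np.random.default_rng(seed)
    theta = sample_unit_sphere(num_projections, x.shape[1], rng)
    x_proj = defining_fn(x, theta)  # (n, L) non-linear projections
    y_proj = defining_fn(y, theta)
    # In 1-D, W_p between equal-size empirical measures compares order
    # statistics, so sorting each column solves the transport problem.
    x_sorted = np.sort(x_proj, axis=0)
    y_sorted = np.sort(y_proj, axis=0)
    return np.mean(np.abs(x_sorted - y_sorted) ** p) ** (1.0 / p)

# A linear defining function recovers the ordinary sliced Wasserstein
# distance; the cubic map is only an illustrative non-linear projection,
# not the homogeneous-polynomial defining function studied in the paper.
linear_fn = lambda s, theta: s @ theta.T
cubic_fn = lambda s, theta: (s @ theta.T) ** 3

rng = np.random.default_rng(0)
x = rng.normal(size=(500, 10))
y = rng.normal(loc=0.5, size=(500, 10))
print(gsw_monte_carlo(x, y, linear_fn))  # SW estimate
print(gsw_monte_carlo(x, y, cubic_fn))   # GSW estimate with cubic projections
```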
Related papers
- Sliced Wasserstein with Random-Path Projecting Directions [49.802024788196434]
We propose an optimization-free slicing distribution that provides fast sampling for the Monte Carlo estimation of the expectation.
We derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein: the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).
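One plausible reading of the random-path construction (an assumption on our part, not necessarily the paper's exact RPSD) is to draw projecting directions along normalized differences between random sample pairs from the two measures; such directions could then replace the uniform directions in a Monte Carlo sliced Wasserstein estimator like the sketch above.

```python
import numpy as np

def random_path_directions(x, y, num_projections, rng):
    """Hypothetical sketch of the random-path idea: directions along
    'paths' between random sample pairs from the two measures. This is
    one plausible reading, not the paper's exact RPSD construction."""
    i = rng.integers(0, len(x), size=num_projections)
    j = rng.integers(0, len(y), size=num_projections)
    diff = x[i] - y[j]
    return diff / (np.linalg.norm(diff, axis=1, keepdims=True) + 1e-12)
```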
arXiv Detail & Related papers (2024-01-29T04:59:30Z) - Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
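LOT Wassmap works with linearized optimal transport embeddings in general dimension; as a simplified one-dimensional illustration of the linearization idea (our simplification, not the paper's algorithm), each measure can be embedded by its quantile vector, whose pairwise Euclidean distances equal one-dimensional W2 distances, after which classical MDS recovers a low-dimensional layout.

```python
import numpy as np

def quantile_embedding(samples, m=100):
    """1-D linearized OT embedding: the transport map from a uniform
    reference, evaluated at m quantiles, is just the quantile vector."""
    qs = (np.arange(m) + 0.5) / m
    return np.quantile(samples, qs)

def classical_mds(D2, k=2):
    """Classical MDS from a matrix of squared distances."""
    n = len(D2)
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ D2 @ J  # double-centred Gram matrix
    w, V = np.linalg.eigh(B)
    idx = np.argsort(w)[::-1][:k]
    return V[:, idx] * np.sqrt(np.maximum(w[idx], 0))

# Embed a family of shifted Gaussians; mean squared differences between
# quantile vectors approximate their pairwise squared W2 distances.
rng = np.random.default_rng(0)
embeddings = np.stack([quantile_embedding(rng.normal(loc=t, size=400))
                       for t in np.linspace(0, 3, 8)])
D2 = ((embeddings[:, None, :] - embeddings[None, :, :]) ** 2).mean(-1)
coords = classical_mds(D2, k=2)
print(coords.shape)  # (8, 2): recovered low-dimensional layout
```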
arXiv Detail & Related papers (2023-02-14T22:12:16Z) - Statistical, Robustness, and Computational Guarantees for Sliced Wasserstein Distances [18.9717974398864]
Sliced Wasserstein distances preserve properties of classic Wasserstein distances while being more scalable for computation and estimation in high dimensions.
We quantify this scalability from three key aspects: (i) empirical convergence rates; (ii) robustness to data contamination; and (iii) efficient computational methods.
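Schematically, the empirical-convergence aspect amounts to a dimension-free parametric rate of the following form; the constant and the moment conditions are placeholders, not quoted from the paper.

```latex
% Dimension-free parametric rate for empirical sliced Wasserstein
% (schematic; precise constants and moment conditions are in the paper).
\mathbb{E}\bigl[\,\bigl|\mathrm{SW}_p(\hat{\mu}_n,\hat{\nu}_n)
  - \mathrm{SW}_p(\mu,\nu)\bigr|\,\bigr] \;\le\; C_p\, n^{-1/2}.
```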
arXiv Detail & Related papers (2022-10-17T15:04:51Z) - Understanding Entropic Regularization in GANs [5.448283690603358]
We study the influence of entropic regularization on the solution learned by Wasserstein GANs.
We show that entropic regularization promotes sparsification of the solution, while replacing the Wasserstein distance with the Sinkhorn divergence recovers the unregularized solution.
We conclude that these regularization techniques can improve the quality of the generator learned from empirical data for a large class of distributions.
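As background for the Sinkhorn claim, here is a minimal sketch of entropic optimal transport via Sinkhorn's matrix-scaling iterations and of the debiased Sinkhorn divergence. These are standard textbook constructions, not this paper's code; the cost returned is the simple transport cost under the entropic plan, and log-domain stabilization is omitted for brevity.

```python
import numpy as np

def sinkhorn_cost(x, y, eps=0.1, iters=500):
    """Entropic OT between uniform empirical measures via Sinkhorn's
    matrix-scaling iterations; returns the cost under the entropic plan."""
    a = np.full(len(x), 1.0 / len(x))
    b = np.full(len(y), 1.0 / len(y))
    C = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)  # squared-Euclidean cost
    K = np.exp(-C / eps)                                # Gibbs kernel
    u = np.ones_like(a)
    for _ in range(iters):
        v = b / (K.T @ u)
        u = a / (K @ v)
    P = u[:, None] * K * v[None, :]                     # transport plan
    return (P * C).sum()

def sinkhorn_divergence(x, y, eps=0.1):
    """Debiased Sinkhorn divergence: subtracting the self-transport terms
    removes the entropic bias, one way to see how the Sinkhorn divergence
    can recover behaviour of the unregularized problem."""
    return (sinkhorn_cost(x, y, eps)
            - 0.5 * sinkhorn_cost(x, x, eps)
            - 0.5 * sinkhorn_cost(y, y, eps))
```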
arXiv Detail & Related papers (2021-11-02T06:08:16Z) - Fast Approximation of the Sliced-Wasserstein Distance Using Concentration of Random Projections [19.987683989865708]
The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications.
We propose a new perspective to approximate SW by making use of the concentration of measure phenomenon.
Our method does not require sampling random projections and is therefore both more accurate and easier to use than the usual Monte Carlo approximation.
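The concentration-of-measure idea can be sketched as follows: approximate each projected measure by a moment-matched Gaussian and use the closed-form one-dimensional Gaussian W2 distance. This is a sketch of the idea under that Gaussian approximation; the paper's exact approximation and its error terms may differ.

```python
import numpy as np

def sw2_gaussian_approx(x, y):
    """Deterministic surrogate for SW_2^2 based on concentration of
    random projections: each projected measure is approximated by a
    Gaussian with moment-matched mean and variance, for which the 1-D
    W_2 distance has a closed form: (m1 - m2)^2 + (s1 - s2)^2."""
    d = x.shape[1]
    dm = x.mean(0) - y.mean(0)
    # Average projected variance: E_theta[theta^T Sigma theta] = tr(Sigma)/d.
    sx = np.sqrt(np.var(x, axis=0).sum() / d)
    sy = np.sqrt(np.var(y, axis=0).sum() / d)
    # E_theta[(theta . dm)^2] = ||dm||^2 / d for theta uniform on the sphere.
    return (dm @ dm) / d + (sx - sy) ** 2
```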
arXiv Detail & Related papers (2021-06-29T13:56:19Z) - Non-asymptotic convergence bounds for Wasserstein approximation using point clouds [0.0]
We show how to generate discrete data as if sampled from a model probability distribution.
We provide explicit non-asymptotic upper bounds for this approximation.
arXiv Detail & Related papers (2021-06-15T06:53:08Z) - Depth-based pseudo-metrics between probability distributions [1.1470070927586016]
We propose two new pseudo-metrics between continuous probability measures based on data depth and its associated central regions.
In contrast to the Wasserstein distance, the proposed pseudo-metrics do not suffer from the curse of dimensionality.
The regions-based pseudo-metric appears to be robust w.r.t. both outliers and heavy tails.
arXiv Detail & Related papers (2021-03-23T17:33:18Z) - Variational Transport: A Convergent Particle-Based Algorithm for Distributional Optimization [106.70006655990176]
Distributional optimization problems arise widely in machine learning and statistics.
We propose a novel particle-based algorithm, dubbed variational transport, which approximately performs Wasserstein gradient descent.
We prove that when the objective function satisfies a functional version of the Polyak-Lojasiewicz (PL) condition (Polyak, 1963) together with smoothness conditions, variational transport converges linearly.
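Variational transport estimates the first variation of the objective from samples via a variational (dual) representation; the sketch below shows only the generic particle update for the simple potential-energy functional F(mu) = E_mu[V(x)], whose Wasserstein gradient at x is the gradient of V. The paper's estimator of the functional gradient is not reproduced here.

```python
import numpy as np

def wasserstein_gradient_descent(particles, grad_V, step=0.1, iters=100):
    """Generic particle-based Wasserstein gradient descent for the
    potential-energy functional F(mu) = E_mu[V(x)]: every particle
    follows the negative gradient of the first variation V. Variational
    transport would estimate this gradient; here it is given in closed form."""
    x = particles.copy()
    for _ in range(iters):
        x -= step * grad_V(x)
    return x

# Example: V(x) = ||x||^2 / 2 drives all particles toward the origin.
rng = np.random.default_rng(0)
x0 = rng.normal(size=(256, 2))
xT = wasserstein_gradient_descent(x0, lambda x: x)
print(np.linalg.norm(xT, axis=1).mean())  # ~0: mass concentrates at the minimizer
```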
arXiv Detail & Related papers (2020-12-21T18:33:13Z) - Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
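The construction can be sketched as lifting each sample x to a point (x, f(x)) on a hypersurface and then slicing in the augmented space. The fixed random-weight MLP below is only a placeholder for the trained network in the paper, and all names are assumptions; the lifted samples could then be fed to a linear sliced Wasserstein estimator such as the Monte Carlo sketch earlier on this page.

```python
import numpy as np

def augment(samples, W1, b1, W2, b2):
    """Hypothetical augmentation: lift x to the hypersurface (x, f(x))
    with a small MLP f. In the ASWD the mapping is parameterized by a
    neural network and optimized; here the weights are fixed placeholders."""
    h = np.tanh(samples @ W1 + b1)
    return np.concatenate([samples, h @ W2 + b2], axis=1)

rng = np.random.default_rng(0)
d, hidden, extra = 10, 32, 6
params = (rng.normal(size=(d, hidden)), rng.normal(size=hidden),
          rng.normal(size=(hidden, extra)), rng.normal(size=extra))

x = rng.normal(size=(500, d))
y = rng.normal(loc=0.5, size=(500, d))
x_aug, y_aug = augment(x, *params), augment(y, *params)
print(x_aug.shape)  # (500, 16): slice in the augmented space
```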
arXiv Detail & Related papers (2020-06-15T23:00:08Z) - Projection Robust Wasserstein Distance and Riemannian Optimization [107.93250306339694]
We show that the projection robust Wasserstein (PRW) distance, also known as Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance.
This paper provides a first step into the computation of the PRW distance, linking its theory with experiments on synthetic and real data.
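For intuition, the k = 1 special case of this projection-pursuit problem (max-sliced W2) admits a simple projected-gradient-ascent sketch on the unit sphere; the paper's Riemannian algorithms for general k-dimensional projections on the Stiefel manifold are not reproduced here.

```python
import numpy as np

def max_sliced_w2(x, y, step=0.5, iters=200, seed=0):
    """Projected gradient ascent for the k = 1 special case of PRW
    (max-sliced W2): find the direction theta on the unit sphere that
    maximizes the 1-D W2 distance between the projections."""
    rng = np.random.default_rng(seed)
    theta = rng.normal(size=x.shape[1])
    theta /= np.linalg.norm(theta)
    for _ in range(iters):
        # Optimal 1-D matching for the current direction: sort both projections.
        ix = np.argsort(x @ theta)
        iy = np.argsort(y @ theta)
        delta = x[ix] - y[iy]                       # matched differences
        grad = 2.0 * (delta @ theta) @ delta / len(x)
        theta += step * grad                        # ascent step
        theta /= np.linalg.norm(theta)              # retract to the sphere
    xs, ys = np.sort(x @ theta), np.sort(y @ theta)
    return np.sqrt(np.mean((xs - ys) ** 2))
```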
arXiv Detail & Related papers (2020-06-12T20:40:22Z) - Stochastic Saddle-Point Optimization for Wasserstein Barycenters [69.68068088508505]
We consider the population Wasserstein barycenter problem for random probability measures supported on a finite set of points and generated by an online stream of data.
We exploit the structure of the problem to obtain a convex-concave saddle-point reformulation.
In the setting when the distribution of random probability measures is discrete, we propose an optimization algorithm and estimate its complexity.
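For orientation, a Wasserstein barycenter is easy to compute in one dimension, where the W2 barycenter averages quantile functions. The sketch below illustrates this classical fact; it is not the paper's saddle-point method.

```python
import numpy as np

def barycenter_1d(sample_sets, m=200):
    """W2 barycenter of 1-D measures = average of their quantile
    functions (classical fact; not the paper's saddle-point algorithm)."""
    qs = (np.arange(m) + 0.5) / m
    return np.mean([np.quantile(s, qs) for s in sample_sets], axis=0)

rng = np.random.default_rng(0)
streams = [rng.normal(loc=t, size=300) for t in (-1.0, 0.0, 2.0)]
print(barycenter_1d(streams).mean())  # ~ (-1 + 0 + 2) / 3
```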
arXiv Detail & Related papers (2020-06-11T19:40:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this information and is not responsible for any consequences of its use.