Distributional Sliced-Wasserstein and Applications to Generative
Modeling
- URL: http://arxiv.org/abs/2002.07367v2
- Date: Sun, 4 Oct 2020 07:21:55 GMT
- Title: Distributional Sliced-Wasserstein and Applications to Generative
Modeling
- Authors: Khai Nguyen and Nhat Ho and Tung Pham and Hung Bui
- Abstract summary: Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein distance (Max-SW), have been widely used in recent years.
We propose a novel distance, named Distributional Sliced-Wasserstein distance (DSW).
We show that the DSW is a generalization of Max-SW, and it can be computed efficiently by searching for the optimal push-forward measure.
- Score: 27.014748003733544
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Sliced-Wasserstein distance (SW) and its variant, Max Sliced-Wasserstein
distance (Max-SW), have been widely used in recent years due to their fast
computation and scalability even when the probability measures lie in a very
high dimensional space. However, SW requires many unnecessary projection
samples to approximate its value while Max-SW only uses the most important
projection, which ignores the information of other useful directions. In order
to account for these weaknesses, we propose a novel distance, named
Distributional Sliced-Wasserstein distance (DSW), that finds an optimal
distribution over projections that can balance between exploring distinctive
projecting directions and the informativeness of projections themselves. We
show that the DSW is a generalization of Max-SW, and it can be computed
efficiently by searching for the optimal push-forward measure over a set of
probability measures over the unit sphere satisfying certain regularizing
constraints that favor distinct directions. Finally, we conduct extensive
experiments with large-scale datasets to demonstrate the favorable performances
of the proposed distances over the previous sliced-based distances in
generative modeling applications.
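For concreteness, below is a minimal NumPy sketch of the Monte Carlo sliced-Wasserstein estimator and a crude max-sliced surrogate that the abstract contrasts; the function names, the uniform sampling of directions, and the random-candidate search for Max-SW are illustrative assumptions, not the authors' implementation (DSW instead learns a regularized distribution over directions on the unit sphere).

import numpy as np

def wasserstein_1d(u, v, p=2):
    # p-Wasserstein distance between two 1-D empirical measures with the
    # same number of samples (sort-and-compare formula).
    return np.mean(np.abs(np.sort(u) - np.sort(v)) ** p) ** (1.0 / p)

def sliced_wasserstein(X, Y, n_projections=100, p=2, seed=None):
    # Monte Carlo SW estimate: average the 1-D Wasserstein distance over
    # directions drawn uniformly from the unit sphere.
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_projections, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    dists = np.array([wasserstein_1d(X @ t, Y @ t, p) for t in thetas])
    return np.mean(dists ** p) ** (1.0 / p)

def max_sliced_wasserstein(X, Y, n_candidates=1000, p=2, seed=None):
    # Crude Max-SW surrogate: keep only the best of many random directions
    # (Max-SW proper optimizes the direction over the whole sphere).
    rng = np.random.default_rng(seed)
    thetas = rng.normal(size=(n_candidates, X.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return max(wasserstein_1d(X @ t, Y @ t, p) for t in thetas)

In this picture, SW averages uniformly over directions (many of them uninformative), Max-SW keeps only one, and DSW sits in between by optimizing the distribution from which directions are drawn, with a regularizer that keeps the sampled directions distinct.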
Related papers
- Sliced Wasserstein with Random-Path Projecting Directions [49.802024788196434]
We propose an optimization-free slicing distribution that provides fast sampling for the Monte Carlo estimation of the expectation.
We derive the random-path slicing distribution (RPSD) and two variants of sliced Wasserstein, i.e., the Random-Path Projection Sliced Wasserstein (RPSW) and the Importance Weighted Random-Path Projection Sliced Wasserstein (IWRPSW).
arXiv Detail & Related papers (2024-01-29T04:59:30Z) - Markovian Sliced Wasserstein Distances: Beyond Independent Projections [51.80527230603978]
We introduce a new family of SW distances, named Markovian sliced Wasserstein (MSW) distances, which impose a first-order Markov structure on projecting directions.
We compare MSW with previous SW variants in various applications, such as gradient flows, color transfer, and deep generative modeling, to demonstrate its favorable performance.
arXiv Detail & Related papers (2023-01-10T01:58:15Z) - Hierarchical Sliced Wasserstein Distance [27.12983497199479]
Sliced Wasserstein (SW) distance can be scaled to a large number of supports without suffering from the curse of dimensionality.
Despite its efficiency in the number of supports, estimating the sliced Wasserstein distance requires a relatively large number of projections in high-dimensional settings.
We propose to derive projections by linearly and randomly combining a smaller number of projections, called bottleneck projections (a minimal sketch of this construction appears after this list).
We then formulate the approach into a new metric between measures, named Hierarchical Sliced Wasserstein (HSW) distance.
arXiv Detail & Related papers (2022-09-27T17:46:15Z) - Fast Approximation of the Sliced-Wasserstein Distance Using
Concentration of Random Projections [19.987683989865708]
The Sliced-Wasserstein distance (SW) is being increasingly used in machine learning applications.
We propose a new perspective to approximate SW by making use of the concentration of measure phenomenon.
Our method does not require sampling a number of random projections, and is therefore both accurate and easy to use compared to the usual Monte Carlo approximation.
arXiv Detail & Related papers (2021-06-29T13:56:19Z) - Instance-Optimal Compressed Sensing via Posterior Sampling [101.43899352984774]
We show, for Gaussian measurements and any prior distribution on the signal, that the posterior sampling estimator achieves near-optimal recovery guarantees.
We implement the posterior sampling estimator for deep generative priors using Langevin dynamics, and empirically find that it produces accurate estimates with more diversity than MAP.
arXiv Detail & Related papers (2021-06-21T22:51:56Z) - On Projection Robust Optimal Transport: Sample Complexity and Model
Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, by averaging rather than optimizing on subspaces.
arXiv Detail & Related papers (2020-06-22T14:35:33Z) - Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
arXiv Detail & Related papers (2020-06-15T23:00:08Z) - Projection Robust Wasserstein Distance and Riemannian Optimization [107.93250306339694]
Projection robust Wasserstein (PRW) distance, also known as Wasserstein projection pursuit (WPP), is a robust variant of the Wasserstein distance.
This paper provides a first step into the computation of the PRW distance and provides links between its theory and experiments on synthetic and real data.
arXiv Detail & Related papers (2020-06-12T20:40:22Z)
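As a hedged illustration of the bottleneck-projection idea from the Hierarchical Sliced Wasserstein entry above: draw a small set of bottleneck directions, form many projecting directions as random linear combinations of them, and plug the result into the usual Monte Carlo SW estimator. The function names and the Gaussian mixing weights are assumptions for illustration, not the paper's exact construction.

import numpy as np

def wasserstein_1d(u, v, p=2):
    # 1-D p-Wasserstein distance between equal-size empirical samples.
    return np.mean(np.abs(np.sort(u) - np.sort(v)) ** p) ** (1.0 / p)

def bottleneck_projections(d, n_bottleneck=8, n_projections=100, seed=None):
    # A few unit "bottleneck" directions, then many directions obtained as
    # normalized random linear combinations of them.
    rng = np.random.default_rng(seed)
    base = rng.normal(size=(n_bottleneck, d))
    base /= np.linalg.norm(base, axis=1, keepdims=True)
    weights = rng.normal(size=(n_projections, n_bottleneck))  # assumed Gaussian mixing
    thetas = weights @ base
    return thetas / np.linalg.norm(thetas, axis=1, keepdims=True)

def hierarchical_sliced_wasserstein(X, Y, p=2, seed=None):
    # SW estimate over hierarchically generated directions: only the few
    # bottleneck directions are drawn from scratch in high dimension.
    thetas = bottleneck_projections(X.shape[1], seed=seed)
    dists = np.array([wasserstein_1d(X @ t, Y @ t, p) for t in thetas])
    return np.mean(dists ** p) ** (1.0 / p)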
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.