Max-Sliced Wasserstein Distance and its use for GANs
- URL: http://arxiv.org/abs/1904.05877v2
- Date: Tue, 30 Sep 2025 00:25:53 GMT
- Title: Max-Sliced Wasserstein Distance and its use for GANs
- Authors: Ishan Deshpande, Yuan-Ting Hu, Ruoyu Sun, Ayis Pyrros, Nasir Siddiqui, Sanmi Koyejo, Zhizhen Zhao, David Forsyth, Alexander Schwing
- Abstract summary: Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities. We show that the sample complexity of the distance metrics remains one of the factors affecting GAN training. We show that the proposed distance easily trains GANs on high-dimensional images up to a resolution of 256x256.
- Score: 55.09958914575673
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities, showing promise for dataset augmentation, image-to-image translation and feature learning. However, to model high-dimensional distributions, sequential training and stacked architectures are common, increasing the number of tunable hyper-parameters as well as the training time. Nonetheless, the sample complexity of the distance metrics remains one of the factors affecting GAN training. We first show that the recently proposed sliced Wasserstein distance has compelling sample complexity properties when compared to the Wasserstein distance. To further improve the sliced Wasserstein distance we then analyze its "projection complexity" and develop the max-sliced Wasserstein distance, which enjoys compelling sample complexity while reducing projection complexity, albeit necessitating a max estimation. We finally illustrate that the proposed distance easily trains GANs on high-dimensional images up to a resolution of 256x256.
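The abstract contrasts the sliced Wasserstein distance, which averages one-dimensional Wasserstein distances over random projection directions, with the max-sliced variant, which keeps only the worst-case direction. The NumPy sketch below is a hedged illustration of both estimators on equal-size samples; it is not the authors' released code, and the random search over candidate directions merely stands in for the gradient-based max estimation (e.g. through a critic network) used when training GANs.

```python
import numpy as np

def wasserstein_1d(u, v, p=2):
    """p-Wasserstein distance between two equal-size 1-D samples
    (sorting both samples gives the optimal 1-D coupling)."""
    u, v = np.sort(u), np.sort(v)
    return float(np.mean(np.abs(u - v) ** p) ** (1.0 / p))

def sliced_wasserstein(X, Y, n_proj=100, p=2, seed=None):
    """Average the projected p-Wasserstein distance over random unit directions."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    total = 0.0
    for _ in range(n_proj):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        total += wasserstein_1d(X @ theta, Y @ theta, p) ** p
    return (total / n_proj) ** (1.0 / p)

def max_sliced_wasserstein(X, Y, n_candidates=2000, p=2, seed=None):
    """Crude random search for the worst-case direction; in practice the
    maximizing direction would be found by gradient ascent."""
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    best = 0.0
    for _ in range(n_candidates):
        theta = rng.normal(size=d)
        theta /= np.linalg.norm(theta)
        best = max(best, wasserstein_1d(X @ theta, Y @ theta, p))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(512, 64))   # "real" samples
    Y = rng.normal(0.5, 1.0, size=(512, 64))   # shifted "generated" samples
    print("sliced W2     :", sliced_wasserstein(X, Y, seed=1))
    print("max-sliced W2 :", max_sliced_wasserstein(X, Y, seed=1))
```

Because the random search only lower-bounds the true maximum over directions, a GAN implementation would instead ascend on the projection direction jointly with the generator updates, which is the "max estimation" the abstract refers to.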
Related papers
- Fast Estimation of Wasserstein Distances via Regression on Sliced Wasserstein Distances [70.94157767200342]
We propose a fast estimation method based on regressing Wasserstein distance on sliced Wasserstein distances. We show that accurate models can be learned from a small number of distribution pairs. Our method consistently provides a better approximation of Wasserstein distance than the state-of-the-art Wasserstein embedding model, Wasserstein Wormhole.
arXiv Detail & Related papers (2025-09-24T19:30:53Z) - Linearized Wasserstein dimensionality reduction with approximation guarantees [65.16758672591365]
LOT Wassmap is a computationally feasible algorithm to uncover low-dimensional structures in the Wasserstein space.
We show that LOT Wassmap attains correct embeddings and that the quality improves with increased sample size.
We also show how LOT Wassmap significantly reduces the computational cost when compared to algorithms that depend on pairwise distance computations.
arXiv Detail & Related papers (2023-02-14T22:12:16Z) - Adversarial network training using higher-order moments in a modified Wasserstein distance [0.0]
Generative-adversarial networks (GANs) have been used to produce data closely resembling example data in a compressed, latent space.
The Wasserstein metric has been used as an alternative to binary cross-entropy, producing more numerically stable GANs.
arXiv Detail & Related papers (2022-10-07T06:56:44Z) - Hierarchical Sliced Wasserstein Distance [27.12983497199479]
Sliced Wasserstein (SW) distance can be scaled to a large number of supports without suffering from the curse of dimensionality.
Despite its efficiency in the number of supports, estimating the sliced Wasserstein requires a relatively large number of projections in high-dimensional settings.
We propose to derive projections by linearly and randomly combining a smaller number of projections, named bottleneck projections (a rough sketch of this construction appears after this list).
We then formulate the approach into a new metric between measures, named the Hierarchical Sliced Wasserstein (HSW) distance.
arXiv Detail & Related papers (2022-09-27T17:46:15Z) - Learning High Dimensional Wasserstein Geodesics [55.086626708837635]
We propose a new formulation and learning strategy for computing the Wasserstein geodesic between two probability distributions in high dimensions.
By applying the method of Lagrange multipliers to the dynamic formulation of the optimal transport (OT) problem, we derive a minimax problem whose saddle point is the Wasserstein geodesic.
We then parametrize the functions by deep neural networks and design a sample based bidirectional learning algorithm for training.
arXiv Detail & Related papers (2021-02-05T04:25:28Z) - Two-sample Test using Projected Wasserstein Distance [18.46110328123008]
We develop a projected Wasserstein distance for the two-sample test, a fundamental problem in statistics and machine learning.
A key contribution is to couple the test with an optimal projection: a low-dimensional linear mapping is chosen to maximize the Wasserstein distance between the projected probability distributions.
arXiv Detail & Related papers (2020-10-22T18:08:58Z) - On Projection Robust Optimal Transport: Sample Complexity and Model Misspecification [101.0377583883137]
Projection robust (PR) OT seeks to maximize the OT cost between two measures by choosing a $k$-dimensional subspace onto which they can be projected.
Our first contribution is to establish several fundamental statistical properties of PR Wasserstein distances.
Next, we propose the integral PR Wasserstein (IPRW) distance as an alternative to the PRW distance, by averaging rather than optimizing on subspaces.
arXiv Detail & Related papers (2020-06-22T14:35:33Z) - Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
arXiv Detail & Related papers (2020-06-15T23:00:08Z)
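To make the Hierarchical Sliced Wasserstein entry above more concrete, here is a rough NumPy sketch of the bottleneck-projection idea as described in its summary: a small set of directions is sampled in the data dimension, and many final projections are obtained as random linear combinations of them. The function name, the Gaussian mixing weights, and the normalization are illustrative assumptions, not the HSW paper's reference implementation.

```python
import numpy as np

def wasserstein_1d(u, v, p=2):
    """p-Wasserstein distance between two equal-size 1-D samples."""
    u, v = np.sort(u), np.sort(v)
    return float(np.mean(np.abs(u - v) ** p) ** (1.0 / p))

def hierarchical_sliced_wasserstein(X, Y, n_bottleneck=8, n_proj=128, p=2, seed=None):
    rng = np.random.default_rng(seed)
    d = X.shape[1]
    # A few "expensive" bottleneck directions sampled in the data dimension d.
    bottleneck = rng.normal(size=(n_bottleneck, d))
    bottleneck /= np.linalg.norm(bottleneck, axis=1, keepdims=True)
    # Many cheap projections formed as random linear combinations of the
    # bottleneck directions (mixing weights live in the small n_bottleneck space).
    weights = rng.normal(size=(n_proj, n_bottleneck))
    thetas = weights @ bottleneck
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    # Average the one-dimensional distances over the combined projections.
    total = sum(wasserstein_1d(X @ t, Y @ t, p) ** p for t in thetas)
    return (total / n_proj) ** (1.0 / p)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    X = rng.normal(0.0, 1.0, size=(256, 128))
    Y = rng.normal(0.3, 1.0, size=(256, 128))
    print("HSW estimate:", hierarchical_sliced_wasserstein(X, Y, seed=1))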