Adversarial network training using higher-order moments in a modified
Wasserstein distance
- URL: http://arxiv.org/abs/2210.03354v1
- Date: Fri, 7 Oct 2022 06:56:44 GMT
- Title: Adversarial network training using higher-order moments in a modified
Wasserstein distance
- Authors: Oliver Serang
- Abstract summary: Generative-adversarial networks (GANs) have been used to produce data closely resembling example data in a compressed, latent space.
The Wasserstein metric has been used as an alternative to binary cross-entropy, producing more numerically stable GANs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Generative-adversarial networks (GANs) have been used to produce data closely
resembling example data in a compressed, latent space that is close to
sufficient for reconstruction in the original vector space. The Wasserstein
metric has been used as an alternative to binary cross-entropy, producing more
numerically stable GANs with greater mode covering behavior. Here, a
generalization of the Wasserstein distance, using higher-order moments than the
mean, is derived. Training a GAN with this higher-order Wasserstein metric is
demonstrated to exhibit superior performance, even when adjusted for slightly
higher computational cost. This is illustrated by generating synthetic antibody
sequences.
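The abstract does not spell out the construction, but in one dimension the p-Wasserstein distance between equal-size samples reduces to a p-th moment of sorted-sample differences, which shows how higher-order moments can enter the metric. A minimal illustrative sketch (not the paper's exact generalization):

```python
import numpy as np

def wasserstein_1d(x, y, p=1):
    """Empirical p-Wasserstein distance between two 1-D samples of equal size.

    In one dimension the optimal coupling sorts both samples, so W_p reduces
    to the p-th moment of the coordinate-wise differences of sorted samples."""
    x, y = np.sort(x), np.sort(y)
    return np.mean(np.abs(x - y) ** p) ** (1.0 / p)

rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=1000)
b = rng.normal(0.5, 2.0, size=1000)
for p in (1, 2, 4):  # larger p weights tail (higher-moment) discrepancies more
    print(f"W_{p} = {wasserstein_1d(a, b, p):.3f}")
```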
Related papers
- Fast Estimation of Wasserstein Distances via Regression on Sliced Wasserstein Distances [70.94157767200342]
We propose a fast estimation method based on regressing Wasserstein distance on sliced Wasserstein distances.
We show that accurate models can be learned from a small number of distribution pairs.
Our method consistently provides a better approximation of Wasserstein distance than the state-of-the-art Wasserstein embedding model, Wasserstein Wormhole.
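A minimal sketch of the regression idea, with a simple linear fit (standing in for the paper's model) from cheap sliced-Wasserstein estimates to exact distances computed via the assignment problem:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def exact_w1(x, y):
    """Exact W1 between equal-size empirical point clouds (assignment problem)."""
    cost = np.linalg.norm(x[:, None, :] - y[None, :, :], axis=-1)
    r, c = linear_sum_assignment(cost)
    return cost[r, c].mean()

def sliced_w1(x, y, n_proj=32):
    """Cheap sliced W1: average 1-D Wasserstein over random projections."""
    thetas = rng.normal(size=(n_proj, x.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return np.mean([wasserstein_distance(x @ t, y @ t) for t in thetas])

# Build a small training set of (sliced W, exact W) pairs, then regress.
pairs = [(rng.normal(rng.uniform(-2, 2), rng.uniform(0.5, 2), size=(64, 2)),
          rng.normal(rng.uniform(-2, 2), rng.uniform(0.5, 2), size=(64, 2)))
         for _ in range(40)]
sw = np.array([sliced_w1(x, y) for x, y in pairs])
w = np.array([exact_w1(x, y) for x, y in pairs])
slope, intercept = np.polyfit(sw, w, 1)   # simple linear regressor
print("predicted W1 =", slope * sw[0] + intercept, "vs exact", w[0])
```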
arXiv Detail & Related papers (2025-09-24T19:30:53Z)
- Deep Generative Symbolic Regression [83.04219479605801]
Symbolic regression aims to discover concise closed-form mathematical equations from data.
Existing methods, ranging from search to reinforcement learning, fail to scale with the number of input variables.
We propose an instantiation of our framework, Deep Generative Symbolic Regression.
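The paper's deep generative approach is out of scope for a snippet, but a toy brute-force search over a hand-picked candidate set at least illustrates the symbolic-regression task itself:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, size=200)
y = x**2 + np.sin(x)              # hidden ground-truth equation

# Tiny hypothesis space of closed-form candidates (real systems search
# expression trees; a deep generative model proposes candidates instead).
candidates = {
    "x**2 + sin(x)": lambda x: x**2 + np.sin(x),
    "x**2":          lambda x: x**2,
    "sin(x) + x":    lambda x: np.sin(x) + x,
    "exp(x)":        lambda x: np.exp(x),
}
best = min(candidates, key=lambda k: np.mean((candidates[k](x) - y) ** 2))
print("recovered equation:", best)
```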
arXiv Detail & Related papers (2023-12-30T17:05:31Z)
- Adversarial Likelihood Estimation With One-Way Flows [44.684952377918904]
Generative Adversarial Networks (GANs) can produce high-quality samples, but do not provide an estimate of the probability density around the samples.
We show that our method converges faster, produces comparable sample quality to GANs with similar architecture, successfully avoids over-fitting to commonly used datasets and produces smooth low-dimensional latent representations of the training data.
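A one-way flow is beyond a short snippet, but the change-of-variables identity behind flow-based density estimates (the quantity GANs lack) fits in a few lines; a hand-built affine map stands in for the learned flow:

```python
import numpy as np

# Change of variables: if x = f(z) with invertible f and z ~ N(0, 1),
# then log p(x) = log N(f^{-1}(x); 0, 1) - log |f'(f^{-1}(x))|.
a, b = 2.0, -1.0                      # toy affine "flow": x = a * z + b

def log_density(x):
    z = (x - b) / a                   # invert the flow
    log_pz = -0.5 * (z**2 + np.log(2 * np.pi))
    return log_pz - np.log(abs(a))    # subtract log|det Jacobian|

print(log_density(0.0))               # an exact density, something GANs lack
```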
arXiv Detail & Related papers (2023-07-19T10:26:29Z)
- LD-GAN: Low-Dimensional Generative Adversarial Network for Spectral Image Generation with Variance Regularization [72.4394510913927]
Deep learning methods are state-of-the-art for spectral image (SI) computational tasks.
GANs enable diverse augmentation by learning and sampling from the data distribution.
GAN-based SI generation is challenging because the high-dimensional nature of this kind of data hinders the convergence of GAN training, yielding suboptimal generation.
We propose a statistical regularization to control the variance of the low-dimensional representation during autoencoder training and to achieve high diversity in the samples generated with the GAN.
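A minimal sketch of one plausible form of such a regularizer, assuming a penalty that keeps the per-dimension standard deviation of latent codes near a target value (the paper's exact formulation may differ; `target_std` and `weight` are illustrative parameters):

```python
import numpy as np

def variance_regularizer(z, target_std=1.0, weight=0.1):
    """Penalize latent codes whose per-dimension spread collapses or explodes.

    z: (batch, latent_dim) array of autoencoder codes."""
    std = z.std(axis=0)
    return weight * np.mean((std - target_std) ** 2)

z = np.random.default_rng(0).normal(0, 0.2, size=(128, 16))  # collapsed codes
print(variance_regularizer(z))   # large penalty pushes the variance back up
# total_loss = reconstruction_loss + variance_regularizer(z)  # hypothetical use
```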
arXiv Detail & Related papers (2023-04-29T00:25:02Z)
- Hierarchical Sliced Wasserstein Distance [27.12983497199479]
Sliced Wasserstein (SW) distance can be scaled to a large number of supports without suffering from the curse of dimensionality.
Despite its efficiency in the number of supports, estimating the sliced Wasserstein distance requires a relatively large number of projections in high-dimensional settings.
We propose to derive projections by linearly and randomly combining a smaller number of projections, named bottleneck projections.
We then formulate the approach into a new metric between measures, named Hierarchical Sliced Wasserstein (HSW) distance.
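A minimal sketch of the described construction: draw a few bottleneck projections, form many final projections as random linear mixtures of them, and average the resulting one-dimensional distances:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def hierarchical_sw(x, y, n_bottleneck=4, n_final=64):
    """Sketch of HSW: few 'bottleneck' directions, many cheap mixtures of them."""
    bottleneck = rng.normal(size=(n_bottleneck, x.shape[1]))  # base directions
    mix = rng.normal(size=(n_final, n_bottleneck))            # cheap random mixing
    thetas = mix @ bottleneck
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return np.mean([wasserstein_distance(x @ t, y @ t) for t in thetas])

x = rng.normal(0, 1, size=(256, 32))
y = rng.normal(1, 1, size=(256, 32))
print(hierarchical_sw(x, y))
```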
arXiv Detail & Related papers (2022-09-27T17:46:15Z)
- Wasserstein Iterative Networks for Barycenter Estimation [80.23810439485078]
We present an algorithm to approximate the Wasserstein-2 barycenters of continuous measures via a generative model.
Based on the celebrity faces dataset, we construct the Ave, celeba! dataset, which can be used for quantitative evaluation of barycenter algorithms.
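The paper's generative approach targets continuous measures; in one dimension, though, the Wasserstein-2 barycenter has a closed form (point-wise averaging of quantile functions), which makes for a compact illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
samples = [rng.normal(-2, 0.5, 500), rng.normal(3, 1.5, 500)]

# In 1-D the Wasserstein-2 barycenter averages quantile functions:
# sort each sample (empirical quantiles), then average point-wise.
quantiles = np.stack([np.sort(s) for s in samples])
barycenter = quantiles.mean(axis=0)
print(barycenter.mean())   # the mean interpolates the input means: ~0.5
```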
arXiv Detail & Related papers (2022-01-28T16:59:47Z)
- Understanding Entropic Regularization in GANs [5.448283690603358]
We study the influence of regularization on the solution learned with the Wasserstein distance.
We show that entropy regularization promotes sparsification of the solution, while replacing the Wasserstein distance with the Sinkhorn divergence recovers the unregularized solution.
We conclude that these regularization techniques can improve the quality of the generator learned from empirical data for a large class of distributions.
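For reference, the standard Sinkhorn iterations that implement entropy-regularized optimal transport; as `eps` shrinks, the plan approaches the unregularized one (a standard fact, not specific to this paper):

```python
import numpy as np

def sinkhorn(cost, eps=0.1, n_iter=200):
    """Entropy-regularized OT plan between two uniform discrete measures."""
    n, m = cost.shape
    K = np.exp(-cost / eps)                   # Gibbs kernel
    a, b = np.ones(n) / n, np.ones(m) / m     # target marginals
    u, v = np.ones(n), np.ones(m)
    for _ in range(n_iter):                   # alternating scaling
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]        # transport plan

cost = np.abs(np.subtract.outer(np.linspace(0, 1, 5), np.linspace(0, 1, 5)))
print(sinkhorn(cost, eps=0.05).round(3))      # small eps: near-unregularized plan
```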
arXiv Detail & Related papers (2021-11-02T06:08:16Z)
- Conditional Versus Adversarial Euler-based Generators For Time Series [2.2344764434954256]
We introduce new generative models for time series based on Euler discretization.
Tests show how the Euler discretization and the use of the Wasserstein distance allow the proposed GANs and, even more so, CEGEN to outperform state-of-the-art Time Series GAN generation.
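A minimal sketch of an Euler-discretized generator; in the paper the drift and diffusion coefficients would be parameterized by networks, whereas fixed Ornstein-Uhlenbeck coefficients are used here for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_paths(b, sigma, x0, dt=0.01, n_steps=100, n_paths=10):
    """Generate time series by Euler discretization of dX = b(X)dt + sigma(X)dW."""
    x = np.full(n_paths, x0, dtype=float)
    path = [x.copy()]
    for _ in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)  # Brownian increments
        x = x + b(x) * dt + sigma(x) * dw
        path.append(x.copy())
    return np.stack(path)

# Ornstein-Uhlenbeck example: mean-reverting drift, constant volatility.
paths = euler_paths(b=lambda x: -1.0 * x, sigma=lambda x: 0.3, x0=1.0)
print(paths.shape)   # (n_steps + 1, n_paths)
```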
arXiv Detail & Related papers (2021-02-10T08:18:35Z)
- Augmented Sliced Wasserstein Distances [55.028065567756066]
We propose a new family of distance metrics, called augmented sliced Wasserstein distances (ASWDs).
ASWDs are constructed by first mapping samples to higher-dimensional hypersurfaces parameterized by neural networks.
Numerical results demonstrate that the ASWD significantly outperforms other Wasserstein variants for both synthetic and real-world problems.
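A minimal sketch of the lifting idea, with a fixed random feature map standing in for the neural-network-parameterized hypersurface:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def lifted_sliced_w(x, y, lift_dim=64, n_proj=32):
    """Sliced Wasserstein after lifting samples to a higher-dimensional space.

    The ASWD learns the lifting with a neural network; a fixed random
    feature map stands in for it here."""
    W = rng.normal(size=(x.shape[1], lift_dim))
    lift = lambda s: np.concatenate([s, np.tanh(s @ W)], axis=1)  # hypersurface map
    xl, yl = lift(x), lift(y)
    thetas = rng.normal(size=(n_proj, xl.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return np.mean([wasserstein_distance(xl @ t, yl @ t) for t in thetas])

x = rng.normal(0, 1, size=(256, 8))
y = rng.normal(0.5, 1, size=(256, 8))
print(lifted_sliced_w(x, y))
```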
arXiv Detail & Related papers (2020-06-15T23:00:08Z)
- Discriminator Contrastive Divergence: Semi-Amortized Generative Modeling by Exploring Energy of the Discriminator [85.68825725223873]
Generative Adversarial Networks (GANs) have shown great promise in modeling high dimensional data.
We introduce the Discriminator Contrastive Divergence, which is well motivated by the property of WGAN's discriminator.
We demonstrate the benefits of significantly improved generation on both synthetic data and several real-world image generation benchmarks.
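A toy illustration of the underlying mechanics, assuming the standard recipe of treating the discriminator as an energy and refining samples with Langevin dynamics; an analytic quadratic energy replaces the learned discriminator here:

```python
import numpy as np

rng = np.random.default_rng(0)

# Treat a (here analytic) discriminator output as an energy and refine
# samples with Langevin dynamics: x <- x - step*grad E(x) + noise.
energy_grad = lambda x: -(x - 2.0)          # -grad E for E(x) = 0.5*(x-2)^2

def langevin_refine(x, step=0.05, n_steps=200):
    for _ in range(n_steps):
        x = x + step * energy_grad(x) + np.sqrt(2 * step) * rng.normal(size=x.shape)
    return x

samples = rng.normal(0.0, 1.0, size=1000)   # stand-in "generator" samples
print(langevin_refine(samples).mean())      # drifts toward the low-energy mode at 2
```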
arXiv Detail & Related papers (2020-04-05T01:50:16Z)
- Nested-Wasserstein Self-Imitation Learning for Sequence Generation [158.19606942252284]
We propose the concept of nested-Wasserstein distance for distributional semantic matching.
A novel nested-Wasserstein self-imitation learning framework is developed, encouraging the model to exploit historical high-reward sequences.
arXiv Detail & Related papers (2020-01-20T02:19:13Z)
- Max-Sliced Wasserstein Distance and its use for GANs [55.09958914575673]
Generative adversarial nets (GANs) and variational auto-encoders have significantly improved our distribution modeling capabilities.
We show that the sample complexity of the distance metrics remains one of the factors affecting GAN training.
We show that a proposed distance easily trains GANs on high-dimensional images up to a resolution of 256x256.
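A minimal sketch of the max-sliced idea, keeping only the single most discriminative direction; random search over directions stands in for the optimization used in practice:

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

def max_sliced_w(x, y, n_candidates=256):
    """Approximate max-sliced Wasserstein: keep the worst-case direction only."""
    thetas = rng.normal(size=(n_candidates, x.shape[1]))
    thetas /= np.linalg.norm(thetas, axis=1, keepdims=True)
    return max(wasserstein_distance(x @ t, y @ t) for t in thetas)

x = rng.normal(0, 1, size=(512, 16))
y = x.copy()
y[:, 0] += 3.0                               # distributions differ along one axis
print(max_sliced_w(x, y))                    # ~3, found by the best direction
```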
arXiv Detail & Related papers (2019-04-11T17:59:57Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.