Generative models with kernel distance in data space
- URL: http://arxiv.org/abs/2009.07327v1
- Date: Tue, 15 Sep 2020 19:11:47 GMT
- Title: Generative models with kernel distance in data space
- Authors: Szymon Knop, Marcin Mazur, Przemysław Spurek, Jacek Tabor, Igor Podolak
- Abstract summary: The LCW generator resembles a classical GAN in transforming Gaussian noise into data space.
First, an autoencoder-based architecture, using kernel measures, is built to model the data manifold.
We then propose a Latent Trick mapping a Gaussian distribution to the latent space to obtain the final model.
- Score: 10.002379593718471
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models dealing with modeling a joint data distribution are
generally either autoencoder or GAN based. Both have their pros and cons: the
former tend to generate blurry images, while the latter are unstable in training
and prone to the mode collapse phenomenon. The objective of this paper is to
construct a model situated between the above architectures, one that does not
inherit their main weaknesses. The proposed LCW generator (Latent Cramer-Wold
generator) resembles a classical GAN in transforming Gaussian noise into data
space. Most importantly, instead of a discriminator, the LCW generator uses a
kernel distance. No adversarial training is utilized, hence the name generator.
It is trained in two phases. First, an autoencoder-based architecture, using
kernel measures, is built to model the data manifold. We then propose a Latent
Trick mapping a Gaussian distribution to the latent space to obtain the final
model. This results in very competitive FID values.
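The two-phase procedure described in the abstract can be illustrated with a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the Gaussian-kernel MMD below stands in for the paper's Cramer-Wold distance, and the architectures, the stand-in `data_loader`, and all hyperparameters are hypothetical.

```python
import torch
import torch.nn as nn

def kernel_distance(x, y, gamma=1.0):
    # Squared MMD with a Gaussian kernel: a generic kernel distance between
    # two sample batches (a stand-in for the paper's Cramer-Wold distance).
    def k(a, b):
        return torch.exp(-gamma * torch.cdist(a, b).pow(2)).mean()
    return k(x, x) + k(y, y) - 2 * k(x, y)

data_dim, latent_dim = 784, 8  # illustrative sizes
encoder = nn.Sequential(nn.Linear(data_dim, 256), nn.ReLU(), nn.Linear(256, latent_dim))
decoder = nn.Sequential(nn.Linear(latent_dim, 256), nn.ReLU(), nn.Linear(256, data_dim))
latent_gen = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(), nn.Linear(64, latent_dim))

# Stand-in for a real dataset of flattened data vectors.
data_loader = [torch.randn(64, data_dim) for _ in range(100)]

# Phase 1: an autoencoder models the data manifold; the kernel distance is
# computed in data space, so no discriminator is needed.
opt_ae = torch.optim.Adam([*encoder.parameters(), *decoder.parameters()], lr=1e-3)
for x in data_loader:
    recon = decoder(encoder(x))
    loss = (x - recon).pow(2).mean() + kernel_distance(recon, x)
    opt_ae.zero_grad(); loss.backward(); opt_ae.step()

# Phase 2 (the "Latent Trick"): map Gaussian noise onto the learned latent
# distribution, so that decoder(latent_gen(noise)) becomes the final model.
opt_g = torch.optim.Adam(latent_gen.parameters(), lr=1e-3)
for x in data_loader:
    with torch.no_grad():
        z_data = encoder(x)
    noise = torch.randn(x.size(0), latent_dim)
    loss = kernel_distance(latent_gen(noise), z_data)
    opt_g.zero_grad(); loss.backward(); opt_g.step()

# Sampling: transform Gaussian noise into data space, as in a classical GAN.
samples = decoder(latent_gen(torch.randn(16, latent_dim)))
```

Because both phases compare sample batches only through a closed-form kernel statistic, training avoids the adversarial min-max game entirely.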
Related papers
- IFH: a Diffusion Framework for Flexible Design of Graph Generative Models [53.219279193440734]
Graph generative models can be classified into two prominent families: one-shot models, which generate a graph in one go, and sequential models, which generate a graph by successive additions of nodes and edges.
This paper proposes a graph generative model, called Insert-Fill-Halt (IFH), that supports the specification of a sequentiality degree.
arXiv Detail & Related papers (2024-08-23T16:24:40Z)
- Promises and Pitfalls of Generative Masked Language Modeling: Theoretical Framework and Practical Guidelines [74.42485647685272]
We focus on Generative Masked Language Models (GMLMs).
We train a model to fit conditional probabilities of the data distribution via masking; these are subsequently used as inputs to a Markov chain to draw samples from the model (see the sketch after this entry).
We adapt the T5 model for iteratively refined parallel decoding, achieving a 2-3x speedup in machine translation with minimal sacrifice in quality.
arXiv Detail & Related papers (2024-07-22T18:00:00Z)
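As a rough illustration of the masked-conditional sampling described above, the following sketch runs a Gibbs-style Markov chain with an off-the-shelf masked language model. The choice of `bert-base-uncased`, the sequence length, and the number of steps are assumptions made for illustration; the paper itself adapts T5 for iteratively refined parallel decoding.

```python
import torch
from transformers import AutoTokenizer, AutoModelForMaskedLM

# Illustrative model choice; the paper's setup (an adapted T5) differs.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased").eval()

length, steps = 12, 50
ids = torch.full((1, length), tok.mask_token_id)  # start from all-masked text
for _ in range(steps):
    pos = torch.randint(0, length, (1,)).item()  # pick a position to resample
    ids[0, pos] = tok.mask_token_id              # re-mask that position
    with torch.no_grad():
        logits = model(input_ids=ids).logits     # conditional distribution
    ids[0, pos] = torch.multinomial(logits[0, pos].softmax(-1), 1)
print(tok.decode(ids[0]))
```

Each step resamples one token from the model's conditional distribution given the rest of the sequence, so repeated steps form a Markov chain whose draws approximate samples from the modeled distribution.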
- Ref-Diff: Zero-shot Referring Image Segmentation with Generative Models [68.73086826874733]
We introduce a novel Referring Diffusional segmentor (Ref-Diff) for referring image segmentation.
We demonstrate that without a proposal generator, a generative model alone can achieve comparable performance to existing SOTA weakly-supervised models.
This indicates that generative models are also beneficial for this task and can complement discriminative models for better referring segmentation.
arXiv Detail & Related papers (2023-08-31T14:55:30Z)
- Complexity Matters: Rethinking the Latent Space for Generative Modeling [65.64763873078114]
In generative modeling, numerous successful approaches leverage a low-dimensional latent space, e.g., Stable Diffusion.
In this study, we aim to shed light on this under-explored topic by rethinking the latent space from the perspective of model complexity.
arXiv Detail & Related papers (2023-07-17T07:12:29Z)
- Shared Loss between Generators of GANs [7.33811357166334]
Generative adversarial networks are generative models that are capable of replicating the implicit probability distribution of the input data with high accuracy.
Traditionally, GANs consist of a Generator and a Discriminator that interact with each other to produce highly realistic artificial data.
We show that sharing a loss between generators causes a dramatic reduction in training time for GANs without affecting their performance.
arXiv Detail & Related papers (2022-11-14T09:47:42Z)
- Towards a scalable discrete quantum generative adversarial neural network [8.134122612459633]
We introduce a fully quantum generative adversarial network intended for use with binary data.
In particular, we incorporate noise reuploading in the generator and auxiliary qubits in the discriminator to enhance expressivity.
We empirically demonstrate the expressive power of our model on both synthetic data and low energy states of an Ising model.
arXiv Detail & Related papers (2022-09-28T10:42:38Z)
- uGLAD: Sparse graph recovery by optimizing deep unrolled networks [11.48281545083889]
We present a novel technique to perform sparse graph recovery by optimizing deep unrolled networks.
Our model, uGLAD, builds upon and extends the state-of-the-art model GLAD to the unsupervised setting.
We evaluate model results on synthetic Gaussian data, non-Gaussian data generated from Gene Regulatory Networks, and present a case study in anaerobic digestion.
arXiv Detail & Related papers (2022-05-23T20:20:27Z)
- Self-Conditioned Generative Adversarial Networks for Image Editing [61.50205580051405]
Generative Adversarial Networks (GANs) are susceptible to bias, learned either from unbalanced data or through mode collapse.
We argue that this bias is responsible not only for fairness concerns, but also plays a key role in the collapse of latent-traversal editing methods when deviating from the distribution's core.
arXiv Detail & Related papers (2022-02-08T18:08:24Z)
- Riemannian Score-Based Generative Modeling [56.20669989459281]
Score-based generative models (SGMs) demonstrate remarkable empirical performance.
Current SGMs make the underlying assumption that the data is supported on a Euclidean manifold with flat geometry.
This prevents the use of these models for applications in robotics, geoscience, or protein modeling.
arXiv Detail & Related papers (2022-02-06T11:57:39Z)
- Toward Spatially Unbiased Generative Models [19.269719158344508]
Recent image generation models show remarkable generation performance.
However, they mirror the strong location preference present in their training datasets, which we call spatial bias.
We argue that the generators rely on their implicit positional encoding to render spatial content.
arXiv Detail & Related papers (2021-08-03T04:13:03Z)
- End-to-end Sinkhorn Autoencoder with Noise Generator [10.008055997630304]
We propose a novel end-to-end Sinkhorn autoencoder with a noise generator for efficient data collection simulation.
Our method outperforms competing approaches on a challenging dataset of simulation data from the Zero Degree Calorimeters of the ALICE experiment at the LHC.
arXiv Detail & Related papers (2020-06-11T18:04:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.