StyleGenes: Discrete and Efficient Latent Distributions for GANs
- URL: http://arxiv.org/abs/2305.00599v1
- Date: Sun, 30 Apr 2023 23:28:46 GMT
- Title: StyleGenes: Discrete and Efficient Latent Distributions for GANs
- Authors: Evangelos Ntavelis, Mohamad Shahbazi, Iason Kastanis, Radu Timofte,
Martin Danelljan, Luc Van Gool
- Abstract summary: We propose a discrete latent distribution for Generative Adversarial Networks (GANs)
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms.
- Score: 149.0290830305808
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We propose a discrete latent distribution for Generative Adversarial Networks
(GANs). Instead of drawing latent vectors from a continuous prior, we sample
from a finite set of learnable latents. However, a direct parametrization of
such a distribution leads to an intractable linear increase in memory in order
to ensure sufficient sample diversity. We address this key issue by taking
inspiration from the encoding of information in biological organisms. Instead
of learning a separate latent vector for each sample, we split the latent space
into a set of genes. For each gene, we train a small bank of gene variants.
Thus, by independently sampling a variant for each gene and combining them into
the final latent vector, our approach can represent a vast number of unique
latent samples from a compact set of learnable parameters. Interestingly, our
gene-inspired latent encoding allows for new and intuitive approaches to
latent-space exploration, enabling conditional sampling from our
unconditionally trained model. Moreover, our approach preserves
state-of-the-art photo-realism while achieving better disentanglement than the
widely-used StyleMapping network.
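The gene-based sampling described in the abstract can be illustrated with a small sketch: a bank of variants per gene, one variant drawn independently for each gene, concatenated into the final latent. The sizes below (`n_genes`, `n_variants`, `gene_dim`) are hypothetical, not the paper's actual configuration.

```python
import numpy as np

# Hypothetical sizes for illustration only.
n_genes, n_variants, gene_dim = 16, 32, 8  # latent dim = 16 * 8 = 128

rng = np.random.default_rng(0)
# One small learnable bank of variants per gene.
gene_banks = rng.standard_normal((n_genes, n_variants, gene_dim))

def sample_latent(rng):
    """Pick one variant per gene independently and concatenate them."""
    idx = rng.integers(0, n_variants, size=n_genes)   # one choice per gene
    parts = gene_banks[np.arange(n_genes), idx]       # (n_genes, gene_dim)
    return parts.reshape(-1)                          # final latent vector

z = sample_latent(rng)
```

Note the compactness: the bank stores only `n_genes * n_variants` small vectors (512 here), yet independent per-gene sampling can produce `n_variants ** n_genes` distinct latent combinations.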
Related papers
- Generator Born from Classifier [66.56001246096002]
We aim to reconstruct an image generator, without relying on any data samples.
We propose a novel learning paradigm, in which the generator is trained to ensure that the convergence conditions of the network parameters are satisfied.
arXiv Detail & Related papers (2023-12-05T03:41:17Z)
- Diverse Human Motion Prediction via Gumbel-Softmax Sampling from an Auxiliary Space [34.83587750498361]
Diverse human motion prediction aims at predicting multiple possible future pose sequences from a sequence of observed poses.
Previous approaches usually employ deep generative networks to model the conditional distribution of data, and then randomly sample outcomes from the distribution.
We propose a novel sampling strategy for sampling very diverse results from an imbalanced multimodal distribution.
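The Gumbel-Softmax trick named in this entry's title is a standard way to draw approximately discrete samples differentiably. A generic sketch (not the paper's specific auxiliary-space strategy): add Gumbel noise to the logits, then apply a temperature-scaled softmax.

```python
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Differentiable approximation of categorical sampling:
    perturb logits with Gumbel noise, then softmax at temperature tau."""
    if rng is None:
        rng = np.random.default_rng()
    u = rng.uniform(size=logits.shape)
    gumbel = -np.log(-np.log(u + 1e-20) + 1e-20)   # Gumbel(0, 1) noise
    y = (logits + gumbel) / tau
    y = y - y.max()                                 # numerical stability
    e = np.exp(y)
    return e / e.sum()

# Low temperature pushes the output toward a one-hot vector.
probs = gumbel_softmax(np.array([2.0, 0.5, 0.1]), tau=0.5)
```

As `tau` decreases the samples approach hard one-hot choices; as it increases they smooth toward the uniform distribution.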
arXiv Detail & Related papers (2022-07-15T09:03:57Z)
- Multi-segment preserving sampling for deep manifold sampler [40.88321000839884]
Multi-segment preserving sampling enables the direct inclusion of domain-specific knowledge.
We train two models: a deep manifold sampler and a GPT-2 language model on nearly six million heavy chain sequences annotated with the IGHV1-18 gene.
We obtain log probability scores from a GPT-2 model for each sampled CDR3 and demonstrate that multi-segment preserving sampling generates reasonable designs.
arXiv Detail & Related papers (2022-05-09T13:19:41Z)
- Multi-level Latent Space Structuring for Generative Control [53.240701050423155]
We propose to leverage the StyleGAN generative architecture to devise a new truncation technique.
We do so by learning to re-generate W-space, the extended intermediate latent space of StyleGAN, using a learnable mixture of Gaussians.
The resulting truncation scheme is more faithful to the original untruncated samples and allows a better trade-off between quality and diversity.
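A minimal sketch of the idea of sampling a latent from a learnable mixture of Gaussians with a truncation knob, assuming hypothetical component counts and dimensions (this is not the paper's trained mixture):

```python
import numpy as np

rng = np.random.default_rng(0)
n_components, w_dim = 4, 512                        # hypothetical sizes
weights = np.full(n_components, 1.0 / n_components) # mixture weights
means = rng.standard_normal((n_components, w_dim))  # component means
scales = np.full(n_components, 0.5)                 # isotropic per-component std

def sample_w(truncation=1.0, rng=rng):
    """Draw w from the mixture; shrinking `truncation` pulls the sample
    toward its component mean, trading diversity for quality."""
    k = rng.choice(n_components, p=weights)
    eps = rng.standard_normal(w_dim)
    return means[k] + truncation * scales[k] * eps

w = sample_w(truncation=0.7)
```

Because truncation is applied per component rather than toward a single global mean, multimodal structure in the latent space is preserved.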
arXiv Detail & Related papers (2022-02-11T21:26:17Z)
- GANs with Variational Entropy Regularizers: Applications in Mitigating the Mode-Collapse Issue [95.23775347605923]
Building on the success of deep learning, Generative Adversarial Networks (GANs) provide a modern approach to learn a probability distribution from observed samples.
GANs often suffer from the mode collapse issue where the generator fails to capture all existing modes of the input distribution.
We take an information-theoretic approach and maximize a variational lower bound on the entropy of the generated samples to increase their diversity.
arXiv Detail & Related papers (2020-09-24T19:34:37Z)
- Generative Model without Prior Distribution Matching [26.91643368299913]
Variational Autoencoder (VAE) and its variations are classic generative models by learning a low-dimensional latent representation to satisfy some prior distribution.
We propose to let the prior match the embedding distribution rather than imposing the latent variables to fit the prior.
arXiv Detail & Related papers (2020-09-23T09:33:24Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- Relaxed-Responsibility Hierarchical Discrete VAEs [3.976291254896486]
We introduce Relaxed-Responsibility Vector-Quantisation, a novel way to parameterise discrete latent variables.
We achieve state-of-the-art bits-per-dim results for various standard datasets.
arXiv Detail & Related papers (2020-07-14T19:10:05Z)
- Closed-Form Factorization of Latent Semantics in GANs [65.42778970898534]
A rich set of interpretable dimensions has been shown to emerge in the latent space of the Generative Adversarial Networks (GANs) trained for synthesizing images.
In this work, we examine the internal representation learned by GANs to reveal the underlying variation factors in an unsupervised manner.
We propose a closed-form factorization algorithm for latent semantic discovery by directly decomposing the pre-trained weights.
arXiv Detail & Related papers (2020-07-13T18:05:36Z)
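The closed-form factorization above amounts to decomposing the first learned transformation applied to the latent code: directions that change the output most are its top right-singular vectors. A sketch with a random stand-in weight matrix (in practice `A` would be loaded from the pre-trained generator):

```python
import numpy as np

rng = np.random.default_rng(0)
# Stand-in for the generator's first affine weight (out_dim x z_dim);
# shapes here are hypothetical.
A = rng.standard_normal((1024, 128))

# The latent directions amplified most by A are the top right-singular
# vectors of A (equivalently, top eigenvectors of A^T A) -- computed in
# closed form, with no data, sampling, or training.
_, s, vt = np.linalg.svd(A, full_matrices=False)
directions = vt[:5]                  # top-5 candidate semantic directions

# Editing: move a latent code along a discovered direction.
z = rng.standard_normal(128)
z_edit = z + 3.0 * directions[0]
```

Each row of `directions` is a unit vector in latent space; scaling the step (here `3.0`) controls the strength of the edit.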
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.