Multilinear Latent Conditioning for Generating Unseen Attribute
Combinations
- URL: http://arxiv.org/abs/2009.04075v1
- Date: Wed, 9 Sep 2020 02:23:13 GMT
- Title: Multilinear Latent Conditioning for Generating Unseen Attribute
Combinations
- Authors: Markos Georgopoulos, Grigorios Chrysos, Maja Pantic, Yannis Panagakis
- Abstract summary: We show that variational autoencoders (VAE) and generative adversarial networks (GAN) lack the generalization ability that occurs naturally in human perception.
We introduce a multilinear latent conditioning framework that captures the multiplicative interactions between attributes.
Altogether, we design a novel conditioning framework that can be used with any architecture to synthesize unseen attribute combinations.
- Score: 61.686839319971334
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep generative models rely on their inductive bias to facilitate
generalization, especially for problems with high dimensional data, like
images. However, empirical studies have shown that variational autoencoders
(VAE) and generative adversarial networks (GAN) lack the generalization ability
that occurs naturally in human perception. For example, humans can visualize a
woman smiling after only seeing a smiling man. On the contrary, the standard
conditional VAE (cVAE) is unable to generate unseen attribute combinations. To
this end, we extend cVAE by introducing a multilinear latent conditioning
framework that captures the multiplicative interactions between the attributes.
We implement two variants of our model and demonstrate their efficacy on MNIST,
Fashion-MNIST and CelebA. Altogether, we design a novel conditioning framework
that can be used with any architecture to synthesize unseen attribute
combinations.
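The multiplicative interaction at the heart of the framework can be sketched as follows. This is a minimal illustration, not the paper's implementation: the tensor `W`, the attribute embeddings, and all dimensions are hypothetical placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: two attribute embeddings in R^4 conditioning
# a latent code in R^8 through a third-order tensor W.
d_a, d_b, d_z = 4, 4, 8
W = rng.standard_normal((d_z, d_a, d_b)) * 0.1  # multilinear conditioning tensor

def condition(z, attr_a, attr_b):
    """Add a multiplicative attribute interaction to the latent code z.

    The bilinear form W x attr_a x attr_b captures pairwise interactions,
    so an unseen combination (e.g. woman + smiling) is composed from
    factors each seen in training, just never together.
    """
    interaction = np.einsum('kij,i,j->k', W, attr_a, attr_b)
    return z + interaction

z = rng.standard_normal(d_z)
woman = rng.standard_normal(d_a)    # placeholder attribute embeddings
smiling = rng.standard_normal(d_b)
z_cond = condition(z, woman, smiling)
print(z_cond.shape)  # (8,)
```

Because the interaction is linear in each attribute embedding separately, scaling one attribute scales only its contribution, which is what makes the combination compositional.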
Related papers
- Distributionally Generative Augmentation for Fair Facial Attribute Classification [69.97710556164698]
Facial Attribute Classification (FAC) holds substantial promise in widespread applications.
FAC models trained by traditional methodologies can be unfair by exhibiting accuracy inconsistencies across varied data subpopulations.
This work proposes a novel, generation-based two-stage framework to train a fair FAC model on biased data without additional annotation.
arXiv Detail & Related papers (2024-03-11T10:50:53Z)
- Curve Your Enthusiasm: Concurvity Regularization in Differentiable Generalized Additive Models [5.519653885553456]
Generalized Additive Models (GAMs) have recently experienced a resurgence in popularity due to their interpretability.
We show how concurvity can severely impair the interpretability of GAMs.
We propose a remedy: a conceptually simple, yet effective regularizer which penalizes pairwise correlations of the non-linearly transformed feature variables.
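One way such a regularizer could look, assuming the penalty is the mean absolute pairwise correlation of per-feature contributions (the function name and exact form are illustrative, not the paper's definition):

```python
import numpy as np

def concurvity_penalty(contributions):
    """Mean absolute pairwise correlation of per-feature contributions.

    contributions: (n_samples, n_features) array, where column j holds
    the non-linearly transformed feature f_j(x_j) of one GAM component.
    """
    C = np.corrcoef(contributions, rowvar=False)   # (p, p) correlation matrix
    p = C.shape[0]
    off_diag = C[~np.eye(p, dtype=bool)]           # drop the diagonal of ones
    return np.abs(off_diag).mean()

rng = np.random.default_rng(1)
X = rng.standard_normal((100, 3))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(100)  # two nearly collinear components
print(concurvity_penalty(X))
```

A near-collinear pair of components drives the penalty up, which is exactly the situation that makes the individual shape functions uninterpretable.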
arXiv Detail & Related papers (2023-05-19T06:55:49Z)
- StyleGenes: Discrete and Efficient Latent Distributions for GANs [149.0290830305808]
We propose a discrete latent distribution for Generative Adversarial Networks (GANs)
Instead of drawing latent vectors from a continuous prior, we sample from a finite set of learnable latents.
We take inspiration from the encoding of information in biological organisms.
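The discrete sampling idea can be illustrated roughly as follows; the bank shape, locus/variant counts, and concatenation scheme are assumptions for illustration, not the paper's exact parameterization:

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical bank of learnable latents: at each of n_loci "gene" positions,
# pick one of n_variants learned vectors, then concatenate the picks.
n_loci, n_variants, d = 8, 16, 4
bank = rng.standard_normal((n_loci, n_variants, d))  # stands in for learned params

def sample_latent():
    """Draw a latent by choosing one variant per locus from the finite bank."""
    picks = rng.integers(0, n_variants, size=n_loci)
    return np.concatenate([bank[i, picks[i]] for i in range(n_loci)])

z = sample_latent()
print(z.shape)  # (32,)
```

Unlike a continuous Gaussian prior, every sampled latent is built from a finite, enumerable set of learned building blocks.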
arXiv Detail & Related papers (2023-04-30T23:28:46Z)
- Mutual Exclusivity Training and Primitive Augmentation to Induce Compositionality [84.94877848357896]
Recent datasets expose the lack of systematic generalization ability in standard sequence-to-sequence models.
We analyze this behavior of seq2seq models and identify two contributing factors: a lack of mutual exclusivity bias and the tendency to memorize whole examples.
We show substantial empirical improvements using standard sequence-to-sequence models on two widely-used compositionality datasets.
arXiv Detail & Related papers (2022-11-28T17:36:41Z)
- Diversity vs. Recognizability: Human-like generalization in one-shot generative models [5.964436882344729]
We propose a new framework to evaluate one-shot generative models along two axes: sample recognizability vs. diversity.
We first show that GAN-like and VAE-like models fall on opposite ends of the diversity-recognizability space.
In contrast, disentanglement transports the model along a parabolic curve that could be used to maximize recognizability.
arXiv Detail & Related papers (2022-05-20T13:17:08Z)
- Defending Variational Autoencoders from Adversarial Attacks with MCMC [74.36233246536459]
Variational autoencoders (VAEs) are deep generative models used in various domains.
As previous work has shown, one can easily fool VAEs into producing unexpected latent representations and reconstructions for visually slightly modified inputs.
Here, we examine several objective functions for constructing adversarial attacks, suggest metrics to assess model robustness, and propose a solution.
arXiv Detail & Related papers (2022-03-18T13:25:18Z)
- Controllable and Compositional Generation with Latent-Space Energy-Based Models [60.87740144816278]
Controllable generation is one of the key requirements for successful adoption of deep generative models in real-world applications.
In this work, we use energy-based models (EBMs) to handle compositional generation over a set of attributes.
By composing energy functions with logical operators, this work is the first to achieve such compositionality in generating photo-realistic images of resolution 1024x1024.
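Composing energies with logical operators can be sketched on a toy 1-D example, assuming the common product-of-experts rules (AND as an energy sum, OR as a soft-min via logsumexp, NOT as negation); the attribute energies here are made up for illustration:

```python
import numpy as np

# Toy energies over a scalar x: low energy = attribute present.
E_smiling = lambda x: (x - 1.0) ** 2
E_blond   = lambda x: (x + 1.0) ** 2

# Common EBM composition rules:
E_and = lambda x: E_smiling(x) + E_blond(x)                       # AND: sum of energies
E_or  = lambda x: -np.logaddexp(-E_smiling(x), -E_blond(x))       # OR: soft-min of energies
E_not = lambda x: -E_smiling(x)                                   # NOT: one common relaxation

xs = np.linspace(-3.0, 3.0, 601)
x_and = xs[np.argmin(E_and(xs))]
print(x_and)  # AND compromises between both attributes, minimum at x = 0
```

The AND composition has its minimum midway between the two attribute modes, while the OR composition keeps low-energy regions near each individual mode.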
arXiv Detail & Related papers (2021-10-21T03:31:45Z)
- PluGeN: Multi-Label Conditional Generation From Pre-Trained Models [1.4777718769290524]
PluGeN is a simple yet effective generative technique that can be used as a plugin to pre-trained generative models.
We show that PluGeN preserves the quality of backbone models while adding the ability to control the values of labeled attributes.
arXiv Detail & Related papers (2021-09-18T21:02:24Z)
- Directly Training Joint Energy-Based Models for Conditional Synthesis and Calibrated Prediction of Multi-Attribute Data [9.389098132764431]
We show that architectures for multi-attribute prediction can be reinterpreted as energy-based models.
We propose a simple extension which expands the capabilities of EBMs to generate accurate conditional samples.
We find our models are capable of both accurate, calibrated predictions and high-quality conditional synthesis of novel attribute combinations.
arXiv Detail & Related papers (2021-07-19T22:19:41Z)
- AVAE: Adversarial Variational Auto Encoder [2.1485350418225244]
We introduce a new framework that combines VAE and GAN in a novel and complementary way to produce an auto-encoding model.
We evaluate our approach both qualitatively and quantitatively on five image datasets.
arXiv Detail & Related papers (2020-12-21T18:29:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.