Toward a Visual Concept Vocabulary for GAN Latent Space
- URL: http://arxiv.org/abs/2110.04292v1
- Date: Fri, 8 Oct 2021 17:58:19 GMT
- Title: Toward a Visual Concept Vocabulary for GAN Latent Space
- Authors: Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob
Andreas, Antonio Torralba
- Abstract summary: This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space.
Our approach is built from three components: automatic identification of perceptually salient directions based on their layer selectivity; human annotation of these directions with free-form, compositional natural language descriptions; and decomposition of these annotations into a visual concept vocabulary of distilled directions labeled with single words.
Experiments show that concepts learned with our approach are reliable and composable, generalizing across classes, contexts, and observers.
- Score: 74.12447538049537
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A large body of recent work has identified transformations in the latent
spaces of generative adversarial networks (GANs) that consistently and
interpretably transform generated images. But existing techniques for
identifying these transformations rely on either a fixed vocabulary of
pre-specified visual concepts, or on unsupervised disentanglement techniques
whose alignment with human judgments about perceptual salience is unknown. This
paper introduces a new method for building open-ended vocabularies of primitive
visual concepts represented in a GAN's latent space. Our approach is built from
three components: (1) automatic identification of perceptually salient
directions based on their layer selectivity; (2) human annotation of these
directions with free-form, compositional natural language descriptions; and (3)
decomposition of these annotations into a visual concept vocabulary, consisting
of distilled directions labeled with single words. Experiments show that
concepts learned with our approach are reliable and composable -- generalizing
across classes, contexts, and observers, and enabling fine-grained manipulation
of image style and content.
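The manipulation the abstract describes — steering a latent code along a direction labeled with a single word, and composing several such directions to edit style and content together — can be sketched as simple vector arithmetic in latent space. This is an illustrative sketch only, not the authors' implementation: the latent dimensionality, the vocabulary words, and the direction vectors below are hypothetical placeholders (a real vocabulary would come from the paper's annotation and distillation pipeline, and the edited code would be passed to a GAN generator).

```python
import numpy as np

LATENT_DIM = 512  # hypothetical latent dimensionality
rng = np.random.default_rng(0)

def unit(v):
    """Normalize a direction vector to unit length."""
    return v / np.linalg.norm(v)

# Hypothetical distilled vocabulary: each single-word concept maps to a
# unit direction in latent space (random stand-ins for illustration).
vocab = {word: unit(rng.standard_normal(LATENT_DIM))
         for word in ["rustic", "bright"]}

def edit_latent(z, word, strength):
    """Move a latent code along a named concept direction."""
    return z + strength * vocab[word]

def compose(z, edits):
    """Apply several (word, strength) edits; directions add linearly."""
    for word, strength in edits:
        z = edit_latent(z, word, strength)
    return z

z = rng.standard_normal(LATENT_DIM)          # a sampled latent code
z_rustic = edit_latent(z, "rustic", 2.0)     # single-concept edit
z_both = compose(z, [("rustic", 2.0), ("bright", -1.0)])  # composed edit
```

Because edits are additive, composing two concepts is order-independent, which mirrors the composability the experiments report.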