Toward a Visual Concept Vocabulary for GAN Latent Space
- URL: http://arxiv.org/abs/2110.04292v1
- Date: Fri, 8 Oct 2021 17:58:19 GMT
- Title: Toward a Visual Concept Vocabulary for GAN Latent Space
- Authors: Sarah Schwettmann, Evan Hernandez, David Bau, Samuel Klein, Jacob
Andreas, Antonio Torralba
- Abstract summary: This paper introduces a new method for building open-ended vocabularies of primitive visual concepts represented in a GAN's latent space.
Our approach is built from three components: automatic identification of perceptually salient directions based on their layer selectivity; human annotation of these directions with free-form, compositional natural language descriptions; and decomposition of these annotations into a vocabulary of distilled directions labeled with single words.
Experiments show that concepts learned with our approach are reliable and composable -- generalizing across classes, contexts, and observers.
- Score: 74.12447538049537
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: A large body of recent work has identified transformations in the latent
spaces of generative adversarial networks (GANs) that consistently and
interpretably transform generated images. But existing techniques for
identifying these transformations rely on either a fixed vocabulary of
pre-specified visual concepts, or on unsupervised disentanglement techniques
whose alignment with human judgments about perceptual salience is unknown. This
paper introduces a new method for building open-ended vocabularies of primitive
visual concepts represented in a GAN's latent space. Our approach is built from
three components: (1) automatic identification of perceptually salient
directions based on their layer selectivity; (2) human annotation of these
directions with free-form, compositional natural language descriptions; and (3)
decomposition of these annotations into a visual concept vocabulary, consisting
of distilled directions labeled with single words. Experiments show that
concepts learned with our approach are reliable and composable -- generalizing
across classes, contexts, and observers, and enabling fine-grained manipulation
of image style and content.
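Components (1) and (3) lend themselves to a short illustration. The sketch below is not the authors' released code: the `generator.per_layer_activations` helper, the entropy-based selectivity score, and the word-averaging decomposition are all assumptions about how such a pipeline could plausibly be implemented, not the paper's exact procedure.

```python
# Minimal sketch (hypothetical, not the authors' code) of two pieces of the pipeline:
# (1) ranking candidate latent directions by a layer-selectivity score, and
# (3) distilling word-level concept directions from free-form annotations.

import numpy as np

def layer_selectivity(generator, z, direction, eps=1.0):
    """Score how concentrated a direction's effect is across generator layers.

    `generator.per_layer_activations(z)` is an assumed helper returning one
    feature array per layer. Selectivity is high when perturbing z along
    `direction` changes a few layers a lot rather than all layers a little.
    """
    acts = generator.per_layer_activations(z)
    acts_shifted = generator.per_layer_activations(z + eps * direction)
    deltas = np.array([np.linalg.norm(a - b) for a, b in zip(acts, acts_shifted)])
    p = deltas / (deltas.sum() + 1e-8)        # distribution of change over layers
    entropy = -(p * np.log(p + 1e-8)).sum()   # low entropy => layer-selective
    return -entropy                           # higher score = more selective

def select_salient_directions(generator, z_samples, candidates, k=100):
    """Keep the k candidate directions with the highest mean selectivity."""
    scores = [np.mean([layer_selectivity(generator, z, d) for z in z_samples])
              for d in candidates]
    order = np.argsort(scores)[::-1]
    return [candidates[i] for i in order[:k]]

def distill_concept_vocabulary(annotated, min_count=3):
    """Turn (direction, free-form caption) pairs into single-word concept vectors.

    One simple decomposition: average all annotated directions whose caption
    mentions a given word, keeping only words that occur often enough.
    """
    by_word = {}
    for direction, caption in annotated:
        for word in set(caption.lower().split()):
            by_word.setdefault(word, []).append(direction)
    vocab = {}
    for word, dirs in by_word.items():
        if len(dirs) >= min_count:
            mean_dir = np.mean(dirs, axis=0)
            vocab[word] = mean_dir / (np.linalg.norm(mean_dir) + 1e-8)
    return vocab
```

The resulting per-word vectors could then be added to a latent code (scaled by a strength factor) to apply or compose concepts, which is the kind of fine-grained manipulation the abstract describes.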
Related papers
- CusConcept: Customized Visual Concept Decomposition with Diffusion Models [13.95568624067449]
We propose a two-stage framework, CusConcept, to extract customized visual concept embedding vectors.
In the first stage, CusConcept employs a vocabularies-guided concept decomposition mechanism.
In the second stage, joint concept refinement is performed to enhance the fidelity and quality of generated images.
arXiv Detail & Related papers (2024-10-01T04:41:44Z)
- Non-confusing Generation of Customized Concepts in Diffusion Models [135.4385383284657]
We tackle the common challenge of inter-concept visual confusion in compositional concept generation using text-guided diffusion models (TGDMs).
Existing customized generation methods only focus on fine-tuning the second stage while overlooking the first one.
We propose a simple yet effective solution called CLIF: contrastive image-language fine-tuning.
arXiv Detail & Related papers (2024-05-11T05:01:53Z)
- Learning Pseudo-Labeler beyond Noun Concepts for Open-Vocabulary Object Detection [25.719940401040205]
We propose a simple yet effective method to learn region-text alignment for arbitrary concepts.
Specifically, the proposed method aims to learn an arbitrary image-to-text mapping for pseudo-labeling of arbitrary concepts, named Pseudo-Labeling for Arbitrary Concepts (PLAC).
The proposed method shows competitive performance on the standard OVOD benchmark for noun concepts and a large improvement on referring expression comprehension benchmark for arbitrary concepts.
arXiv Detail & Related papers (2023-12-04T18:29:03Z)
- Lego: Learning to Disentangle and Invert Personalized Concepts Beyond Object Appearance in Text-to-Image Diffusion Models [60.80960965051388]
Adjectives and verbs are entangled with their subject nouns.
Lego disentangles concepts from their associated subjects using a simple yet effective Subject Separation step.
Lego-generated concepts were preferred over 70% of the time when compared to the baseline.
arXiv Detail & Related papers (2023-11-23T07:33:38Z)
- Text-to-Image Generation for Abstract Concepts [76.32278151607763]
We propose a Text-to-Image generation framework for Abstract Concepts (TIAC).
The abstract concept is clarified into a clear intent with a detailed definition to avoid ambiguity.
The concept-dependent form is retrieved from an LLM-extracted form pattern set.
arXiv Detail & Related papers (2023-09-26T02:22:39Z)
- Concept Decomposition for Visual Exploration and Inspiration [53.06983340652571]
We propose a method to decompose a visual concept into different visual aspects encoded in a hierarchical tree structure.
We utilize large vision-language models and their rich latent space for concept decomposition and generation.
arXiv Detail & Related papers (2023-05-29T16:56:56Z)
- Visual Superordinate Abstraction for Robust Concept Learning [80.15940996821541]
Concept learning constructs visual representations that are connected to linguistic semantics.
We ascribe the bottleneck to a failure of exploring the intrinsic semantic hierarchy of visual concepts.
We propose a visual superordinate abstraction framework for explicitly modeling semantic-aware visual subspaces.
arXiv Detail & Related papers (2022-05-28T14:27:38Z)
- Building a visual semantics aware object hierarchy [0.0]
We propose a novel unsupervised method to build a visual-semantics-aware object hierarchy.
Our intuition in this paper comes from real-world knowledge representation where concepts are hierarchically organized.
The evaluation consists of two parts: first, we apply the constructed hierarchy to the object recognition task; then we compare our visual hierarchy with existing lexical hierarchies to show the validity of our method.
arXiv Detail & Related papers (2022-02-26T00:10:21Z)
- Interactive Disentanglement: Learning Concepts by Interacting with their Prototype Representations [15.284688801788912]
We show the advantages of prototype representations for understanding and revising the latent space of neural concept learners.
For this purpose, we introduce interactive Concept Swapping Networks (iCSNs).
iCSNs learn to bind conceptual information to specific prototype slots by swapping the latent representations of paired images (a generic sketch of this swapping step appears after the list).
arXiv Detail & Related papers (2021-12-04T09:25:40Z)
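For the latent-swapping idea mentioned in the last entry, the following is a generic sketch of a swap-based reconstruction objective, not the authors' iCSN implementation: the toy `SlotEncoder`/`SlotDecoder` modules, the dimensions, and the loss are all assumptions used only to illustrate how swapping a shared slot between paired inputs can bind a concept to that slot.

```python
# Generic slot-swapping sketch (hypothetical stand-in, not the iCSN code).

import torch
import torch.nn as nn

N_SLOTS, SLOT_DIM, IMG_DIM = 4, 16, 3 * 32 * 32  # assumed toy sizes

class SlotEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(IMG_DIM, N_SLOTS * SLOT_DIM)

    def forward(self, x):                      # x: (batch, IMG_DIM)
        return self.net(x).view(-1, N_SLOTS, SLOT_DIM)

class SlotDecoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(N_SLOTS * SLOT_DIM, IMG_DIM)

    def forward(self, slots):                  # slots: (batch, N_SLOTS, SLOT_DIM)
        return self.net(slots.flatten(1))

def swap_reconstruction_loss(enc, dec, x1, x2, shared_slot):
    """x1 and x2 are a pair sharing the concept assigned to `shared_slot`.

    Swapping that slot between the pair while still requiring faithful
    reconstruction pushes the shared concept into that slot.
    """
    s1, s2 = enc(x1), enc(x2)
    s1_swapped, s2_swapped = s1.clone(), s2.clone()
    s1_swapped[:, shared_slot] = s2[:, shared_slot]
    s2_swapped[:, shared_slot] = s1[:, shared_slot]
    recon = nn.functional.mse_loss
    return recon(dec(s1_swapped), x1) + recon(dec(s2_swapped), x2)

# usage:
# loss = swap_reconstruction_loss(SlotEncoder(), SlotDecoder(),
#                                 torch.rand(8, IMG_DIM), torch.rand(8, IMG_DIM),
#                                 shared_slot=0)
```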