Concept Algebra for (Score-Based) Text-Controlled Generative Models
- URL: http://arxiv.org/abs/2302.03693v6
- Date: Thu, 8 Feb 2024 02:43:08 GMT
- Title: Concept Algebra for (Score-Based) Text-Controlled Generative Models
- Authors: Zihao Wang, Lin Gui, Jeffrey Negrea, Victor Veitch
- Abstract summary: This paper concerns the structure of learned representations in text-guided generative models.
A key property of such models is that they can compose disparate concepts in a `disentangled' manner.
Here, we focus on the idea that concepts are encoded as subspaces of some representation space.
- Score: 27.725860408234478
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper concerns the structure of learned representations in text-guided generative models, focusing on score-based models. A key property of such models is that they can compose disparate concepts in a `disentangled' manner. This suggests these models have internal representations that encode concepts in a `disentangled' manner. Here, we focus on the idea that concepts are encoded as subspaces of some representation space. We formalize what this means, show there is a natural choice for the representation, and develop a simple method for identifying the part of the representation corresponding to a given concept. In particular, this allows us to manipulate the concepts expressed by the model through algebraic manipulation of the representation. We demonstrate the idea with examples using Stable Diffusion. Code at https://github.com/zihao12/concept-algebra-code
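The `algebraic manipulation' above can be made concrete as a projection on score vectors. Below is a minimal numpy sketch of that step, assuming the scores s(x | prompt) from a diffusion model at a fixed noise level have been flattened into vectors; the function names and the SVD-based subspace estimate are illustrative simplifications, not the repository's exact implementation.

```python
import numpy as np

def concept_subspace(score_pairs, dim):
    """Estimate the subspace encoding a concept from score differences.

    score_pairs: list of (s_a, s_b) score vectors for prompt pairs that
    differ only in the target concept. The differences (roughly) span the
    concept's subspace; SVD extracts an orthonormal basis for it.
    """
    diffs = np.stack([s_b - s_a for s_a, s_b in score_pairs])
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:dim].T                      # shape: (d, dim)

def transplant_concept(s_orig, s_target, basis):
    """Edit only the target concept: project the score difference onto
    the concept subspace and add that component to the original score."""
    delta = basis @ (basis.T @ (s_target - s_orig))
    return s_orig + delta
```

Sampling with the edited score then changes the targeted concept while leaving the rest of the generated distribution approximately untouched.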
Related papers
- Scaling Concept With Text-Guided Diffusion Models [53.80799139331966]
Instead of replacing a concept, can we enhance or suppress the concept itself?
We introduce ScalingConcept, a simple yet effective method to scale decomposed concepts up or down in real input without introducing new elements.
More importantly, ScalingConcept enables a variety of novel zero-shot applications across image and audio domains.
arXiv Detail & Related papers (2024-10-31T17:09:55Z)
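The ScalingConcept summary above does not spell out the decomposition, so the following is only one plausible reading, sketched in the spirit of classifier-free guidance: treat the gap between the concept-conditional and unconditional noise predictions as the decomposed concept, and rescale it. The function and its arguments are hypothetical, not ScalingConcept's actual API.

```python
import numpy as np

def scale_concept(eps_uncond, eps_concept, omega):
    """Rescale a decomposed concept in a diffusion noise prediction.

    omega > 1 enhances the concept, 0 < omega < 1 suppresses it, and
    omega = 0 removes the conditional component entirely.
    """
    return eps_uncond + omega * (eps_concept - eps_uncond)
```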
- The Geometry of Categorical and Hierarchical Concepts in Large Language Models [15.126806053878855]
We show how to extend the formalization of the linear representation hypothesis to represent features (e.g., is_animal) as vectors.
We use the formalization to prove a relationship between the hierarchical structure of concepts and the geometry of their representations.
We validate these theoretical results on the Gemma and LLaMA-3 large language models, estimating representations for 900+ hierarchically related concepts using data from WordNet.
arXiv Detail & Related papers (2024-06-03T16:34:01Z)
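As a rough illustration of the feature-as-vector idea above, the sketch below estimates a binary feature vector as a difference of class-mean embeddings and checks the near-orthogonality the paper's hierarchy result predicts. The mean-difference estimator and plain Euclidean cosine are simplifications (the paper works in a whitened, `causal' inner product), and the embedding arrays are assumed inputs.

```python
import numpy as np

def concept_vector(pos_embs, neg_embs):
    """Linear representation of a binary feature (e.g. is_animal):
    the difference of the class-mean embeddings."""
    return pos_embs.mean(axis=0) - neg_embs.mean(axis=0)

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hierarchy prediction: the parent feature vector (is_animal) should be
# near-orthogonal to the vector separating its children, e.g.
#   is_animal      = concept_vector(animal_embs, non_animal_embs)
#   bird_vs_mammal = concept_vector(bird_embs, mammal_embs)
#   cosine(is_animal, bird_vs_mammal)   # expected to be small
```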
- On the Origins of Linear Representations in Large Language Models [51.88404605700344]
We introduce a simple latent variable model to formalize the concept dynamics of next-token prediction.
Experiments show that linear representations emerge when learning from data matching the latent variable model.
We additionally confirm some predictions of the theory using the LLaMA-2 large language model.
arXiv Detail & Related papers (2024-03-06T17:17:36Z) - An Axiomatic Approach to Model-Agnostic Concept Explanations [67.84000759813435]
We propose an approach to concept explanations that satisfy three natural axioms: linearity, recursivity, and similarity.
We then establish connections with previous concept explanation methods, offering insight into their varying semantic meanings.
arXiv Detail & Related papers (2024-01-12T20:53:35Z) - Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
- Meaning Representations from Trajectories in Autoregressive Models [106.63181745054571]
We propose to extract meaning representations from autoregressive language models by considering the distribution of all possible trajectories extending an input text.
This strategy is prompt-free, does not require fine-tuning, and is applicable to any pre-trained autoregressive model.
We empirically show that the representations obtained from large models align well with human annotations, outperform other zero-shot and prompt-free methods on semantic similarity tasks, and can be used to solve more complex entailment and containment tasks that standard embeddings cannot handle.
arXiv Detail & Related papers (2023-10-23T04:35:58Z) - Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
- Simple Mechanisms for Representing, Indexing and Manipulating Concepts [46.715152257557804]
We argue that a concept can be learned by looking at its moment statistics matrix to generate a concrete representation, or signature, of that concept.
When the concepts are `intersected', signatures of the concepts can be used to find a common theme across a number of related `intersected' concepts.
arXiv Detail & Related papers (2023-10-18T17:54:29Z) - MetaSRL++: A Uniform Scheme for Modelling Deeper Semantics [0.0]
- MetaSRL++: A Uniform Scheme for Modelling Deeper Semantics [0.0]
This paper argues that, in order to arrive at a common semantic representation scheme, we also need a common modelling scheme.
It introduces MetaSRL++, a uniform, language- and modality-independent modelling scheme based on Semantic Graphs.
arXiv Detail & Related papers (2023-05-16T15:26:52Z)
- The Conceptual VAE [7.15767183672057]
We present a new model of concepts, based on the framework of variational autoencoders.
The model is inspired by, and closely related to, the Beta-VAE model of concepts.
We show how the model can be used as a concept classifier, and how it can be adapted to learn from fewer labels per instance.
arXiv Detail & Related papers (2022-03-21T17:27:28Z)
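A minimal PyTorch sketch of the concept-classifier use described above: encode an instance with a (Beta-)VAE-style Gaussian encoder and classify the concept label from the latent mean. Layer sizes and the MNIST-like input dimension are assumptions; a full model would also decode and add the beta-weighted KL term.

```python
import torch.nn as nn

class ConceptVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=10, n_concepts=4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, z_dim)        # latent mean
        self.logvar = nn.Linear(256, z_dim)    # latent log-variance
        self.clf = nn.Linear(z_dim, n_concepts)

    def forward(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        # the concept prediction reads off the latent mean
        return self.clf(mu), mu, logvar
```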
- VAE-CE: Visual Contrastive Explanation using Disentangled VAEs [3.5027291542274357]
We propose Variational Autoencoder-based Contrastive Explanation (VAE-CE).
We build the model using a disentangled VAE, extended with a new supervised method for disentangling individual dimensions.
An analysis on synthetic data and MNIST shows that the approaches to both disentanglement and explanation provide benefits over other methods.
arXiv Detail & Related papers (2021-08-20T13:15:24Z)
- Formalising Concepts as Grounded Abstractions [68.24080871981869]
This report shows how representation learning can be used to induce concepts from raw data.
The main technical goal of this report is to show how techniques from representation learning can be married with a lattice-theoretic formulation of conceptual spaces.
arXiv Detail & Related papers (2021-01-13T15:22:01Z)
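To make the lattice-theoretic side of the report above concrete, here is a toy in the style of formal concept analysis, one standard lattice formulation of concepts: a concept pairs an extent (objects) with an intent (shared attributes), and the lattice meet intersects extents. The grounding of such lattices in representations learned from raw data, which is the report's actual contribution, is not sketched here.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Concept:
    extent: frozenset    # objects the concept covers
    intent: frozenset    # attributes shared by those objects

def meet(a, b, context):
    """Greatest common sub-concept: intersect the extents, then recompute
    the intent as the attributes shared by the remaining objects.
    `context` maps each object to its attribute set."""
    extent = a.extent & b.extent
    if extent:
        intent = frozenset.intersection(*(context[o] for o in extent))
    else:                                  # empty concept: all attributes
        intent = frozenset().union(*context.values())
    return Concept(extent, intent)

# e.g. context = {"sparrow": frozenset({"animal", "flies"}),
#                 "penguin": frozenset({"animal", "swims"})}
```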
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.