Vector-based Representation is the Key: A Study on Disentanglement and
Compositional Generalization
- URL: http://arxiv.org/abs/2305.18063v1
- Date: Mon, 29 May 2023 13:05:15 GMT
- Title: Vector-based Representation is the Key: A Study on Disentanglement and
Compositional Generalization
- Authors: Tao Yang, Yuwang Wang, Cuiling Lan, Yan Lu, Nanning Zheng
- Abstract summary: We show that it is possible to achieve both good concept recognition and novel concept composition.
We propose a method that reforms scalar-based disentanglement models into vector-based ones to increase both capabilities.
- Score: 77.57425909520167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recognizing elementary underlying concepts from observations
(disentanglement) and generating novel combinations of these concepts
(compositional generalization) are fundamental abilities for humans to support
rapid knowledge learning and generalization to new tasks, with which deep
learning models struggle. Towards human-like intelligence, various works on
disentangled representation learning have been proposed, and recently some
studies on compositional generalization have been presented. However, few works
study the relationship between disentanglement and compositional
generalization, and the observed results are inconsistent. In this paper, we
study several typical disentangled representation learning works in terms of
both disentanglement and compositional generalization abilities, and we provide
an important insight: vector-based representation (using a vector instead of a
scalar to represent a concept) is the key to enabling both good disentanglement
and strong compositional generalization. This insight also resonates with
neuroscience findings that the brain encodes information in neuron population
activity rather than in individual neurons. Motivated by this observation, we
further propose a method that reforms scalar-based disentanglement models
($\beta$-TCVAE and FactorVAE) into vector-based ones to increase both
capabilities.
We investigate the impact of the dimensions of vector-based representation and
one important question: whether better disentanglement indicates higher
compositional generalization. In summary, our study demonstrates that it is
possible to achieve both good concept recognition and novel concept
composition, contributing an important step towards human-like intelligence.
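As a rough illustration of the paper's central idea, the sketch below (PyTorch is assumed; the class name `VectorLatentHead` and its hyperparameters are hypothetical, not the authors' code) replaces the usual one-scalar-per-concept latent of a VAE with a small vector per concept. A total-correlation penalty in the style of $\beta$-TCVAE or FactorVAE would then act across concept groups rather than across individual scalar dimensions.

```python
# Minimal sketch of a vector-based latent head for a VAE.
# All names (VectorLatentHead, n_concepts, dim_per_concept) are illustrative
# assumptions, not the authors' code: each of the n_concepts is represented
# by a dim_per_concept-dimensional vector rather than a single scalar.
import torch
import torch.nn as nn

class VectorLatentHead(nn.Module):
    def __init__(self, feat_dim: int, n_concepts: int = 10, dim_per_concept: int = 4):
        super().__init__()
        self.n_concepts = n_concepts
        self.dim_per_concept = dim_per_concept
        latent_dim = n_concepts * dim_per_concept
        # Diagonal-Gaussian parameters over the full latent vector.
        self.to_mu = nn.Linear(feat_dim, latent_dim)
        self.to_logvar = nn.Linear(feat_dim, latent_dim)

    def forward(self, h: torch.Tensor):
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: sample z = mu + sigma * eps.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        # Group latent dimensions so each concept occupies one vector slot;
        # a TC-style penalty (as in beta-TCVAE / FactorVAE) would then be
        # applied across concept groups instead of across scalar dimensions.
        z_grouped = z.view(-1, self.n_concepts, self.dim_per_concept)
        return z_grouped, mu, logvar
```

Setting `dim_per_concept = 1` recovers the scalar-based setup, so the same reconstruction and KL objectives carry over unchanged.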
Related papers
- CoLiDR: Concept Learning using Aggregated Disentangled Representations [29.932706137805713]
Interpretability of Deep Neural Networks using concept-based models offers a promising way to explain model behavior through human-understandable concepts.
A parallel line of research focuses on disentangling the data distribution into its underlying generative factors, in turn explaining the data generation process.
While both directions have received extensive attention, little work has been done on explaining concepts in terms of generative factors, which would unify mathematically disentangled representations with human-understandable concepts.
arXiv Detail & Related papers (2024-07-27T16:55:14Z)
- Improving Compositional Generalization Using Iterated Learning and Simplicial Embeddings [19.667133565610087]
Compositional generalization is easy for humans but hard for deep neural networks.
We propose to improve this ability by using iterated learning on models with simplicial embeddings.
We show that this combination of changes improves compositional generalization over other approaches.
arXiv Detail & Related papers (2023-10-28T18:30:30Z)
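For readers unfamiliar with the simplicial embeddings named in the entry above, here is a minimal, hypothetical sketch (after Lavoie et al.): the representation is split into groups and a softmax is applied within each group, yielding soft, token-like codes. All names and defaults are illustrative assumptions, and the iterated-learning part (periodic agent resetting and imitation) is not shown.

```python
# Hedged sketch of a simplicial-embedding bottleneck; not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimplicialEmbedding(nn.Module):
    """Projects features into groups and applies a softmax per group,
    so each group lies on a probability simplex (a soft, discrete-like code)."""
    def __init__(self, feat_dim: int, n_simplices: int = 16,
                 n_vertices: int = 8, temperature: float = 1.0):
        super().__init__()
        self.n_simplices, self.n_vertices = n_simplices, n_vertices
        self.temperature = temperature
        self.proj = nn.Linear(feat_dim, n_simplices * n_vertices)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        logits = self.proj(h).view(-1, self.n_simplices, self.n_vertices)
        # Softmax within each group encourages sparse, token-like codes,
        # which is hypothesized to aid compositional generalization.
        return F.softmax(logits / self.temperature, dim=-1).flatten(1)
```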
- Compositional Generalization in Unsupervised Compositional Representation Learning: A Study on Disentanglement and Emergent Language [48.37815764394315]
We study three unsupervised representation learning algorithms on two datasets that allow directly testing compositional generalization.
We find that directly using the bottleneck representation with simple models and few labels may lead to worse generalization than using representations from layers before or after the learned representation itself.
Surprisingly, we find that increasing pressure to produce a disentangled representation produces representations with worse generalization, while representations from EL models show strong compositional generalization.
arXiv Detail & Related papers (2022-10-02T10:35:53Z)
- Learning Algebraic Representation for Systematic Generalization in Abstract Reasoning [109.21780441933164]
We propose a hybrid approach to improve systematic generalization in reasoning.
We showcase a prototype with algebraic representation for the abstract spatial-temporal task of Raven's Progressive Matrices (RPM).
We show that the learned algebraic representation can be decoded by isomorphism to generate an answer.
arXiv Detail & Related papers (2021-11-25T09:56:30Z)
- A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3.
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
- Concepts, Properties and an Approach for Compositional Generalization [2.0559497209595823]
This report connects a series of our works on compositional generalization and summarizes an approach.
The approach uses architecture design and regularization to regulate information of representations.
We hope this work helps clarify the fundamentals of compositional generalization and advance artificial intelligence.
arXiv Detail & Related papers (2021-02-08T14:22:30Z)
- Concept Learners for Few-Shot Learning [76.08585517480807]
We propose COMET, a meta-learning method that improves generalization ability by learning to learn along human-interpretable concept dimensions.
We evaluate our model on few-shot tasks from diverse domains, including fine-grained image classification, document categorization and cell type annotation.
arXiv Detail & Related papers (2020-07-14T22:04:17Z)
- Compositional Generalization by Learning Analytical Expressions [87.15737632096378]
A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that our model achieves strong compositional generalization.
arXiv Detail & Related papers (2020-06-18T15:50:57Z)