Compositional Generalization by Learning Analytical Expressions
- URL: http://arxiv.org/abs/2006.10627v2
- Date: Sat, 24 Oct 2020 03:47:49 GMT
- Title: Compositional Generalization by Learning Analytical Expressions
- Authors: Qian Liu, Shengnan An, Jian-Guang Lou, Bei Chen, Zeqi Lin, Yan Gao,
Bin Zhou, Nanning Zheng, Dongmei Zhang
- Abstract summary: A memory-augmented neural model is connected with analytical expressions to achieve compositional generalization.
Experiments on the well-known SCAN benchmark demonstrate that the model achieves strong compositional generalization.
- Score: 87.15737632096378
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Compositional generalization is a basic and essential intellectual capability
of human beings, allowing us to readily recombine known parts into novel wholes. However,
existing neural network-based models have proven severely deficient in this
capability. Inspired by work in cognition arguing that compositionality can be
captured by variable slots with symbolic functions, we present a fresh
perspective that connects a memory-augmented neural model with analytical
expressions to achieve compositional generalization. Our model consists of two
cooperative neural modules, Composer and Solver, which fit well with the
cognitive argument while remaining trainable end-to-end via a hierarchical
reinforcement learning algorithm. Experiments on the well-known SCAN benchmark
demonstrate that our model achieves strong compositional generalization,
solving all challenges posed by previous work with 100% accuracy.
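The abstract sketches a pipeline in which a Composer builds an analytical expression over the input command and a Solver evaluates that expression into an output action sequence; in the paper both modules are neural and trained jointly with hierarchical reinforcement learning. As a purely illustrative, hand-written sketch (the names toy_composer and toy_solver, the tiny three-rule grammar, and the rule table are assumptions of this note, not the paper's method), the following Python shows how SCAN-style commands such as "run twice and jump" can be handled by composing first and then evaluating recursively:

    # A minimal, hypothetical sketch of the Composer/Solver idea on SCAN-style
    # commands. This is NOT the paper's implementation; it only illustrates how
    # recombining known parts ("run", "twice", "and") covers unseen combinations.

    PRIMITIVES = {"jump": ["JUMP"], "walk": ["WALK"], "run": ["RUN"]}

    def toy_composer(tokens):
        """Compose a flat command into a nested analytical expression.

        Covers only the fragment: <prim> | <expr> twice | <expr> and <expr>.
        """
        if "and" in tokens:                    # lowest-precedence connective
            i = tokens.index("and")
            return ("and", toy_composer(tokens[:i]), toy_composer(tokens[i + 1:]))
        if tokens[-1] == "twice":              # unary repetition modifier
            return ("twice", toy_composer(tokens[:-1]))
        assert len(tokens) == 1 and tokens[0] in PRIMITIVES
        return ("prim", tokens[0])

    def toy_solver(expr):
        """Recursively evaluate an analytical expression to an action sequence."""
        op = expr[0]
        if op == "prim":
            return PRIMITIVES[expr[1]]
        if op == "twice":
            return toy_solver(expr[1]) * 2
        if op == "and":
            return toy_solver(expr[1]) + toy_solver(expr[2])
        raise ValueError(f"unknown operator: {op}")

    # Compositional generalization in miniature: "run twice and jump" need never
    # have appeared in training, as long as its parts have.
    print(toy_solver(toy_composer("run twice and jump".split())))
    # -> ['RUN', 'RUN', 'JUMP']

The point of the sketch is only the division of labor: once the parts are known, the unseen combination is solved for free by recursion over the expression tree, which is the behavior the paper trains its neural modules to acquire rather than hard-coding.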
Related papers
- From Frege to chatGPT: Compositionality in language, cognition, and deep neural networks [0.0]
We review recent empirical work from machine learning for a broad audience in philosophy, cognitive science, and neuroscience.
In particular, our review emphasizes two approaches to endowing neural networks with compositional generalization capabilities.
We conclude by discussing the implications that these findings may have for the study of compositionality in human cognition.
arXiv Detail & Related papers (2024-05-24T02:36:07Z) - Improving Compositional Generalization Using Iterated Learning and
Simplicial Embeddings [19.667133565610087]
Compositional generalization is easy for humans but hard for deep neural networks.
We propose to improve this ability by using iterated learning on models with simplicial embeddings.
We show that this combination of changes improves compositional generalization over other approaches.
arXiv Detail & Related papers (2023-10-28T18:30:30Z) - A Recursive Bateson-Inspired Model for the Generation of Semantic Formal
Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z) - Vector-based Representation is the Key: A Study on Disentanglement and
Compositional Generalization [77.57425909520167]
We show that it is possible to achieve both good concept recognition and novel concept composition.
We propose a method that reformulates scalar-based disentanglement approaches as vector-based ones to strengthen both capabilities.
arXiv Detail & Related papers (2023-05-29T13:05:15Z) - Neural-Symbolic Recursive Machine for Systematic Generalization [113.22455566135757]
We introduce the Neural-Symbolic Recursive Machine (NSR), whose core is a Grounded Symbol System (GSS).
NSR integrates neural perception, syntactic parsing, and semantic reasoning.
We evaluate NSR's efficacy across four challenging benchmarks designed to probe systematic generalization capabilities.
arXiv Detail & Related papers (2022-10-04T13:27:38Z) - Compositional Processing Emerges in Neural Networks Solving Math
Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z) - Towards a Predictive Processing Implementation of the Common Model of
Cognition [79.63867412771461]
We describe an implementation of the common model of cognition grounded in neural generative coding and holographic associative memory.
The proposed system creates the groundwork for developing agents that learn continually from diverse tasks as well as model human performance at larger scales.
arXiv Detail & Related papers (2021-05-15T22:55:23Z) - Learning Evolved Combinatorial Symbols with a Neuro-symbolic Generative
Model [35.341634678764066]
Humans have the ability to rapidly understand rich concepts from limited data.
We propose a neuro-symbolic generative model which combines the strengths of previous approaches to concept learning.
arXiv Detail & Related papers (2021-04-16T17:57:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.