Relational reasoning and generalization using non-symbolic neural
networks
- URL: http://arxiv.org/abs/2006.07968v3
- Date: Sun, 1 May 2022 19:39:54 GMT
- Title: Relational reasoning and generalization using non-symbolic neural
networks
- Authors: Atticus Geiger, Alexandra Carstensen, Michael C. Frank, and
Christopher Potts
- Abstract summary: Previous work suggested that neural networks were not suitable models of human relational reasoning because they could not represent mathematical identity, the most basic form of equality.
We find neural networks are able to learn (1) basic equality (mathematical identity), (2) sequential equality problems (learning ABA-patterned sequences) with only positive training instances, and (3) a complex, hierarchical equality problem with only basic equality training instances.
These results suggest that essential aspects of symbolic reasoning can emerge from data-driven, non-symbolic learning processes.
- Score: 66.07793171648161
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The notion of equality (identity) is simple and ubiquitous, making it a key
case study for broader questions about the representations supporting abstract
relational reasoning. Previous work suggested that neural networks were not
suitable models of human relational reasoning because they could not represent
mathematical identity, the most basic form of equality. We revisit this
question. In our experiments, we assess out-of-sample generalization of
equality using both arbitrary representations and representations that have
been pretrained on separate tasks to imbue them with structure. We find neural
networks are able to learn (1) basic equality (mathematical identity), (2)
sequential equality problems (learning ABA-patterned sequences) with only
positive training instances, and (3) a complex, hierarchical equality problem
with only basic equality training instances ("zero-shot" generalization). In
the two latter cases, our models perform tasks proposed in previous work to
demarcate human-unique symbolic abilities. These results suggest that essential
aspects of symbolic reasoning can emerge from data-driven, non-symbolic
learning processes.
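As a rough illustration of the basic equality setup described in the abstract, the sketch below (not the authors' code; the input dimensionality, network size, optimizer, and training regime are arbitrary assumptions for illustration) trains a small feed-forward network to judge whether two arbitrary vectors are identical and then evaluates it on vectors never seen during training:

```python
# Minimal sketch of an out-of-sample equality task, assuming PyTorch and
# arbitrarily chosen hyperparameters (not taken from the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)
DIM = 25                    # dimensionality of the arbitrary input vectors (assumed)
N_TRAIN, N_TEST = 2000, 500

def make_pairs(n):
    """First half of the pairs are identical (label 1), second half distinct (label 0)."""
    a = torch.randn(n, DIM)
    b = a.clone()
    b[n // 2:] = torch.randn(n - n // 2, DIM)   # second half: unrelated vectors
    y = torch.zeros(n)
    y[: n // 2] = 1.0
    return torch.cat([a, b], dim=1), y

model = nn.Sequential(nn.Linear(2 * DIM, 64), nn.ReLU(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

x_train, y_train = make_pairs(N_TRAIN)
for _ in range(200):
    opt.zero_grad()
    loss = loss_fn(model(x_train).squeeze(1), y_train)
    loss.backward()
    opt.step()

# Out-of-sample test: fresh random vectors the network has never seen.
x_test, y_test = make_pairs(N_TEST)
with torch.no_grad():
    acc = ((model(x_test).squeeze(1) > 0).float() == y_test).float().mean()
print(f"held-out accuracy: {acc:.2f}")
```

High accuracy on the held-out pairs would correspond to the paper's claim that equality over arbitrary representations can be learned and generalized out of sample; the sequential (ABA) and hierarchical variants studied in the paper build on this same basic setup.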
Related papers
- Take A Step Back: Rethinking the Two Stages in Visual Reasoning [57.16394309170051]
This paper revisits visual reasoning with a two-stage perspective.
It is more efficient to implement symbolization via separate encoders for different data domains while using a shared reasoner.
The proposed two-stage framework achieves impressive generalization ability on various visual reasoning tasks.
arXiv Detail & Related papers (2024-07-29T02:56:19Z)
- Gradient-based inference of abstract task representations for generalization in neural networks [5.794537047184604]
We show that gradients backpropagated through a neural network to a task representation layer are an efficient way to infer current task demands.
We demonstrate that gradient-based inference provides higher learning efficiency and generalization to novel tasks and limits.
arXiv Detail & Related papers (2024-07-24T15:28:08Z)
- Deep Regression Representation Learning with Topology [57.203857643599875]
We study how the effectiveness of a regression representation is influenced by its topology.
We introduce PH-Reg, a regularizer that matches the intrinsic dimension and topology of the feature space with the target space.
Experiments on synthetic and real-world regression tasks demonstrate the benefits of PH-Reg.
arXiv Detail & Related papers (2024-04-22T06:28:41Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- A Hybrid System for Systematic Generalization in Simple Arithmetic Problems [70.91780996370326]
We propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols.
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases.
arXiv Detail & Related papers (2023-06-29T18:35:41Z)
- Divide and Conquer: Answering Questions with Object Factorization and Compositional Reasoning [30.392986232906107]
We propose an integral framework consisting of a principled object factorization method and a novel neural module network.
Our factorization method decomposes objects based on their key characteristics, and automatically derives prototypes that represent a wide range of objects.
With these prototypes encoding important semantics, the proposed network then correlates objects by measuring their similarity on a common semantic space.
It is capable of answering questions with diverse objects regardless of their availability during training, and overcoming the issues of biased question-answer distributions.
arXiv Detail & Related papers (2023-03-18T19:37:28Z)
- Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning? [31.692400722222278]
We introduce a skill tree on compositionality in arithmetic symbolic reasoning that defines the hierarchical levels of complexity along with three compositionality dimensions: systematicity, productivity, and substitutivity.
Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even with relatively simple compositions.
arXiv Detail & Related papers (2023-02-15T18:59:04Z)
- Logic Tensor Networks [9.004005678155023]
We present Logic Tensor Networks (LTN), a neurosymbolic formalism and computational model that supports learning and reasoning.
We show that LTN provides a uniform language for the specification and the computation of several AI tasks.
arXiv Detail & Related papers (2020-12-25T22:30:18Z)
- Few-shot Visual Reasoning with Meta-analogical Contrastive Learning [141.2562447971]
We propose to solve a few-shot (or low-shot) visual reasoning problem, by resorting to analogical reasoning.
We extract structural relationships between elements in both domains, and enforce them to be as similar as possible with analogical learning.
We validate our method on the RAVEN dataset, on which it outperforms the state-of-the-art method, with larger gains when the training data is scarce.
arXiv Detail & Related papers (2020-07-23T14:00:34Z)