Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
- URL: http://arxiv.org/abs/2302.07866v1
- Date: Wed, 15 Feb 2023 18:59:04 GMT
- Title: Do Deep Neural Networks Capture Compositionality in Arithmetic Reasoning?
- Authors: Keito Kudo, Yoichi Aoki, Tatsuki Kuribayashi, Ana Brassard, Masashi Yoshikawa, Keisuke Sakaguchi, Kentaro Inui
- Abstract summary: We introduce a skill tree on compositionality in arithmetic symbolic reasoning that defines the hierarchical levels of complexity along with three compositionality dimensions: systematicity, productivity, and substitutivity.
Our experiments revealed that among the three types of composition, the models struggled most with systematicity, performing poorly even with relatively simple compositions.
- Score: 31.692400722222278
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Compositionality is a pivotal property of symbolic reasoning. However, how
well recent neural models capture compositionality remains underexplored in
symbolic reasoning tasks. This study empirically addresses this question by
systematically examining recently published pre-trained seq2seq models with a
carefully controlled dataset of multi-hop arithmetic symbolic reasoning. We
introduce a skill tree on compositionality in arithmetic symbolic reasoning
that defines the hierarchical levels of complexity along with three
compositionality dimensions: systematicity, productivity, and substitutivity.
Our experiments revealed that among the three types of composition, the models
struggled most with systematicity, performing poorly even with relatively
simple compositions. That difficulty was not resolved even after training the
models with intermediate reasoning steps.
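To make the task format concrete, here is a minimal sketch of the kind of multi-hop arithmetic symbolic reasoning instance the abstract describes, assuming a chained variable-assignment format; the generator, variable names, and depth parameter below are illustrative assumptions, not the paper's released dataset:

```python
import random

def make_instance(depth: int, seed: int = 0):
    """Sketch of a multi-hop arithmetic reasoning example: each hop
    defines a new variable from earlier ones, and the final variable
    is queried. `depth` controls the number of composition steps."""
    rng = random.Random(seed)
    names = [chr(ord("A") + i) for i in range(depth + 2)]
    values = {names[0]: rng.randint(0, 9), names[1]: rng.randint(0, 9)}
    steps = [f"{n}={v}" for n, v in values.items()]
    for i in range(2, depth + 2):
        lhs, rhs = rng.sample(sorted(values), 2)
        op = rng.choice(["+", "-"])
        result = values[lhs] + values[rhs] if op == "+" else values[lhs] - values[rhs]
        values[names[i]] = result
        steps.append(f"{names[i]}={lhs}{op}{rhs}")
    question = ", ".join(steps) + f", {names[-1]}?"
    return question, values[names[-1]]

# Prints a (question, answer) pair for a 2-hop instance,
# e.g. a chain shaped like "A=3, B=6, C=B-A, D=C+A, D?".
print(make_instance(depth=2, seed=1))
```

Under this framing, increasing `depth` probes productivity (longer chains than seen in training), holding out particular combinations of operations probes systematicity, and replacing sub-expressions with equivalent ones probes substitutivity.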
Related papers
- A Complexity-Based Theory of Compositionality [53.025566128892066]
In AI, compositional representations can enable a powerful form of out-of-distribution generalization.
Here, we propose a formal definition of compositionality that accounts for and extends our intuitions about compositionality.
The definition is conceptually simple, quantitative, grounded in algorithmic information theory, and applicable to any representation.
arXiv Detail & Related papers (2024-10-18T18:37:27Z)
- LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning [73.98142349171552]
LOGICSEG is a holistic visual semantic parser that integrates neural inductive learning and logic reasoning with both rich data and symbolic knowledge.
After fuzzy logic-based continuous relaxation, logical formulae are grounded onto data and neural computational graphs, hence enabling logic-induced network training.
These designs together make LOGICSEG a general and compact neural-logic machine that is readily integrated into existing segmentation models.
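As a rough illustration of the fuzzy relaxation idea (not LOGICSEG's exact formulation), a logical rule such as "every cat pixel is also an animal pixel" can be grounded on predicted probabilities and turned into a differentiable penalty; the Reichenbach implication and the toy rule below are assumptions for this sketch:

```python
import torch

def implies(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    # Reichenbach fuzzy relaxation of (a => b): 1 - a + a*b.
    # Evaluates to 1 where the rule holds and is differentiable,
    # so violations can be penalized during training.
    return 1.0 - a + a * b

# Hypothetical per-pixel class probabilities from a segmentation head.
logits_cat = torch.randn(4, 4, requires_grad=True)
logits_animal = torch.randn(4, 4, requires_grad=True)
p_cat, p_animal = torch.sigmoid(logits_cat), torch.sigmoid(logits_animal)

# Logic-induced loss: ground "cat(x) => animal(x)" at every pixel x.
logic_loss = (1.0 - implies(p_cat, p_animal)).mean()
logic_loss.backward()  # gradients flow back into the network
```

In a full model, a term like this would be added to the usual segmentation loss, so the network is trained by both data and symbolic knowledge.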
arXiv Detail & Related papers (2023-09-24T05:43:19Z)
- A Recursive Bateson-Inspired Model for the Generation of Semantic Formal Concepts from Spatial Sensory Data [77.34726150561087]
This paper presents a new symbolic-only method for the generation of hierarchical concept structures from complex sensory data.
The approach is based on Bateson's notion of difference as the key to the genesis of an idea or a concept.
The model is able to produce fairly rich yet human-readable conceptual representations without training.
arXiv Detail & Related papers (2023-07-16T15:59:13Z)
- A Hybrid System for Systematic Generalization in Simple Arithmetic Problems [70.91780996370326]
We propose a hybrid system capable of solving arithmetic problems that require compositional and systematic reasoning over sequences of symbols.
We show that the proposed system can accurately solve nested arithmetical expressions even when trained only on a subset including the simplest cases.
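For intuition, a minimal sketch of the train/test asymmetry described above, assuming a split by nesting depth (the generator and the depth cutoff are illustrative, not the paper's dataset):

```python
import random

def expr(depth: int, rng: random.Random) -> str:
    # Random arithmetic expression with nesting depth `depth`.
    if depth == 0:
        return str(rng.randint(0, 9))
    op = rng.choice(["+", "-", "*"])
    return f"({expr(depth - 1, rng)}{op}{expr(depth - 1, rng)})"

rng = random.Random(0)
# Train only on the simplest cases (depth <= 1) ...
train = [(e, eval(e)) for e in (expr(rng.choice([0, 1]), rng) for _ in range(1000))]
# ... and test systematic generalization on deeper nesting,
# e.g. a flat pair like ('7', 7) vs. a nested one like ('((2*5)-(8+1))', 1).
test = [(e, eval(e)) for e in (expr(3, rng) for _ in range(100))]
```

A learner that only memorizes surface patterns from the shallow split will typically fail on the nested split; that gap is what such an evaluation exposes.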
arXiv Detail & Related papers (2023-06-29T18:35:41Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
The explanation form is based on multi-hop chains of reasoning and includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- The paradox of the compositionality of natural language: a neural machine translation case study [15.37696298313134]
We re-instantiate three compositionality tests from the literature and reformulate them for neural machine translation (NMT).
The results highlight two main issues: the inconsistent behaviour of NMT models and their inability to (correctly) modulate between local and global processing.
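As one concrete shape such a test can take (a simplified sketch, not the paper's exact protocol), substitutivity can be checked by swapping a word for a synonym and asking whether the translation changes only locally; `translate` below is a hypothetical stand-in for an NMT model:

```python
def substitutivity_consistent(translate, sentence: str, word: str, synonym: str) -> bool:
    # Translate the original and the synonym-substituted sentence.
    base = translate(sentence).split()
    swapped = translate(sentence.replace(word, synonym)).split()
    # Local processing would change at most the translated word itself;
    # globally inconsistent behaviour rewrites unrelated parts of the output.
    diffs = sum(a != b for a, b in zip(base, swapped)) + abs(len(base) - len(swapped))
    return diffs <= 1
```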
arXiv Detail & Related papers (2021-08-12T17:57:23Z)
- Compositional Processing Emerges in Neural Networks Solving Math Problems [100.80518350845668]
Recent progress in artificial neural networks has shown that when large models are trained on enough linguistic data, grammatical structure emerges in their representations.
We extend this work to the domain of mathematical reasoning, where it is possible to formulate precise hypotheses about how meanings should be composed.
Our work shows that neural networks are not only able to infer something about the structured relationships implicit in their training data, but can also deploy this knowledge to guide the composition of individual meanings into composite wholes.
arXiv Detail & Related papers (2021-05-19T07:24:42Z)
- A Study of Compositional Generalization in Neural Models [22.66002315559978]
We introduce ConceptWorld, which enables the generation of images from compositional and relational concepts.
We perform experiments to test the ability of standard neural networks to generalize on relations with compositional arguments.
For simple problems, all models generalize well to close concepts but struggle with longer compositional chains.
arXiv Detail & Related papers (2020-06-16T18:29:58Z)