Three tiers of computation in transformers and in brain architectures
- URL: http://arxiv.org/abs/2503.04848v2
- Date: Wed, 12 Mar 2025 22:08:01 GMT
- Title: Three tiers of computation in transformers and in brain architectures
- Authors: E. Graham, R. Granger
- Abstract summary: Humans effortlessly process language yet require critical training to perform arithmetic or logical reasoning tasks. We show that it is the transition between tiers, rather than scaled size itself, that determines a system's capabilities.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Human language and logic abilities are computationally quantified within the well-studied grammar-automata hierarchy. We identify three hierarchical tiers and two corresponding transitions and show their correspondence to specific abilities in transformer-based language models (LMs). These emergent abilities have often been described in terms of scaling; we show that it is the transition between tiers, rather than scaled size itself, that determines a system's capabilities. Specifically, humans effortlessly process language yet require critical training to perform arithmetic or logical reasoning tasks; and LMs possess language abilities absent from predecessor systems, yet still struggle with logical processing. We submit a novel benchmark of computational power, provide empirical evaluations of humans and fifteen LMs, and, most significantly, provide a theoretically grounded framework to promote careful thinking about these crucial topics. The resulting principled analyses provide explanatory accounts of the abilities and shortfalls of LMs, and suggest actionable insights into the expansion of their logic abilities.
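As a concrete illustration of the grammar-automata hierarchy the abstract invokes, the sketch below contrasts three classic languages of strictly increasing computational power: a regular language (recognizable by a finite automaton with no memory), a context-free language (requiring a pushdown stack), and a context-sensitive language (beyond any pushdown automaton). These are standard textbook examples; the paper's exact tier boundaries are not given in this summary.

```python
# Textbook illustration of three tiers of the grammar-automata hierarchy.
# These three languages are standard examples of increasing power; the
# paper's own tier boundaries are not specified in this summary.

def regular_ab(s: str) -> bool:
    """Tier 1: a*b* -- a finite automaton suffices (no memory)."""
    state = "A"
    for ch in s:
        if state == "A" and ch == "a":
            continue
        elif ch == "b":
            state = "B"
        else:
            return False
    return True

def context_free_anbn(s: str) -> bool:
    """Tier 2: a^n b^n -- needs a pushdown stack (context-free)."""
    stack, seen_b = [], False
    for ch in s:
        if ch == "a" and not seen_b:
            stack.append(ch)
        elif ch == "b":
            seen_b = True
            if not stack:
                return False
            stack.pop()
        else:
            return False
    return not stack

def context_sensitive_anbncn(s: str) -> bool:
    """Tier 3: a^n b^n c^n -- direct membership check; no single-stack
    machine can recognize this language (context-sensitive)."""
    n = len(s) // 3
    return len(s) == 3 * n and s == "a" * n + "b" * n + "c" * n

if __name__ == "__main__":
    for recognizer, accepted, rejected in [
        (regular_ab, "aabb", "aba"),
        (context_free_anbn, "aabb", "aab"),
        (context_sensitive_anbncn, "aabbcc", "aabbc"),
    ]:
        assert recognizer(accepted) and not recognizer(rejected)
    print("tier examples behave as expected")
```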
Related papers
- General Reasoning Requires Learning to Reason from the Get-go [19.90997698310839]
Large Language Models (LLMs) have demonstrated impressive real-world utility. But their ability to reason adaptively and robustly remains fragile. We propose disentangling knowledge and reasoning through three key directions.
arXiv Detail & Related papers (2025-02-26T18:51:12Z)
- Proof of Thought: Neurosymbolic Program Synthesis allows Robust and Interpretable Reasoning [1.3003982724617653]
Large Language Models (LLMs) have revolutionized natural language processing, yet they struggle with inconsistent reasoning.
This research introduces Proof of Thought, a framework that enhances the reliability and transparency of LLM outputs.
Key contributions include a robust type system with sort management for enhanced logical integrity, and an explicit representation of rules that clearly distinguishes factual from inferential knowledge.
arXiv Detail & Related papers (2024-09-25T18:35:45Z)
- Explaining Text Similarity in Transformer Models [52.571158418102584]
Recent advances in explainable AI have made it possible to mitigate the black-box limitations of Transformer similarity models by leveraging improved explanations.
We use BiLRP, an extension developed for computing second-order explanations in bilinear similarity models, to investigate which feature interactions drive similarity in NLP models.
Our findings contribute to a deeper understanding of different semantic similarity tasks and models, highlighting how novel explainable AI methods enable in-depth analyses and corpus-level insights.
arXiv Detail & Related papers (2024-05-10T17:11:31Z)
- Assessing Logical Reasoning Capabilities of Encoder-Only Transformer Models [0.13194391758295113]
We investigate the extent to which encoder-only transformer language models (LMs) can reason according to logical rules.
We show for several encoder-only LMs that they can be trained, to a reasonable degree, to determine logical validity on various datasets.
By cross-probing fine-tuned models on these datasets, we show that LMs have difficulty in transferring their putative logical reasoning ability.
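A hedged sketch of the kind of setup this abstract describes: fine-tuning an encoder-only LM as a binary validity classifier over (premises, conclusion) pairs. The checkpoint, input encoding, and label scheme below are illustrative assumptions, not the paper's exact recipe.

```python
# Hedged sketch: train an encoder-only LM to classify logical validity.
# Checkpoint, pair encoding, and labels are illustrative assumptions.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tok = AutoTokenizer.from_pretrained("roberta-base")
model = AutoModelForSequenceClassification.from_pretrained(
    "roberta-base", num_labels=2
)

premises = "All men are mortal. Socrates is a man."
conclusion = "Socrates is mortal."

# Encode as a sentence pair, as in standard NLI-style fine-tuning.
batch = tok(premises, conclusion, return_tensors="pt")
labels = torch.tensor([1])  # assumed scheme: 1 = valid, 0 = invalid

out = model(**batch, labels=labels)
out.loss.backward()  # backward pass of one illustrative training step
print(out.logits.softmax(-1))
```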
arXiv Detail & Related papers (2023-12-18T21:42:34Z)
- CLOMO: Counterfactual Logical Modification with Large Language Models [109.60793869938534]
We introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark.
In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship.
We propose an innovative evaluation metric, the Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs.
arXiv Detail & Related papers (2023-11-29T08:29:54Z)
- Enabling High-Level Machine Reasoning with Cognitive Neuro-Symbolic Systems [67.01132165581667]
We propose to enable high-level reasoning in AI systems by integrating cognitive architectures with external neuro-symbolic components.
We illustrate a hybrid framework centered on ACT-R, and discuss the role of generative models in recent and future applications.
arXiv Detail & Related papers (2023-11-13T21:20:17Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge-distillation fine-tuning technique to assess model performance.
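Chain-of-thought knowledge distillation generally means fine-tuning a student model on rationales produced by a larger teacher. A minimal data-construction sketch follows; `teacher_generate` is a hypothetical stand-in for a teacher-LM call, and the actual LogiGLUE training recipe is not reproduced here.

```python
# Minimal sketch of chain-of-thought knowledge-distillation data prep.
# `teacher_generate` is a hypothetical stand-in for a large teacher LM.

def teacher_generate(question: str) -> str:
    # In practice: prompt a strong LM for a step-by-step rationale + answer.
    return "All A are B, and all B are C, so all A are C. Answer: yes."

def build_distillation_example(question: str) -> dict:
    # The student is trained to emit the rationale before the answer,
    # so the reasoning steps themselves become the supervision signal.
    return {"input": question, "target": teacher_generate(question)}

examples = [
    build_distillation_example("If all A are B and all B are C, are all A C?")
]
print(examples[0])
```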
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- In-Context Analogical Reasoning with Pre-Trained Language Models [10.344428417489237]
We explore the use of intuitive language-based abstractions to support analogy in AI systems.
Specifically, we apply large pre-trained language models (PLMs) to visual Raven's Progressive Matrices (RPM).
We find that PLMs exhibit a striking capacity for zero-shot relational reasoning, exceeding human performance and nearing supervised vision-based methods.
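A sketch of the language-based abstraction step this abstract describes: an RPM-style grid, already reduced to symbolic attributes, is rendered as a text prompt for a PLM. The attribute names and prompt wording are assumptions for illustration, not the paper's exact encoding.

```python
# Hedged sketch: render an RPM-style grid, pre-abstracted to symbolic
# attributes, as a text prompt for a language model.

def cell_to_text(cell: dict) -> str:
    return f"{cell['count']} {cell['shape']}(s)"

grid = [
    [{"shape": "circle", "count": 1}, {"shape": "circle", "count": 2}, {"shape": "circle", "count": 3}],
    [{"shape": "square", "count": 1}, {"shape": "square", "count": 2}, {"shape": "square", "count": 3}],
    [{"shape": "star", "count": 1}, {"shape": "star", "count": 2}, None],  # missing cell
]

rows = [", ".join(cell_to_text(c) if c else "?" for c in row) for row in grid]
prompt = "Complete the pattern:\n" + "\n".join(rows) + "\nAnswer:"
print(prompt)
# A PLM queried zero-shot with such a prompt should continue "3 star(s)".
```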
arXiv Detail & Related papers (2023-05-28T04:22:26Z)
- Exploring Self-supervised Logic-enhanced Training for Large Language Models [59.227222647741094]
In this paper, we make the first attempt to investigate the feasibility of incorporating logical knowledge through self-supervised post-training.
We devise an auto-regressive objective variant of MERIt and integrate it with two LLM series, FLAN-T5 and LLaMA, with parameter sizes ranging from 3 billion to 13 billion.
The results on two challenging logical reasoning benchmarks demonstrate the effectiveness of LogicLLM.
arXiv Detail & Related papers (2023-05-23T06:13:10Z)
- Dissociating language and thought in large language models [52.39241645471213]
Large Language Models (LLMs) have come closest among all models to date to mastering human language.
We ground the distinction between formal and functional linguistic competence in human neuroscience, which has shown that these abilities rely on different neural mechanisms.
Although LLMs are surprisingly good at formal competence, their performance on functional competence tasks remains spotty.
arXiv Detail & Related papers (2023-01-16T22:41:19Z)
- Strong-AI Autoepistemic Robots Build on Intensional First Order Logic [0.0]
We consider the intensional First Order Logic (IFOL) as a symbolic architecture of modern robots.
We present a particular example of robots' autoepistemic deduction capabilities via the introduction of a special temporal $Know$ predicate and deductive axioms.
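The paper's axioms are not spelled out in this summary; as a hedged illustration of the general shape of autoepistemic deduction with a time-indexed Know predicate, standard distribution and positive-introspection axioms might read:

```latex
% Illustrative only -- not the paper's actual axioms: a time-indexed
% distribution axiom (K) and a positive-introspection axiom (4).
\mathrm{Know}(t, \phi \rightarrow \psi) \wedge \mathrm{Know}(t, \phi)
    \rightarrow \mathrm{Know}(t, \psi)
\qquad
\mathrm{Know}(t, \phi) \rightarrow \mathrm{Know}(t, \mathrm{Know}(t, \phi))
```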
arXiv Detail & Related papers (2022-12-14T16:23:56Z)
- Learning Neuro-symbolic Programs for Language Guided Robot Manipulation [10.287265801542999]
Given a natural language instruction and an input and an output scene, our goal is to train a neuro-symbolic model that can output a manipulation program.
Prior approaches for this task possess one of the following limitations: (i) they rely on hand-coded symbols for concepts, limiting generalization beyond those seen during training, or (ii) they require dense sub-goal supervision.
Our approach is neuro-symbolic and can handle linguistic as well as perceptual variations, is end-to-end differentiable requiring no intermediate supervision, and makes use of symbolic reasoning constructs which operate on a latent neural object-centric representation.
arXiv Detail & Related papers (2022-11-12T12:31:17Z)
- DALL-E 2 Fails to Reliably Capture Common Syntactic Processes [0.0]
We analyze the ability of DALL-E 2 to capture 8 grammatical phenomena pertaining to compositionality.
We show that DALL-E 2 is unable to reliably infer meanings that are consistent with the syntax.
arXiv Detail & Related papers (2022-10-23T23:56:54Z)
- LogiGAN: Learning Logical Reasoning via Adversarial Pre-training [58.11043285534766]
We present LogiGAN, an unsupervised adversarial pre-training framework for improving logical reasoning abilities of language models.
Inspired by the facilitation effect of reflective thinking in human learning, we simulate the learning-thinking process with an adversarial Generator-Verifier architecture.
Both base- and large-size language models pre-trained with LogiGAN show clear performance improvements on 12 datasets.
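A toy control-flow sketch of the Generator-Verifier loop the abstract describes; the scoring and update rules below are placeholders (the real system trains both components with language-modeling objectives):

```python
# Toy Generator-Verifier loop in the spirit of LogiGAN's description.
# Scoring and updates are placeholders, not LogiGAN's actual objectives.
import random

random.seed(0)
PREMISE = "if it rains the ground gets wet; it rains"

def generator(premise: str) -> str:
    # Placeholder: propose a candidate conclusion.
    return random.choice(["so it stays dry", "so the ground gets wet"])

def verifier(premise: str, conclusion: str) -> float:
    # Placeholder: score logical plausibility in [0, 1].
    return 1.0 if "wet" in conclusion else 0.0

accepted = []
for _ in range(10):
    candidate = generator(PREMISE)
    if verifier(PREMISE, candidate) > 0.5:
        # In LogiGAN, both generator and verifier would be updated here,
        # each using the other's outputs as training signal.
        accepted.append(candidate)

print(f"{len(accepted)} verifier-approved candidates out of 10")
```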
arXiv Detail & Related papers (2022-05-18T08:46:49Z)
- Pre-Trained Language Models for Interactive Decision-Making [72.77825666035203]
We describe a framework for imitation learning in which goals and observations are represented as a sequence of embeddings.
We demonstrate that this framework enables effective generalization across different environments.
For test tasks involving novel goals or novel scenes, initializing policies with language models improves task completion rates by 43.6%.
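A shape-level sketch of representing goals and observations as a single embedding sequence, as the abstract describes; the encoders and dimensions are illustrative assumptions, and a real policy would be initialized from pre-trained LM weights:

```python
# Shape-level sketch: goal tokens and observed entities encoded into one
# embedding sequence for the policy. Encoders/dimensions are assumptions.
import numpy as np

rng = np.random.default_rng(0)
d_model = 64

def embed_tokens(words):  # stand-in for an LM's token embedder
    return rng.normal(size=(len(words), d_model))

def embed_observation(features):  # stand-in for an observation encoder
    return features @ rng.normal(size=(features.shape[1], d_model))

goal = embed_tokens("put the mug on the shelf".split())
obs = embed_observation(rng.normal(size=(3, 16)))  # 3 observed entities

# One sequence the (LM-initialized) policy consumes end to end.
sequence = np.concatenate([goal, obs], axis=0)
print(sequence.shape)  # (num_goal_tokens + num_entities, d_model)
```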
arXiv Detail & Related papers (2022-02-03T18:55:52Z)
- A Minimalist Dataset for Systematic Generalization of Perception, Syntax, and Semantics [131.93113552146195]
We present a new dataset, Handwritten arithmetic with INTegers (HINT), to examine machines' capability of learning generalizable concepts.
In HINT, machines are tasked with learning how concepts are perceived from raw signals such as images.
We undertake extensive experiments with various sequence-to-sequence models, including RNNs, Transformers, and GPT-3.
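HINT's actual interface and data splits are not reproduced here; at the semantic level, though, the task requires the full perception-to-semantics pipeline to map a handwritten expression to its integer value, i.e., to match a ground-truth evaluator like the sketch below:

```python
# Semantic-level sketch of the HINT target: map a (perceived) symbol
# sequence to the value of the arithmetic expression it denotes.

def evaluate(tokens: list[str]) -> int:
    # Ground-truth semantics, using Python's own parser for brevity.
    assert all(t.isdigit() or t in "+-*()" for t in tokens)
    return eval("".join(tokens))  # safe here: input is whitelisted above

print(evaluate(list("2*(3+4)")))  # -> 14
```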
arXiv Detail & Related papers (2021-03-02T01:32:54Z)
- Toward the quantification of cognition [0.0]
Most human cognitive abilities, from perception to action to memory, are shared with other species.
We seek to characterize those capabilities that are ubiquitously present among humans and absent from other species.
arXiv Detail & Related papers (2020-08-12T21:45:29Z)
- Machine Common Sense [77.34726150561087]
Machine common sense remains a broad, potentially unbounded problem in artificial intelligence (AI). This article deals with aspects of modeling commonsense reasoning, focusing on the domain of interpersonal interactions.
arXiv Detail & Related papers (2020-06-15T13:59:47Z)