Logical forms complement probability in understanding language model (and human) performance
- URL: http://arxiv.org/abs/2502.09589v2
- Date: Mon, 17 Feb 2025 18:56:30 GMT
- Title: Logical forms complement probability in understanding language model (and human) performance
- Authors: Yixuan Wang, Freda Shi
- Abstract summary: This work conducts a systematic investigation of large language models' ability to perform logical reasoning in natural language. We introduce a controlled dataset of hypothetical and disjunctive syllogisms in propositional and modal logic. We show similarities and discrepancies between the logical reasoning performances of humans and LLMs by collecting and comparing behavioral data from both.
- Score: 14.694876851134273
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: With the increasing interest in using large language models (LLMs) for planning in natural language, understanding their behaviors becomes an important research question. This work conducts a systematic investigation of LLMs' ability to perform logical reasoning in natural language. We introduce a controlled dataset of hypothetical and disjunctive syllogisms in propositional and modal logic and use it as the testbed for understanding LLM performance. Our results lead to novel insights into predicting LLM behaviors: in addition to the probability of the input (Gonen et al., 2023; McCoy et al., 2024), logical forms should be considered important factors. We also show similarities and discrepancies between the logical reasoning performance of humans and LLMs by collecting and comparing behavioral data from both.
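For illustration, the two syllogism families named in the abstract follow the standard propositional schemas below; the modal variant is only an assumed example of how such items could be stated with a necessity operator, since the abstract does not spell out the paper's exact templates.

```latex
% Hypothetical syllogism: chaining two conditionals.
%   Premises: P -> Q and Q -> R;  conclusion: P -> R.
\[
\frac{P \rightarrow Q \qquad Q \rightarrow R}{P \rightarrow R}
\]

% Disjunctive syllogism: eliminating a disjunct.
%   Premises: P or Q, and not P;  conclusion: Q.
\[
\frac{P \lor Q \qquad \neg P}{Q}
\]

% Illustrative modal variant (an assumption, not necessarily the paper's
% wording): necessitated premises yield a necessitated conclusion in any
% normal modal logic (distribution axiom K).
\[
\frac{\Box(P \rightarrow Q) \qquad \Box P}{\Box Q}
\]
```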
Related papers
- Emergence of psychopathological computations in large language models [22.78614613457714]
We propose a computational-theoretical framework to provide an account of psychopathology applicable to large language models.
Our work alludes to the possibility of AI systems with psychopathological behaviors in the near future.
arXiv Detail & Related papers (2025-04-10T15:36:30Z)
- Enhancing Large Language Models through Neuro-Symbolic Integration and Ontological Reasoning [0.0]
Large Language Models (LLMs) demonstrate impressive capabilities in natural language processing but suffer from inaccuracies and logical inconsistencies known as hallucinations.
We propose a neuro-symbolic approach integrating symbolic ontological reasoning and machine learning methods to enhance the consistency and reliability of LLM outputs.
arXiv Detail & Related papers (2025-04-10T10:39:24Z)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Alignment Between the Decision-Making Logic of LLMs and Human Cognition: A Case Study on Legal LLMs [43.67312098562139]
This paper presents a method to evaluate the alignment between the decision-making logic of Large Language Models and human cognition.
We quantify the interactions encoded by the LLM as primitive decision-making logic.
Experiments show that even when the language generation results appear correct, a significant portion of the internal inference logic contains notable issues.
arXiv Detail & Related papers (2024-10-06T08:33:39Z)
- Does Reasoning Emerge? Examining the Probabilities of Causation in Large Language Models [6.922021128239465]
Recent advances in AI have been driven by the capabilities of large language models (LLMs).
This paper introduces a framework that is both theoretical and practical, aimed at assessing how effectively LLMs are able to replicate real-world reasoning mechanisms.
arXiv Detail & Related papers (2024-08-15T15:19:11Z)
- Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogism [62.571419297164645]
This paper provides a systematic overview of prior works on the logical reasoning ability of large language models for analyzing categorical syllogisms.
We first investigate all the possible variations for the categorical syllogisms from a purely logical perspective.
We then examine the underlying configurations (i.e., mood and figure) tested by the existing datasets.
arXiv Detail & Related papers (2024-06-26T21:17:20Z)
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has been receiving significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Do Large Language Models Mirror Cognitive Language Processing? [43.68923267228057]
Large Language Models (LLMs) have demonstrated remarkable abilities in text comprehension and logical reasoning. Brain cognitive processing signals are typically utilized to study human language processing.
arXiv Detail & Related papers (2024-02-28T03:38:20Z)
- CLOMO: Counterfactual Logical Modification with Large Language Models [109.60793869938534]
We introduce a novel task, Counterfactual Logical Modification (CLOMO), and a high-quality human-annotated benchmark.
In this task, LLMs must adeptly alter a given argumentative text to uphold a predetermined logical relationship.
We propose an innovative evaluation metric, the Self-Evaluation Score (SES), to directly evaluate the natural language output of LLMs.
arXiv Detail & Related papers (2023-11-29T08:29:54Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and "chain-of-thought" knowledge distillation fine-tuning to assess model performance.
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language processing and machine learning communities in recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
This list is automatically generated from the titles and abstracts of the papers on this site.