Can Large Language Models Act as Symbolic Reasoners?
- URL: http://arxiv.org/abs/2410.21490v1
- Date: Mon, 28 Oct 2024 20:01:50 GMT
- Title: Can Large Language Models Act as Symbolic Reasoners?
- Authors: Rob Sullivan, Nelly Elsayed
- Abstract summary: Large language models (LLMs) have been impressive but have been critiqued as unable to reason about their own process or the conclusions they derive.
This paper surveys current research on symbolic reasoning and LLMs.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of large language models (LLMs) across a broad range of domains has been impressive, but they have been critiqued as unable to reason about their process or the conclusions they derive. Such reasoning matters both for explaining the conclusions drawn and for determining a plan or strategy for a given approach. This paper explores current research into symbolic reasoning and LLMs: whether an LLM can inherently provide some form of reasoning or whether supporting components are necessary, and, where there is evidence of a reasoning capability, whether it appears only in a specific domain or is a general capability. In addition, the paper aims to identify current research gaps and future trends in LLM explainability, presenting a review of the literature, identifying current research on the topic, and suggesting areas for future work.
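The survey's central question, whether reasoning is inherent to the LLM or requires supporting components, is often made concrete in neurosymbolic pipelines: the LLM translates natural language into a formal representation, and a separate symbolic engine performs the actual deduction. Below is a minimal, self-contained sketch of that pattern; the `parse_to_symbols` output is hard-coded in place of a real LLM call, and none of the names come from the paper.

```python
# A minimal neurosymbolic sketch (hypothetical, not from the paper): the
# LLM's job is to translate text into facts and rules; a tiny forward-
# chaining engine acts as the "supporting component" that actually reasons.

def parse_to_symbols(text: str) -> tuple[set[str], list[tuple[str, str]]]:
    """Stand-in for an LLM call that extracts facts and if-then rules."""
    facts = {"socrates_is_man"}                          # hard-coded for the demo
    rules = [("socrates_is_man", "socrates_is_mortal")]  # if A then B
    return facts, rules

def forward_chain(facts: set[str], rules: list[tuple[str, str]]) -> set[str]:
    """Deterministic symbolic deduction: apply rules until a fixpoint."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedent, consequent in rules:
            if antecedent in derived and consequent not in derived:
                derived.add(consequent)
                changed = True
    return derived

facts, rules = parse_to_symbols("Socrates is a man; all men are mortal.")
print("socrates_is_mortal" in forward_chain(facts, rules))  # True
```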
Related papers
- Towards Quantifying Commonsense Reasoning with Mechanistic Insights [7.124379028448955]
We argue that a proxy of commonsense reasoning can be maintained as a graphical structure.
We create an annotation scheme for capturing this implicit knowledge in the form of a graphical structure for 37 daily human activities.
We find that the created resource can be used to frame an enormous number of commonsense queries.
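As a rough illustration of the idea, one might encode a daily activity as a small labelled graph and derive queries from its edges; the schema below is invented for illustration and is not the paper's annotation scheme.

```python
# Hypothetical encoding of a daily activity ("making tea") as a graph:
# nodes are steps/entities, labelled edges carry the implicit commonsense
# links (preconditions, tools). Not the paper's actual annotation schema.

activity_graph = {
    "nodes": ["boil_water", "kettle", "steep_teabag", "cup", "drink_tea"],
    "edges": [
        ("boil_water", "requires_tool", "kettle"),
        ("steep_teabag", "precondition", "boil_water"),
        ("steep_teabag", "requires_tool", "cup"),
        ("drink_tea", "precondition", "steep_teabag"),
    ],
}

def frame_queries(graph: dict) -> list[str]:
    """Turn each edge into a commonsense query an LLM could be asked."""
    templates = {
        "requires_tool": "What tool does '{src}' require?",
        "precondition": "What must happen before '{src}'?",
    }
    return [templates[rel].format(src=src)
            for src, rel, _dst in graph["edges"]]

for q in frame_queries(activity_graph):
    print(q)
```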
arXiv Detail & Related papers (2025-04-14T10:21:59Z)
- How do Large Language Models Understand Relevance? A Mechanistic Interpretability Perspective [64.00022624183781]
Large language models (LLMs) can assess relevance and support information retrieval (IR) tasks.
We investigate how different LLM modules contribute to relevance judgment through the lens of mechanistic interpretability.
arXiv Detail & Related papers (2025-04-10T16:14:55Z)
- A Survey on Enhancing Causal Reasoning Ability of Large Language Models [15.602788561902038]
Large language models (LLMs) have recently shown remarkable performance in language tasks and beyond.
LLMs still face challenges in tasks that require robust causal reasoning, such as healthcare and economic analysis.
This paper systematically reviews literature on how to strengthen LLMs' causal reasoning ability.
arXiv Detail & Related papers (2025-03-12T12:20:31Z)
- LogiDynamics: Unraveling the Dynamics of Logical Inference in Large Language Model Reasoning [49.58786377307728]
This paper adopts an exploratory approach by introducing a controlled evaluation environment for analogical reasoning.
We analyze the comparative dynamics of inductive, abductive, and deductive inference pipelines.
We investigate advanced paradigms such as hypothesis selection, verification, and refinement, revealing their potential to scale up logical inference.
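For readers who want the three pipelines spelled out, the toy contrast below shows how the same if-then rule participates in deduction, abduction, and induction; it is a textbook illustration, not the paper's evaluation environment.

```python
# Toy contrast of the three inference modes over one rule schema:
#   rule:        rain -> wet_grass
#   deduction:   rule + rain              => wet_grass    (certain)
#   abduction:   rule + wet_grass         => maybe rain   (best explanation)
#   induction:   many (rain, wet_grass)   => propose rule (generalisation)

rule = ("rain", "wet_grass")

def deduce(fact: str) -> str | None:
    """From the rule and its antecedent, conclude the consequent."""
    return rule[1] if fact == rule[0] else None

def abduce(observation: str) -> str | None:
    """From the rule and its consequent, hypothesise the antecedent."""
    return rule[0] if observation == rule[1] else None

def induce(pairs: list[tuple[str, str]]) -> tuple[str, str] | None:
    """From repeated co-occurrences, propose a general rule."""
    return pairs[0] if pairs and all(p == pairs[0] for p in pairs) else None

print(deduce("rain"))                       # wet_grass
print(abduce("wet_grass"))                  # rain
print(induce([("rain", "wet_grass")] * 3))  # ('rain', 'wet_grass')
```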
arXiv Detail & Related papers (2025-02-16T15:54:53Z)
- Improving Causal Reasoning in Large Language Models: A Survey [16.55801836321059]
Causal reasoning is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world.
Large language models (LLMs) can generate rationales for their outputs, but their ability to reliably perform causal reasoning remains uncertain.
arXiv Detail & Related papers (2024-10-22T04:18:19Z)
- From Linguistic Giants to Sensory Maestros: A Survey on Cross-Modal Reasoning with Large Language Models [56.9134620424985]
Cross-modal reasoning (CMR) is increasingly recognized as a crucial capability in the progression toward more sophisticated artificial intelligence systems.
The recent trend of deploying Large Language Models (LLMs) to tackle CMR tasks has marked a new mainstream of approaches for enhancing their effectiveness.
This survey offers a nuanced exposition of current methodologies applied in CMR using LLMs, classifying these into a detailed three-tiered taxonomy.
arXiv Detail & Related papers (2024-09-19T02:51:54Z)
- Categorical Syllogisms Revisited: A Review of the Logical Reasoning Abilities of LLMs for Analyzing Categorical Syllogism [62.571419297164645]
This paper provides a systematic overview of prior works on the logical reasoning ability of large language models for analyzing categorical syllogisms.
We first investigate all possible variations of categorical syllogisms from a purely logical perspective.
We then examine the underlying configurations (i.e., mood and figure) tested by the existing datasets.
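Concretely, a categorical syllogism is fixed by its mood (the sentence types of the two premises and the conclusion, each drawn from A/E/I/O) and its figure (one of four placements of the middle term), which is why the space of variations is exactly 4^3 x 4 = 256. A short sketch making that count explicit:

```python
from itertools import product

# A categorical syllogism is determined by its mood (sentence types of the
# major premise, minor premise, and conclusion, each one of A/E/I/O) and
# its figure (1-4, the placement of the middle term in the premises).
SENTENCE_TYPES = "AEIO"   # A: all, E: no, I: some, O: some..not
FIGURES = (1, 2, 3, 4)

variations = [f"{a}{b}{c}-{fig}"
              for (a, b, c), fig in product(product(SENTENCE_TYPES, repeat=3),
                                            FIGURES)]
print(len(variations))   # 256 possible configurations
print(variations[0])     # AAA-1, the classic "Barbara"
```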
arXiv Detail & Related papers (2024-06-26T21:17:20Z)
- Beyond Accuracy: Evaluating the Reasoning Behavior of Large Language Models -- A Survey [25.732397636695882]
Large language models (LLMs) have recently shown impressive performance on tasks involving reasoning.
Despite these successes, the depth of LLMs' reasoning abilities remains uncertain.
arXiv Detail & Related papers (2024-04-02T11:46:31Z)
- Can Large Language Models Identify Authorship? [16.35265384114857]
Large Language Models (LLMs) have demonstrated an exceptional capacity for reasoning and problem-solving.
This work seeks to address three research questions: (1) Can LLMs perform zero-shot, end-to-end authorship verification effectively?
(2) Are LLMs capable of accurately attributing authorship among multiple candidate authors (e.g., 10 and 20)?
arXiv Detail & Related papers (2024-03-13T03:22:02Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Towards LogiGLUE: A Brief Survey and A Benchmark for Analyzing Logical Reasoning Capabilities of Language Models [56.34029644009297]
Large language models (LLMs) have demonstrated the ability to overcome various limitations of formal Knowledge Representation (KR) systems.
LLMs excel most in abductive reasoning, followed by deductive reasoning, while they are least effective at inductive reasoning.
We study single-task training, multi-task training, and a "chain-of-thought" knowledge-distillation fine-tuning technique to assess model performance.
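As a schematic of what chain-of-thought knowledge distillation involves, the sketch below assembles a fine-tuning pair from a teacher model's rationale; the function names and data fields are assumptions, not the LogiGLUE recipe.

```python
# Schematic construction of chain-of-thought distillation data: a teacher
# model's rationale + answer become the target the student is fine-tuned on.

def teacher_generate(question: str) -> tuple[str, str]:
    """Placeholder for a large teacher model producing (rationale, answer)."""
    return ("All squares are rectangles and all rectangles have four sides, "
            "so squares have four sides.", "yes")

def build_distillation_example(question: str) -> dict[str, str]:
    rationale, answer = teacher_generate(question)
    return {
        "input": question,
        # The student is trained to emit the reasoning before the answer.
        "target": f"{rationale} Therefore, the answer is {answer}.",
    }

example = build_distillation_example("Do squares have four sides?")
print(example["target"])
```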
arXiv Detail & Related papers (2023-10-02T01:00:50Z)
- Large Language Models Are Not Strong Abstract Reasoners [12.354660792999269]
Large Language Models have shown tremendous performance on a variety of natural language processing tasks.
It is unclear whether LLMs can achieve human-like cognitive capabilities or whether these models are still fundamentally circumscribed.
We introduce a new benchmark for evaluating language models beyond memorization on abstract reasoning tasks.
arXiv Detail & Related papers (2023-05-31T04:50:29Z)
- Sentiment Analysis in the Era of Large Language Models: A Reality Check [69.97942065617664]
This paper investigates the capabilities of large language models (LLMs) in performing various sentiment analysis tasks.
We evaluate performance across 13 tasks on 26 datasets and compare the results against small language models (SLMs) trained on domain-specific datasets.
arXiv Detail & Related papers (2023-05-24T10:45:25Z)
- Large Language Models are In-Context Semantic Reasoners rather than Symbolic Reasoners [75.85554779782048]
Large Language Models (LLMs) have excited the natural language and machine learning community over recent years.
Despite numerous successful applications, the underlying mechanism of such in-context capabilities remains unclear.
In this work, we hypothesize that the learned semantics of language tokens do most of the heavy lifting during the reasoning process.
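A common way to probe this hypothesis is to ablate the semantics: replace content words with opaque symbols and check whether the model can still follow the rule. A sketch of such a substitution (an illustrative probe, not necessarily the paper's exact protocol):

```python
import re

# Illustrative semantics-ablation probe: replace content words with opaque
# symbols so only the symbolic structure of the inference remains.
semantic = "If it rains then the grass is wet. It rains. Is the grass wet?"

symbol_map = {"rains": "blick", "grass": "fep", "wet": "dax"}

def desemanticize(text: str, mapping: dict[str, str]) -> str:
    for word, symbol in mapping.items():
        text = re.sub(rf"\b{word}\b", symbol, text)
    return text

symbolic = desemanticize(semantic, symbol_map)
print(symbolic)
# "If it blick then the fep is dax. It blick. Is the fep dax?"
# If an LLM solves the first prompt but not this one, its success leaned
# on token semantics rather than on the underlying symbolic rule.
```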
arXiv Detail & Related papers (2023-05-24T07:33:34Z)
- Towards Reasoning in Large Language Models: A Survey [11.35055307348939]
It is not yet clear to what extent large language models (LLMs) are capable of reasoning.
This paper provides a comprehensive overview of the current state of knowledge on reasoning in LLMs.
arXiv Detail & Related papers (2022-12-20T16:29:03Z)