LLMs with Chain-of-Thought Are Non-Causal Reasoners
- URL: http://arxiv.org/abs/2402.16048v1
- Date: Sun, 25 Feb 2024 10:13:04 GMT
- Title: LLMs with Chain-of-Thought Are Non-Causal Reasoners
- Authors: Guangsheng Bao, Hongbo Zhang, Linyi Yang, Cunxiang Wang, Yue Zhang
- Abstract summary: We employ causal analysis to assess the cause-effect relationship between CoTs/instructions and answers in Large Language Models.
By comparing the implied SCM with that of human reasoning, we highlight discrepancies between LLM and human reasoning processes.
In-context learning, supervised fine-tuning, and reinforcement learning from human feedback significantly impact the causal relations.
- Score: 34.18612597843633
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper explores the role of the Chain of Thought (CoT) in Large Language
Models (LLMs) reasoning. Despite its potential to improve task performance, our
analysis reveals a surprising frequency of correct answers following incorrect
CoTs and vice versa. We employ causal analysis to assess the cause-effect
relationship between CoTs/instructions and answers in LLMs, uncovering the
Structural Causal Model (SCM) that LLMs approximate. By comparing the implied
SCM with that of human reasoning, we highlight discrepancies between LLM and
human reasoning processes. We further examine the factors influencing the
causal structure of the implied SCM, revealing that in-context learning,
supervised fine-tuning, and reinforcement learning from human feedback
significantly impact the causal relations. We release the code and results at
https://github.com/StevenZHB/CoT_Causal_Analysis.
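To make the causal analysis concrete, the sketch below shows one way to probe whether an LLM's answer causally depends on its CoT: intervene on the CoT (replace it with irrelevant reasoning) and check how often the final answer changes. This is an illustrative sketch under stated assumptions, not the paper's released code; `generate`, `cot_intervention_effect`, and the prompt templates are hypothetical stand-ins for whatever model call and prompting format are actually used.

```python
# Illustrative sketch only (not the released CoT_Causal_Analysis code).
from typing import Callable, List

def cot_intervention_effect(
    questions: List[str],
    generate: Callable[[str], str],  # hypothetical prompt -> completion LLM call
    corrupted_cot: str = "Step 1: (deliberately irrelevant reasoning).",
) -> float:
    """Fraction of questions whose final answer changes when the CoT is replaced."""
    changed = 0
    for q in questions:
        # Factual run: let the model produce its own CoT, then condition the
        # answer on that CoT.
        own_cot = generate(f"{q}\nLet's think step by step.")
        factual = generate(f"{q}\n{own_cot}\nTherefore, the answer is")

        # Interventional run: force a corrupted CoT, i.e. do(CoT = corrupted).
        counterfactual = generate(f"{q}\n{corrupted_cot}\nTherefore, the answer is")

        changed += factual.strip() != counterfactual.strip()

    # A value near 0 means answers are largely insensitive to the CoT, i.e. the
    # CoT -> answer edge in the implied SCM is weak (non-causal reasoning).
    return changed / max(len(questions), 1)
```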
Related papers
- On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models [25.029579061612456]
Large Language Models (LLMs) are increasingly being employed in real-world applications in critical domains such as healthcare.
It is important to ensure that the Chain-of-Thought (CoT) reasoning generated by these models faithfully captures their underlying behavior.
arXiv Detail & Related papers (2024-06-15T13:16:44Z)
- Towards Faithful Chain-of-Thought: Large Language Models are Bridging Reasoners [19.40385041079461]
Large language models (LLMs) suffer from serious unfaithful chain-of-thought (CoT) issues.
We first study the CoT faithfulness issue at the granularity of CoT steps and identify two reasoning paradigms.
We then conduct a joint analysis of the causal relevance among the context, CoT, and answer during reasoning.
arXiv Detail & Related papers (2024-05-29T09:17:46Z)
- Aggregation of Reasoning: A Hierarchical Framework for Enhancing Answer Selection in Large Language Models [84.15513004135576]
Current research enhances the reasoning performance of Large Language Models (LLMs) by sampling multiple reasoning chains and ensembling based on the answer frequency.
This approach fails in scenarios where the correct answers are in the minority.
We introduce a hierarchical reasoning aggregation framework AoR, which selects answers based on the evaluation of reasoning chains.
arXiv Detail & Related papers (2024-05-21T17:12:19Z)
- Evaluating Interventional Reasoning Capabilities of Large Language Models [58.52919374786108]
Numerous decision-making tasks require estimating causal effects under interventions on different parts of a system.
We conduct empirical analyses to evaluate whether LLMs can accurately update their knowledge of a data-generating process in response to an intervention.
We create benchmarks that span diverse causal graphs (e.g., confounding, mediation) and variable types, enabling a study of intervention-based reasoning (a toy numerical sketch of such interventional queries appears after this list).
arXiv Detail & Related papers (2024-04-08T14:15:56Z)
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel framework, Context-Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA).
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances the performance of multiple causal reasoning tasks such as causal discovery, causal identification and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z)
- Direct Evaluation of Chain-of-Thought in Multi-hop Reasoning with Knowledge Graphs [52.42505579545893]
Large language models (LLMs) demonstrate strong reasoning abilities when prompted to generate chain-of-thought explanations alongside answers.
We propose a novel discriminative and generative CoT evaluation paradigm to assess LLMs' knowledge of reasoning and the accuracy of the generated CoT.
arXiv Detail & Related papers (2024-02-17T05:22:56Z)
- The Impact of Reasoning Step Length on Large Language Models [40.546685248243534]
Chain of Thought (CoT) is significant in improving the reasoning abilities of large language models.
We investigate the correlation between the effectiveness of CoT and the length of reasoning steps in prompts.
arXiv Detail & Related papers (2024-01-10T04:37:38Z)
- CLadder: Assessing Causal Reasoning in Language Models [82.8719238178569]
We investigate whether large language models (LLMs) can coherently reason about causality.
We propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
arXiv Detail & Related papers (2023-12-07T15:12:12Z)
- Inducing Causal Structure for Abstractive Text Summarization [76.1000380429553]
We introduce a Structural Causal Model (SCM) to induce the underlying causal structure of the summarization data.
We propose a Causality Inspired Sequence-to-Sequence model (CI-Seq2Seq) to learn the causal representations that can mimic the causal factors.
Experimental results on two widely used text summarization datasets demonstrate the advantages of our approach.
arXiv Detail & Related papers (2023-08-24T16:06:36Z)
- Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters [82.84696222087396]
Chain-of-Thought (CoT) prompting can dramatically improve the multi-step reasoning abilities of large language models (LLMs).
We show that CoT reasoning is possible even with invalid demonstrations.
arXiv Detail & Related papers (2022-12-20T05:20:54Z)
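Several of the related papers above (e.g., the interventional-reasoning benchmarks and CLadder) ask whether LLMs can answer interventional queries of the kind Pearl's causal inference engine computes. As a toy, self-contained illustration (not taken from any of the papers; all probabilities are invented for the example), the sketch below contrasts the observational quantity P(Y=1 | X=1) with the interventional quantity P(Y=1 | do(X=1)) on a small confounded SCM.

```python
# Toy SCM with a confounder Z -> {X, Y} and a treatment edge X -> Y, showing how
# an observational probability differs from the corresponding do-quantity
# obtained by back-door adjustment over Z. All numbers are made up.
from itertools import product

p_z = 0.5                       # P(Z=1)
p_x_given_z = {0: 0.2, 1: 0.8}  # P(X=1 | Z=z)
p_y_given_xz = {                # P(Y=1 | X=x, Z=z)
    (0, 0): 0.1, (0, 1): 0.5,
    (1, 0): 0.4, (1, 1): 0.9,
}

def p_joint(x: int, y: int, z: int) -> float:
    """Joint probability P(X=x, Y=y, Z=z) implied by the SCM factorization."""
    pz = p_z if z == 1 else 1 - p_z
    px = p_x_given_z[z] if x == 1 else 1 - p_x_given_z[z]
    py = p_y_given_xz[(x, z)] if y == 1 else 1 - p_y_given_xz[(x, z)]
    return pz * px * py

# Observational: P(Y=1 | X=1) = sum_z P(X=1, Y=1, Z=z) / P(X=1)
num = sum(p_joint(1, 1, z) for z in (0, 1))
den = sum(p_joint(1, y, z) for y, z in product((0, 1), repeat=2))
p_obs = num / den

# Interventional: P(Y=1 | do(X=1)) = sum_z P(Y=1 | X=1, Z=z) * P(Z=z)
p_do = sum(p_y_given_xz[(1, z)] * (p_z if z == 1 else 1 - p_z) for z in (0, 1))

print(f"P(Y=1 | X=1)     = {p_obs:.3f}")   # confounded association
print(f"P(Y=1 | do(X=1)) = {p_do:.3f}")    # causal effect via adjustment
```

With these invented numbers the observational value is 0.800 while the interventional value is 0.650, illustrating how confounding inflates the naive association relative to the causal effect.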