The Magic of IF: Investigating Causal Reasoning Abilities in Large
Language Models of Code
- URL: http://arxiv.org/abs/2305.19213v1
- Date: Tue, 30 May 2023 17:02:58 GMT
- Title: The Magic of IF: Investigating Causal Reasoning Abilities in Large
Language Models of Code
- Authors: Xiao Liu, Da Yin, Chen Zhang, Yansong Feng, Dongyan Zhao
- Abstract summary: Causal reasoning, the ability to identify cause-and-effect relationships, is crucial in human thinking.
We show that Code-LLMs with code prompts are significantly better at causal reasoning than text-only LLMs.
- Score: 74.3873029963285
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Causal reasoning, the ability to identify cause-and-effect
relationships, is crucial in human thinking. Although large language models
(LLMs) succeed in many NLP tasks, it is still challenging for them to conduct
complex causal reasoning such as abductive reasoning and counterfactual
reasoning. Given that programming code may express causal relations more often
and more explicitly with conditional statements like ``if``, we explore
whether Code-LLMs acquire better causal reasoning abilities. Our experiments
show that, compared to text-only LLMs, Code-LLMs with code prompts are
significantly better at causal reasoning. We further intervene on the prompts
from different aspects, and discover that the programming structure is crucial
in code prompt design, while Code-LLMs are robust to format perturbations.
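
Since the claim hinges on ``if`` statements making cause-and-effect structure explicit, a toy code prompt helps make the setup concrete. The following is a minimal, hypothetical Python sketch of how a counterfactual question might be wrapped in a small program before being sent to a Code-LLM; the helper name build_code_prompt and the template wording are assumptions for illustration, not the prompt design used in the paper.

# Hypothetical illustration: a counterfactual question rephrased as a
# code-style prompt, so the causal structure is carried by an ``if`` branch.
# The template and names are assumptions, not the paper's actual format.

def build_code_prompt(premise: str, intervention: str, question: str) -> str:
    """Return a code-style prompt string for a Code-LLM to complete."""
    return f'''"""Premise: {premise}"""

def outcome(intervention_applied: bool) -> str:
    if intervention_applied:
        # {intervention}
        return ...  # the Code-LLM is asked to complete this branch
    else:
        return "the original outcome holds"

# Question: {question}
answer = outcome(intervention_applied=True)
'''

prompt = build_code_prompt(
    premise="The sprinkler was off, so the grass stayed dry.",
    intervention="Suppose the sprinkler had been turned on.",
    question="Would the grass still be dry?",
)
print(prompt)  # this string is what would be sent to a Code-LLM

Contrasting a model's completion of such a prompt with its answer to the same question posed as plain text is the kind of comparison the abstract describes.
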
Related papers
- LogicBench: Towards Systematic Evaluation of Logical Reasoning Ability of Large Language Models [52.03659714625452]
Recently developed large language models (LLMs) have been shown to perform remarkably well on a wide range of language understanding tasks.
But can they really "reason" over natural language?
This question has received significant research attention, and many reasoning skills, such as commonsense, numerical, and qualitative reasoning, have been studied.
arXiv Detail & Related papers (2024-04-23T21:08:49Z)
- Language Models as Compilers: Simulating Pseudocode Execution Improves Algorithmic Reasoning in Language Models [17.76252625790628]
This paper presents Think-and-Execute, a framework that decomposes the reasoning process of language models into two steps.
With extensive experiments on seven algorithmic reasoning tasks, we demonstrate the effectiveness of Think-and-Execute.
arXiv Detail & Related papers (2024-04-03T08:49:11Z)
- Cause and Effect: Can Large Language Models Truly Understand Causality? [1.2334534968968969]
This research proposes a novel framework called Context Aware Reasoning Enhancement with Counterfactual Analysis (CARE-CA).
The proposed framework incorporates an explicit causal detection module with ConceptNet and counterfactual statements, as well as implicit causal detection through Large Language Models.
The knowledge from ConceptNet enhances performance on multiple causal reasoning tasks, such as causal discovery, causal identification, and counterfactual reasoning.
arXiv Detail & Related papers (2024-02-28T08:02:14Z)
- Code Prompting Elicits Conditional Reasoning Abilities in Text+Code LLMs [65.2379940117181]
We introduce code prompting, a chain of prompts that transforms a natural language problem into code.
We find that code prompting yields a large performance boost for multiple LLMs.
Our analysis of GPT-3.5 reveals that the code formatting of the input problem is essential for the performance improvement.
arXiv Detail & Related papers (2024-01-18T15:32:24Z)
- CLadder: Assessing Causal Reasoning in Language Models [82.8719238178569]
We investigate whether large language models (LLMs) can coherently reason about causality.
We propose a new NLP task, causal inference in natural language, inspired by the "causal inference engine" postulated by Judea Pearl et al.
arXiv Detail & Related papers (2023-12-07T15:12:12Z)
- Enhancing Zero-Shot Chain-of-Thought Reasoning in Large Language Models through Logic [19.476840373850653]
Large language models are prone to hallucinations because their reasoning procedures are unconstrained by logical principles.
We propose LoT (Logical Thoughts), a self-improvement prompting framework that leverages principles rooted in symbolic logic.
Experimental evaluations conducted on language tasks in diverse domains, including arithmetic, commonsense, symbolic, causal inference, and social problems, demonstrate the efficacy of enhanced reasoning by logic.
arXiv Detail & Related papers (2023-09-23T11:21:12Z)
- When Do Program-of-Thoughts Work for Reasoning? [51.2699797837818]
We propose the complexity-impacted reasoning score (CIRS) to measure the correlation between code and reasoning abilities.
Specifically, we use the abstract syntax tree to encode the structural information and calculate logical complexity (a generic sketch of this idea appears after this list).
Code will be integrated into the EasyInstruct framework at https://github.com/zjunlp/EasyInstruct.
arXiv Detail & Related papers (2023-08-29T17:22:39Z)
- Code Prompting: a Neural Symbolic Method for Complex Reasoning in Large Language Models [74.95486528482327]
We explore code prompting, a neural symbolic prompting method with both zero-shot and few-shot versions which triggers code as intermediate steps.
We conduct experiments on 7 widely-used benchmarks involving symbolic reasoning and arithmetic reasoning.
arXiv Detail & Related papers (2023-05-29T15:14:09Z)
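
As a companion to the CIRS entry above (When Do Program-of-Thoughts Work for Reasoning?), here is a generic Python sketch of deriving a structural-complexity signal from an abstract syntax tree. It is not the paper's CIRS formula; the node types counted and the normalization are assumptions made only to illustrate the AST-based idea.

# Generic sketch: scoring the "logical structure" of code from its AST.
# This is NOT the CIRS metric from the paper; the node types counted and
# the normalization below are illustrative assumptions.
import ast

# Node types treated here as carrying logical/structural information.
LOGIC_NODES = (ast.If, ast.For, ast.While, ast.BoolOp, ast.Compare, ast.FunctionDef)

def structural_complexity(source: str) -> float:
    """Return the fraction of AST nodes that carry logical structure."""
    tree = ast.parse(source)
    nodes = list(ast.walk(tree))
    logical = sum(isinstance(node, LOGIC_NODES) for node in nodes)
    return logical / max(len(nodes), 1)

example = """
def outcome(sprinkler_on, raining):
    if sprinkler_on or raining:
        return "grass is wet"
    return "grass is dry"
"""
print(structural_complexity(example))  # higher = more conditional structure

A score of this kind could then be correlated with downstream reasoning accuracy, which is the general analysis the CIRS summary above describes.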