Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge
Reasoning via Promoting Causal Consistency in LLMs
- URL: http://arxiv.org/abs/2308.11914v2
- Date: Mon, 4 Sep 2023 10:15:51 GMT
- Title: Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge
Reasoning via Promoting Causal Consistency in LLMs
- Authors: Ziyi Tang, Ruilin Wang, Weixing Chen, Keze Wang, Yang Liu, Tianshui
Chen, Liang Lin
- Abstract summary: We present a framework to increase faithfulness and causality for knowledge-based reasoning.
Our framework outperforms all compared state-of-the-art approaches by large margins.
- Score: 63.26541167737355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite advancements in LLMs, knowledge-based reasoning remains a
longstanding issue due to the fragility of knowledge recall and inference.
Existing methods primarily encourage LLMs to autonomously plan and solve
problems or to extensively sample reasoning chains without addressing the
conceptual and inferential fallacies. Attempting to alleviate inferential
fallacies and drawing inspiration from multi-agent collaboration, we present a
framework to increase faithfulness and causality for knowledge-based reasoning.
Specifically, we propose to employ multiple intelligent agents (i.e., reasoners
and an evaluator) to work collaboratively in a reasoning-and-consensus paradigm
for elevated reasoning faithfulness. The reasoners focus on providing solutions
with human-like causality to solve open-domain problems. On the other hand, the
evaluator agent scrutinizes if a solution is deducible from a
non-causal perspective and if it still holds when challenged by a
counterfactual candidate. Extensive evaluations on a variety of knowledge
reasoning tasks (e.g., science question answering and commonsense reasoning)
show that our framework outperforms all compared state-of-the-art approaches by
large margins.
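To make the reasoning-and-consensus paradigm concrete, the sketch below shows one plausible way to wire reasoner and evaluator agents together. It is a minimal, assumption-laden illustration rather than the authors' code: the query_llm callable, the prompts, the single counterfactual candidate, and the majority vote are all hypothetical stand-ins.

```python
# A minimal sketch of the reasoning-and-consensus paradigm described above,
# assuming a generic `query_llm` callable. The prompts, the counterfactual
# check, and the majority rule are illustrative placeholders, not the
# authors' implementation.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Solution:
    answer: str
    rationale: str  # the reasoner's step-by-step causal chain


def reason(question: str, query_llm: Callable[[str], str]) -> Solution:
    """Reasoner agent: produce an answer together with an explicit causal rationale."""
    rationale = query_llm(
        f"Explain, step by step, the causal chain that answers: {question}"
    )
    answer = query_llm(
        f"Given this reasoning:\n{rationale}\nState the final answer only."
    )
    return Solution(answer=answer, rationale=rationale)


def evaluate(question: str, sol: Solution, counterfactual: str,
             query_llm: Callable[[str], str]) -> bool:
    """Evaluator agent: check non-causal deducibility and counterfactual robustness."""
    deducible = query_llm(
        f"Setting aside the reasoning below, is '{sol.answer}' a deducible answer to "
        f"'{question}'? Reply yes or no.\n{sol.rationale}"
    ).strip().lower().startswith("yes")
    robust = query_llm(
        f"Does '{sol.answer}' still hold for '{question}' when challenged by the "
        f"counterfactual candidate '{counterfactual}'? Reply yes or no."
    ).strip().lower().startswith("yes")
    return deducible and robust


def consensus(question: str, counterfactual: str, n_reasoners: int,
              query_llm: Callable[[str], str]) -> str:
    """Collect solutions that pass both checks and return a majority answer."""
    accepted: List[str] = []
    for _ in range(n_reasoners):
        sol = reason(question, query_llm)
        if evaluate(question, sol, counterfactual, query_llm):
            accepted.append(sol.answer)
    # Simple majority vote over accepted answers (an assumed consensus rule).
    return max(set(accepted), key=accepted.count) if accepted else "no consensus"
```

In practice, query_llm would wrap an actual model call, and the counterfactual candidate would be produced by an agent rather than supplied by the caller.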
Related papers
- Improving Causal Reasoning in Large Language Models: A Survey [16.55801836321059]
Causal reasoning is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world.
Large language models (LLMs) can generate rationales for their outputs, but their ability to reliably perform causal reasoning remains uncertain.
(arXiv 2024-10-22)
- Make LLMs better zero-shot reasoners: Structure-orientated autonomous reasoning [52.83539473110143]
We introduce a novel structure-oriented analysis method to help Large Language Models (LLMs) better understand a question.
To further improve reliability in complex question-answering tasks, we propose a multi-agent reasoning system, Structure-oriented Autonomous Reasoning Agents (SARA).
Extensive experiments verify the effectiveness of the proposed reasoning system. Surprisingly, in some cases, the system even surpasses few-shot methods.
(arXiv 2024-10-18)
- Improving LLM Reasoning with Multi-Agent Tree-of-Thought Validator Agent [9.439315294704368]
Tree of Thoughts (ToT) methods have shown potential in improving reasoning for complex question-answering tasks.
A critical limitation in multi-agent reasoning is the 'Reasoner' agent's shallow exploration of reasoning paths.
We introduce a novel approach combining ToT-based Reasoner agents with a Thought Validator agent.
Our method demonstrates superior performance compared to existing techniques when evaluated on the GSM8K dataset.
(arXiv 2024-09-17)
- Reasoning with Large Language Models, a Survey [2.831296564800826]
This paper reviews the rapidly expanding field of prompt-based reasoning with LLMs.
Our taxonomy identifies different ways to generate, evaluate, and control multi-step reasoning.
We find that self-improvement, self-reflection, and some meta abilities of the reasoning processes are possible through the judicious use of prompts.
(arXiv 2024-07-16)
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people understand the principles of the real world better.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and provide a pragmatic guide for beginners.
(arXiv 2024-06-27)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
(arXiv 2023-11-14)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
(arXiv 2023-10-24)
- Concise and Organized Perception Facilitates Reasoning in Large Language Models [32.71672086718057]
We show that large language models (LLMs) exhibit failure patterns akin to human-like cognitive biases when dealing with disordered and irrelevant content in reasoning tasks.
We propose a novel reasoning approach named Concise and Organized Perception (COP).
COP carefully analyzes the given statements to identify the most pertinent information while eliminating redundancy efficiently.
(arXiv 2023-10-05)
- Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate [85.3444184685235]
We propose a Multi-Agent Debate (MAD) framework in which multiple agents express their arguments in a "tit for tat" fashion and a judge manages the debate process to obtain a final solution.
Our framework encourages divergent thinking in LLMs which would be helpful for tasks that require deep levels of contemplation.
(arXiv 2023-05-30)