Towards CausalGPT: A Multi-Agent Approach for Faithful Knowledge
Reasoning via Promoting Causal Consistency in LLMs
- URL: http://arxiv.org/abs/2308.11914v2
- Date: Mon, 4 Sep 2023 10:15:51 GMT
- Authors: Ziyi Tang, Ruilin Wang, Weixing Chen, Keze Wang, Yang Liu, Tianshui
Chen, Liang Lin
- Abstract summary: We present a framework to increase faithfulness and causality for knowledge-based reasoning.
Our framework outperforms all compared state-of-the-art approaches by large margins.
- Score: 63.26541167737355
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite advancements in LLMs, knowledge-based reasoning remains a
longstanding issue due to the fragility of knowledge recall and inference.
Existing methods primarily encourage LLMs to autonomously plan and solve
problems or to extensively sample reasoning chains without addressing the
conceptual and inferential fallacies. Attempting to alleviate inferential
fallacies and drawing inspiration from multi-agent collaboration, we present a
framework to increase faithfulness and causality for knowledge-based reasoning.
Specifically, we propose to employ multiple intelligent agents (i.e., reasoners
and an evaluator) to work collaboratively in a reasoning-and-consensus paradigm
for elevated reasoning faithfulness. The reasoners focus on providing solutions
with human-like causality to solve open-domain problems. On the other hand, the
evaluator agent scrutinizes whether a solution is deducible from a
non-causal perspective and if it still holds when challenged by a
counterfactual candidate. According to the extensive and comprehensive
evaluations on a variety of knowledge reasoning tasks (e.g., science question
answering and commonsense reasoning), our framework outperforms all compared
state-of-the-art approaches by large margins.
Related papers
- Causality can systematically address the monsters under the bench(marks) [64.36592889550431]
Benchmarks are plagued by various biases, artifacts, or leakage.
Models may behave unreliably due to poorly explored failure modes.
Causality offers an ideal framework to systematically address these challenges.
arXiv Detail & Related papers (2025-02-07T17:01:37Z)
- CausalEval: Towards Better Causal Reasoning in Language Models [16.55801836321059]
Causal reasoning (CR) is a crucial aspect of intelligence, essential for problem-solving, decision-making, and understanding the world.
While language models (LMs) can generate rationales for their outputs, their ability to reliably perform causal reasoning remains uncertain.
We introduce CausalEval, a review of research aimed at enhancing LMs for causal reasoning.
arXiv Detail & Related papers (2024-10-22T04:18:19Z)
- CSCE: Boosting LLM Reasoning by Simultaneous Enhancing of Causal Significance and Consistency [12.961692839965115]
Chain-based reasoning methods such as chain of thought (CoT) play a rising role in solving reasoning tasks for large language models (LLMs).
This paper proposes a non-chain-based reasoning framework for simultaneous consideration of causal significance and consistency.
arXiv Detail & Related papers (2024-09-20T08:28:23Z)
- Disentangled Representations for Causal Cognition [0.0]
Causal cognition studies describe the main characteristics of causal learning and reasoning in human and non-human animals.
Machine and reinforcement learning research on causality represents, on the other hand, a concrete attempt at designing causal artificial agents.
In this work, we connect these two areas of research to build a unifying framework for causal cognition.
arXiv Detail & Related papers (2024-06-30T16:10:17Z)
- The Odyssey of Commonsense Causality: From Foundational Benchmarks to Cutting-Edge Reasoning [70.16523526957162]
Understanding commonsense causality helps people understand the principles of the real world better.
Despite its significance, a systematic exploration of this topic is notably lacking.
Our work aims to provide a systematic overview, update scholars on recent advancements, and provide a pragmatic guide for beginners.
arXiv Detail & Related papers (2024-06-27T16:30:50Z)
- Conceptual and Unbiased Reasoning in Language Models [98.90677711523645]
We propose a novel conceptualization framework that forces models to perform conceptual reasoning on abstract questions.
We show that existing large language models fall short on conceptual reasoning, dropping 9% to 28% on various benchmarks.
We then discuss how models can improve since high-level abstract reasoning is key to unbiased and generalizable decision-making.
arXiv Detail & Related papers (2024-03-30T00:53:53Z)
- DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models [28.712359821231182]
Large language models (LLMs) have made remarkable strides in multi-step reasoning on the language modality alone by leveraging the chain of thought (CoT) to mimic human thinking.
The transfer of these advancements to multimodal contexts introduces heightened challenges, including but not limited to the impractical need for labor-intensive annotation.
This study proposes a novel DDCoT prompting that maintains a critical attitude through negative-space prompting and incorporates multimodality into reasoning.
arXiv Detail & Related papers (2023-10-25T08:03:10Z)
- From Heuristic to Analytic: Cognitively Motivated Strategies for Coherent Physical Commonsense Reasoning [66.98861219674039]
Heuristic-Analytic Reasoning (HAR) strategies drastically improve the coherence of rationalizations for model decisions.
Our findings suggest that human-like reasoning strategies can effectively improve the coherence and reliability of PLM reasoning.
arXiv Detail & Related papers (2023-10-24T19:46:04Z)
- Causal Inference Principles for Reasoning about Commonsense Causality [93.19149325083968]
Commonsense causality reasoning aims at identifying plausible causes and effects in natural language descriptions that are deemed reasonable by an average person.
Existing work usually relies on deep language models wholeheartedly, and is potentially susceptible to confounding co-occurrences.
Motivated by classical causal principles, we articulate the central question of CCR and draw parallels between human subjects in observational studies and natural languages.
We propose a novel framework, ROCK, to Reason O(A)bout Commonsense K(C)ausality, which utilizes temporal signals as incidental supervision.
arXiv Detail & Related papers (2022-01-31T06:12:39Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.