CHECKWHY: Causal Fact Verification via Argument Structure
- URL: http://arxiv.org/abs/2408.10918v2
- Date: Tue, 24 Sep 2024 07:44:27 GMT
- Title: CHECKWHY: Causal Fact Verification via Argument Structure
- Authors: Jiasheng Si, Yibo Zhao, Yingjie Zhu, Haiyang Zhu, Wenpeng Lu, Deyu Zhou
- Abstract summary: CheckWhy is a dataset tailored to a novel causal fact verification task.
CheckWhy consists of over 19K "why" claim-evidence-argument structure triplets with supports, refutes, and not enough info labels.
- Score: 19.347690600431463
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the growing complexity of fact verification tasks, concern over "thoughtful" reasoning capabilities is increasing. However, recent fact verification benchmarks mainly focus on checking a narrow scope of semantic factoids within claims and lack an explicit logical reasoning process. In this paper, we introduce CheckWhy, a challenging dataset tailored to a novel causal fact verification task: checking the truthfulness of the causal relation within claims through rigorous reasoning steps. CheckWhy consists of over 19K "why" claim-evidence-argument structure triplets with supports, refutes, and not enough info labels. Each argument structure is composed of connected evidence, representing the reasoning process that begins with foundational evidence and progresses toward claim establishment. Through extensive experiments on state-of-the-art models, we validate the importance of incorporating the argument structure for causal fact verification. Moreover, the automated and human evaluation of argument structure generation reveals the difficulty that fine-tuned models and Chain-of-Thought-prompted LLMs have in producing satisfactory argument structures, leaving considerable room for future improvements.
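To make the described data format concrete, the following is a minimal sketch of what a CheckWhy-style instance might look like. The field names, edge encoding, and example content are illustrative assumptions inferred from the abstract, not the dataset's actual schema.

```python
# Hypothetical sketch of a CheckWhy-style claim-evidence-argument instance.
# Field names and the example record are illustrative, not taken from the dataset.
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class CheckWhyInstance:
    claim: str                      # a "why" claim asserting a causal relation
    label: str                      # "supports", "refutes", or "not enough info"
    evidence: List[str]             # foundational evidence sentences
    # Argument structure: directed edges (premise -> conclusion) connecting
    # evidence steps to one another and finally to the claim node.
    argument_edges: List[Tuple[str, str]] = field(default_factory=list)

# A made-up example showing how evidence can be chained toward the claim.
example = CheckWhyInstance(
    claim="Heavy spring rainfall caused the 2019 crop failure in region X.",
    label="supports",
    evidence=[
        "E1: Region X recorded record rainfall in spring 2019.",
        "E2: Prolonged waterlogging damages the root systems of staple crops.",
        "E3: Crop yields in region X fell sharply in 2019.",
    ],
    argument_edges=[("E1", "E2"), ("E2", "E3"), ("E3", "claim")],
)
print(example.label, example.argument_edges)
```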
Related papers
- Counterfactual and Semifactual Explanations in Abstract Argumentation: Formal Foundations, Complexity and Computation [19.799266797193344]
Argumentation-based systems often lack explainability while supporting decision-making processes.
Counterfactual and semifactual explanations are interpretability techniques.
We show that counterfactual and semifactual queries can be encoded in a weak-constrained Argumentation Framework.
arXiv Detail & Related papers (2024-05-07T07:27:27Z)
- Mitigating Misleading Chain-of-Thought Reasoning with Selective Filtering [59.495717939664246]
Large language models have manifested remarkable capabilities by leveraging chain-of-thought (CoT) reasoning techniques to solve intricate questions.
We propose a novel approach called the selective filtering reasoner (SelF-Reasoner) that assesses the entailment relationship between the question and the candidate reasoning chain.
SelF-Reasoner improves the fine-tuned T5 baseline consistently over the ScienceQA, ECQA, and LastLetter tasks.
arXiv Detail & Related papers (2024-03-28T06:28:35Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z)
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [36.91218391728405]
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning.
It can verify complex claims and generate explanations without the need for annotated evidence.
Our experiment results indicate that FOLK outperforms strong baselines on three datasets.
arXiv Detail & Related papers (2023-10-08T18:04:05Z)
- Deductive Verification of Chain-of-Thought Reasoning [22.79166959432764]
Large Language Models (LLMs) benefit from Chain-of-Thought prompting in performing various reasoning tasks.
While CoT allows models to produce more comprehensive reasoning processes, its emphasis on intermediate reasoning steps can inadvertently introduce hallucinations and accumulated errors.
We propose Natural Program, a natural language-based deductive reasoning format.
arXiv Detail & Related papers (2023-06-06T17:18:56Z)
- Read it Twice: Towards Faithfully Interpretable Fact Verification by Revisiting Evidence [59.81749318292707]
We propose a fact verification model named ReRead that retrieves evidence and verifies claims.
The proposed system is able to achieve significant improvements upon best-reported models under different settings.
arXiv Detail & Related papers (2023-05-02T03:23:14Z)
- MetaLogic: Logical Reasoning Explanations with Fine-Grained Structure [129.8481568648651]
We propose a benchmark to investigate models' logical reasoning capabilities in complex real-life scenarios.
The explanation form is based on a multi-hop chain of reasoning and includes three main components.
We evaluate the current best models' performance on this new explanation form.
arXiv Detail & Related papers (2022-10-22T16:01:13Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Annotating Implicit Reasoning in Arguments with Causal Links [34.77514899468729]
We focus on identifying the implicit knowledge in the form of argumentation knowledge.
Being inspired by the Argument from Consequences scheme, we propose a semi-structured template to represent such argumentation knowledge.
We show how to collect and filter high-quality implicit reasonings via crowdsourcing.
arXiv Detail & Related papers (2021-10-26T13:28:53Z)
- Topic-Aware Evidence Reasoning and Stance-Aware Aggregation for Fact Verification [19.130541561303293]
We propose a novel topic-aware evidence reasoning and stance-aware aggregation model for fact verification.
Tests conducted on two benchmark datasets demonstrate the superiority of the proposed model over several state-of-the-art approaches for fact verification.
arXiv Detail & Related papers (2021-06-02T14:33:12Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.