Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models
- URL: http://arxiv.org/abs/2310.05253v2
- Date: Fri, 20 Oct 2023 02:31:21 GMT
- Title: Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models
- Authors: Haoran Wang, Kai Shu
- Abstract summary: This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning.
It can verify complex claims and generate explanations without the need for annotated evidence.
Our experiment results indicate that FOLK outperforms strong baselines on three datasets.
- Score: 36.91218391728405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Claim verification plays a crucial role in combating misinformation. While
existing works on claim verification have shown promising results, a crucial
piece of the puzzle that remains unsolved is to understand how to verify claims
without relying on human-annotated data, which is expensive to create at a
large scale. Additionally, it is important for models to provide comprehensive
explanations that can justify their decisions and assist human fact-checkers.
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK)
Reasoning that can verify complex claims and generate explanations without the
need for annotated evidence using Large Language Models (LLMs). FOLK leverages
the in-context learning ability of LLMs to translate the claim into a
First-Order-Logic (FOL) clause consisting of predicates, each corresponding to
a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning
over a set of knowledge-grounded question-and-answer pairs to make veracity
predictions and generate explanations to justify its decision-making process.
This process makes our model highly explanatory, providing clear explanations
of its reasoning process in human-readable form. Our experiment results
indicate that FOLK outperforms strong baselines on three datasets encompassing
various claim verification challenges. Our code and data are available.
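
To make the pipeline described in the abstract concrete, below is a minimal Python sketch of how its three steps could fit together: decomposing the claim into FOL predicates (each tied to a sub-claim and a verification question), grounding each predicate with a knowledge-backed question-and-answer pair, and aggregating the grounded predicates into a veracity label plus a human-readable explanation. This is an illustrative sketch, not the authors' released code; the helpers `llm_complete` and `retrieve_evidence`, the `Name(args) ::: question` output format, and the conjunction-based aggregation are placeholders introduced here.

```python
from __future__ import annotations

from dataclasses import dataclass


@dataclass
class Predicate:
    """One FOL predicate, e.g. Directed(James Cameron, Titanic), tied to a sub-claim."""
    name: str
    arguments: tuple[str, ...]
    question: str              # question whose answer verifies the sub-claim
    answer: str = ""           # knowledge-grounded answer, filled in during grounding
    holds: bool | None = None  # truth value assigned after grounding


def llm_complete(prompt: str) -> str:
    """Placeholder for an in-context-learning call to an LLM."""
    raise NotImplementedError("wire this to an LLM client")


def retrieve_evidence(question: str) -> str:
    """Placeholder for a knowledge source (search engine, Wikipedia, ...)."""
    raise NotImplementedError("wire this to a retriever")


def claim_to_predicates(claim: str) -> list[Predicate]:
    """Step 1: ask the LLM to emit one 'Name(arg1, arg2) ::: question' line per sub-claim."""
    raw = llm_complete(
        "Translate the claim into First-Order-Logic predicates, one per line, "
        "formatted as Name(arg1, arg2) ::: verification question.\n"
        f"Claim: {claim}"
    )
    predicates = []
    for line in raw.splitlines():
        if ":::" not in line or "(" not in line:
            continue
        fol, question = (part.strip() for part in line.split(":::", 1))
        name, args = fol.rstrip(")").split("(", 1)
        predicates.append(Predicate(
            name=name.strip(),
            arguments=tuple(a.strip() for a in args.split(",")),
            question=question,
        ))
    return predicates


def ground_predicate(pred: Predicate) -> Predicate:
    """Step 2: answer the predicate's question against retrieved evidence."""
    evidence = retrieve_evidence(pred.question)
    verdict = llm_complete(
        f"Question: {pred.question}\nEvidence: {evidence}\n"
        "Answer the question, then state whether the sub-claim is TRUE or FALSE."
    )
    pred.answer = verdict
    pred.holds = "TRUE" in verdict.upper()  # crude heuristic for the sketch
    return pred


def verify_claim(claim: str) -> tuple[bool, str]:
    """Step 3: treat the FOL clause as a conjunction of its grounded predicates."""
    predicates = [ground_predicate(p) for p in claim_to_predicates(claim)]
    label = bool(predicates) and all(p.holds for p in predicates)
    explanation = "\n".join(
        f"{p.name}({', '.join(p.arguments)}): {p.answer}" for p in predicates
    )
    return label, explanation
```

In practice, the two placeholder functions would be wired to an actual LLM client and a search or knowledge-base backend, and the decomposition prompt would carry the in-context demonstrations that the abstract's in-context learning step relies on.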
Related papers
- CHECKWHY: Causal Fact Verification via Argument Structure [19.347690600431463]
CheckWhy is a dataset tailored to a novel causal fact verification task.
CheckWhy consists of over 19K "why" claim-evidence-argument structure triplets with supports, refutes, and not enough info labels.
arXiv Detail & Related papers (2024-08-20T15:03:35Z)
- Evaluating the Reliability of Self-Explanations in Large Language Models [2.8894038270224867]
We evaluate two kinds of such self-explanations - extractive and counterfactual.
Our findings reveal that, while these self-explanations can correlate with human judgement, they do not fully and accurately follow the model's decision process.
We show that this gap can be bridged because prompting LLMs for counterfactual explanations can produce faithful, informative, and easy-to-verify results.
arXiv Detail & Related papers (2024-07-19T17:41:08Z)
- Navigating the Noisy Crowd: Finding Key Information for Claim Verification [19.769771741059408]
We propose EACon, a framework designed to find key information within evidence and verify each aspect of a claim separately.
EACon finds keywords from the claim and employs fuzzy matching to select relevant keywords for each raw evidence piece.
EACon deconstructs the original claim into subclaims, which are then verified against both abstracted and raw evidence individually.
arXiv Detail & Related papers (2024-07-17T09:24:10Z)
- Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate [75.10515686215177]
Large Language Models (LLMs) excel in text generation, but their capability for producing faithful explanations in fact-checking remains underexamined.
We propose the Multi-Agent Debate Refinement (MADR) framework, leveraging multiple LLMs as agents with diverse roles.
MADR ensures that the final explanation undergoes rigorous validation, significantly reducing the likelihood of unfaithful elements and aligning closely with the provided evidence.
arXiv Detail & Related papers (2024-02-12T04:32:33Z)
- FaithLM: Towards Faithful Explanations for Large Language Models [67.29893340289779]
Large Language Models (LLMs) have become proficient in addressing complex tasks by leveraging their internal knowledge and reasoning capabilities.
The black-box nature of these models complicates the task of explaining their decision-making processes.
We introduce FaithLM to explain the decision of LLMs with natural language (NL) explanations.
arXiv Detail & Related papers (2024-02-07T09:09:14Z)
- A Closer Look at the Self-Verification Abilities of Large Language Models in Logical Reasoning [73.77088902676306]
We take a closer look at the self-verification abilities of large language models (LLMs) in the context of logical reasoning.
Our main findings suggest that existing LLMs could struggle to identify fallacious reasoning steps accurately and may fall short of guaranteeing the validity of self-verification methods.
arXiv Detail & Related papers (2023-11-14T07:13:10Z)
- Knowledge-Augmented Language Model Verification [68.6099592486075]
Recent Language Models (LMs) have shown impressive capabilities in generating texts with the knowledge internalized in parameters.
We propose to verify the output and the knowledge of the knowledge-augmented LMs with a separate verifier.
Our results show that the proposed verifier effectively identifies retrieval and generation errors, allowing LMs to provide more factually correct outputs.
arXiv Detail & Related papers (2023-10-19T15:40:00Z)
- EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification [22.785622371421876]
We present a pioneering dataset for multi-hop explainable fact verification.
The dataset contains over 60,000 claims involving 2-hop and 3-hop reasoning, each created by summarizing and modifying information from hyperlinked Wikipedia documents.
We demonstrate a novel baseline system on our EX-FEVER dataset, showcasing document retrieval, explanation generation, and claim verification.
arXiv Detail & Related papers (2023-10-15T06:46:15Z)
- ExClaim: Explainable Neural Claim Verification Using Rationalization [8.369720566612111]
ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
arXiv Detail & Related papers (2023-01-21T08:26:27Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for the verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.