ExClaim: Explainable Neural Claim Verification Using Rationalization
- URL: http://arxiv.org/abs/2301.08914v1
- Date: Sat, 21 Jan 2023 08:26:27 GMT
- Title: ExClaim: Explainable Neural Claim Verification Using Rationalization
- Authors: Sai Gurrapu, Lifu Huang, Feras A. Batarseh
- Abstract summary: ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
- Score: 8.369720566612111
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: With the advent of deep learning, text-generation language models have improved dramatically, producing text of near human-written quality. This can lead to rampant misinformation because content can now be created cheaply and distributed quickly. Automated claim verification methods exist to validate claims, but they lack foundational data and often use mainstream news as evidence sources that are strongly biased towards a specific agenda. Current claim verification methods use deep neural network models and complex algorithms to achieve high classification accuracy, but at the expense of model explainability. The models are black boxes, and their decision-making process and the steps taken to arrive at a final prediction are obscured from the user. We introduce a novel claim verification approach, ExClaim, that attempts to provide an explainable claim verification system with foundational evidence. Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim and justifies the verdict through a natural language explanation (rationale) that describes the model's decision-making process. ExClaim treats the verdict classification task as a question-answering problem and achieves an F1 score of 0.93. It also provides subtask explanations to justify the intermediate outcomes. Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes. Ensuring claim verification systems are assured, rational, and explainable is an essential step toward improving Human-AI trust and the accessibility of black-box systems.
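The abstract does not include code, but the core idea, casting verdict classification as a question-answering problem and pairing the verdict with a natural-language rationale, can be pictured with a minimal Python sketch. The prompt template, the `toy_qa_model` stub, and the 0.5 threshold below are illustrative assumptions, not ExClaim's actual components.

```python
from dataclasses import dataclass

@dataclass
class Verdict:
    label: str       # e.g. "true" / "false"
    rationale: str   # natural-language explanation of the decision

def frame_as_question(claim: str) -> str:
    # ExClaim casts verdict classification as question answering;
    # this exact prompt wording is an assumption made for illustration.
    return f"Is the following claim supported by the evidence? Claim: {claim}"

def verify_claim(claim: str, evidence: str, qa_model) -> Verdict:
    # `qa_model` stands in for any QA model (e.g. a fine-tuned transformer);
    # its dict-returning interface is hypothetical.
    question = frame_as_question(claim)
    answer = qa_model(question=question, context=evidence)
    label = "true" if answer["score"] > 0.5 else "false"
    rationale = (
        f"The evidence passage '{answer['text']}' "
        f"{'supports' if label == 'true' else 'does not support'} the claim."
    )
    return Verdict(label=label, rationale=rationale)

def toy_qa_model(question: str, context: str) -> dict:
    # Toy stand-in so the sketch runs end to end: scores by word overlap.
    overlap = len(set(question.lower().split()) & set(context.lower().split()))
    return {"text": context[:80], "score": min(1.0, overlap / 8)}

if __name__ == "__main__":
    verdict = verify_claim(
        claim="The Eiffel Tower is located in Paris.",
        evidence="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
        qa_model=toy_qa_model,
    )
    print(verdict.label, "-", verdict.rationale)
```

In the paper the rationale is a generated natural-language explanation; the template string above merely stands in for that generation step.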
Related papers
- Fact or Fiction? Improving Fact Verification with Knowledge Graphs through Simplified Subgraph Retrievals [0.0]
We present efficient methods for verifying claims on a dataset where the evidence is in the form of structured knowledge graphs.
By simplifying the evidence retrieval process, we are able to construct models that both require less computational resources and achieve better test-set accuracy.
arXiv Detail & Related papers (2024-08-14T10:46:15Z)
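To make the simplified subgraph retrieval idea above concrete, here is a hedged sketch that gathers the triples within one hop of the entities mentioned in a claim. The toy knowledge graph, the entity set, and the hop-based expansion are assumptions for illustration rather than the paper's actual method.

```python
# Hypothetical one-hop subgraph retrieval over a toy knowledge graph,
# stored as (subject, relation, object) triples.
KG = [
    ("Eiffel Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
    ("Louvre", "located_in", "Paris"),
]

def retrieve_subgraph(claim_entities, triples, hops=1):
    """Collect all triples that touch a claim entity; repeat for `hops` rounds.
    This mirrors the idea of a simplified evidence-retrieval step."""
    frontier = set(claim_entities)
    subgraph = []
    for _ in range(hops):
        new_nodes = set()
        for s, r, o in triples:
            if s in frontier or o in frontier:
                if (s, r, o) not in subgraph:
                    subgraph.append((s, r, o))
                new_nodes.update({s, o})
        frontier |= new_nodes
    return subgraph

print(retrieve_subgraph({"Eiffel Tower"}, KG))
```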
- AFaCTA: Assisting the Annotation of Factual Claim Detection with Reliable LLM Annotators [38.523194864405326]
AFaCTA is a novel framework that assists in the annotation of factual claims.
AFaCTA calibrates its annotation confidence with consistency along three predefined reasoning paths.
Our analyses also result in PoliClaim, a comprehensive claim detection dataset spanning diverse political topics.
arXiv Detail & Related papers (2024-02-16T20:59:57Z)
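The consistency idea behind AFaCTA can be sketched as follows: label the same sentence along several reasoning paths and treat their agreement as the annotation confidence. The three heuristic paths below are invented stand-ins for the LLM-based reasoning paths the framework actually uses.

```python
from collections import Counter

def annotate_with_consistency(sentence: str, reasoning_paths) -> tuple[str, float]:
    # Run the same annotation question through several reasoning paths
    # (e.g. different prompts to an LLM) and use their agreement as a
    # calibrated confidence score.
    votes = [path(sentence) for path in reasoning_paths]
    label, count = Counter(votes).most_common(1)[0]
    confidence = count / len(votes)
    return label, confidence

# Toy reasoning paths; in AFaCTA these would be LLM calls with different prompts.
paths = [
    lambda s: "claim" if "will" in s or "%" in s else "not_claim",
    lambda s: "claim" if any(ch.isdigit() for ch in s) else "not_claim",
    lambda s: "claim" if len(s.split()) > 6 else "not_claim",
]

print(annotate_with_consistency("Unemployment fell by 3% last year.", paths))
```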
- From Chaos to Clarity: Claim Normalization to Empower Fact-Checking [57.024192702939736]
Claim Normalization (aka ClaimNorm) aims to decompose complex and noisy social media posts into more straightforward and understandable forms.
We propose CACN, a pioneering approach that leverages chain-of-thought and claim check-worthiness estimation.
Our experiments demonstrate that CACN outperforms several baselines across various evaluation measures.
arXiv Detail & Related papers (2023-10-22T16:07:06Z)
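A rough sketch of the ClaimNorm pipeline described above: a chain-of-thought style prompt asks a model to restate the post's central claim, and a check-worthiness score filters the result. The prompt wording, the heuristic scorer, and the stub LLM are assumptions made so the example runs on its own.

```python
def build_cot_prompt(post: str) -> str:
    # A chain-of-thought style prompt (wording invented for illustration):
    # the model is asked to reason step by step before emitting the claim.
    return (
        "Read the social media post, think step by step about what is being "
        f"asserted, then state the central claim in one plain sentence.\n\nPost: {post}"
    )

def check_worthiness(claim: str) -> float:
    # Toy check-worthiness estimate: verifiable-sounding claims tend to mention
    # numbers or named entities. A real system would use a trained classifier.
    has_number = any(ch.isdigit() for ch in claim)
    has_proper_noun = any(w[0].isupper() for w in claim.split()[1:])
    return 0.5 * has_number + 0.5 * has_proper_noun

def normalize_post(post: str, llm, threshold: float = 0.5):
    # `llm` is any callable mapping a prompt to text; stubbed out below.
    claim = llm(build_cot_prompt(post))
    score = check_worthiness(claim)
    return claim if score >= threshold else None

def stub_llm(prompt: str) -> str:
    # Stub LLM so the sketch runs without external dependencies.
    return "COVID-19 vaccines were tested on 40000 participants."

print(normalize_post("ppl saying the vax wasnt even tested?? it was, 40k people!!", stub_llm))
```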
- Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models [36.91218391728405]
This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning.
It can verify complex claims and generate explanations without the need for annotated evidence.
Our experiment results indicate that FOLK outperforms strong baselines on three datasets.
arXiv Detail & Related papers (2023-10-08T18:04:05Z)
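The FOLK pipeline can be pictured as: decompose the claim into first-order-logic-style predicates, ground each predicate, and combine the results into a verdict plus an explanation. The fixed decomposition, the toy fact set, and the exact-match grounding below are placeholders; per its title, the paper performs these steps with large language models.

```python
from dataclasses import dataclass

@dataclass
class Predicate:
    text: str        # a sub-claim expressed as a checkable statement
    holds: bool      # whether grounding found support for it

def decompose(claim: str) -> list[str]:
    # In FOLK-style reasoning a model translates the claim into
    # first-order-logic-style predicates; this list is hand-written.
    return [
        "Marie Curie won a Nobel Prize in Physics.",
        "Marie Curie won a Nobel Prize in Chemistry.",
    ]

def ground(predicate: str, knowledge: set[str]) -> Predicate:
    # Grounding step: check each predicate against available knowledge.
    # Exact matching over a toy fact set stands in for knowledge-grounded
    # reasoning with retrieved or model-provided evidence.
    return Predicate(text=predicate, holds=predicate in knowledge)

def verify(claim: str, knowledge: set[str]):
    predicates = [ground(p, knowledge) for p in decompose(claim)]
    verdict = all(p.holds for p in predicates)          # logical conjunction
    explanation = "; ".join(
        f"'{p.text}' is {'supported' if p.holds else 'not supported'}"
        for p in predicates
    )
    return verdict, explanation

facts = {
    "Marie Curie won a Nobel Prize in Physics.",
    "Marie Curie won a Nobel Prize in Chemistry.",
}
print(verify("Marie Curie won Nobel Prizes in two different sciences.", facts))
```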
- Give Me More Details: Improving Fact-Checking with Latent Retrieval [58.706972228039604]
Evidence plays a crucial role in automated fact-checking.
Existing fact-checking systems either assume the evidence sentences are given or use the search snippets returned by the search engine.
We propose to incorporate full text from source documents as evidence and introduce two enriched datasets.
arXiv Detail & Related papers (2023-05-25T15:01:19Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
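A minimal sketch of the Faithfulness-through-Counterfactuals check described above: negate the logic expressed in the explanation, and test whether the model's prediction on the resulting counterfactual is consistent with that logic. The string-level negation and the stub NLI model are stand-ins for the logical-form manipulation and trained model the paper uses.

```python
def make_counterfactual(hypothesis: str) -> str:
    # Build a counterfactual hypothesis by negating the predicate the
    # explanation relies on. Real systems construct this from the logical
    # form of the explanation; simple string negation is a stand-in.
    return hypothesis.replace("can", "cannot") if "can" in hypothesis else "not " + hypothesis

def is_faithful(nli_model, premise: str, hypothesis: str, explanation_label: str) -> bool:
    # If the explanation's logic truly drives the prediction, negating that
    # logic should flip the model's label on the counterfactual.
    original = nli_model(premise, hypothesis)
    counterfactual = nli_model(premise, make_counterfactual(hypothesis))
    if explanation_label == "entailment":
        return original == "entailment" and counterfactual == "contradiction"
    return original == counterfactual  # otherwise expect no flip

def stub_nli(premise: str, hypothesis: str) -> str:
    # Stub NLI model so the sketch runs; a real model would be a trained classifier.
    return "contradiction" if "cannot" in hypothesis else "entailment"

print(is_faithful(stub_nli, "All birds can fly and a robin is a bird.",
                  "A robin can fly.", "entailment"))
```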
- Factual Error Correction for Abstractive Summaries Using Entity Retrieval [57.01193722520597]
We propose RFEC, an efficient factual error correction system based on an entity-retrieval post-editing process.
RFEC retrieves the evidence sentences from the original document by comparing the sentences with the target summary.
Next, RFEC detects the entity-level errors in the summaries by considering the evidence sentences and substitutes the wrong entities with the accurate entities from the evidence sentences.
arXiv Detail & Related papers (2022-04-18T11:35:02Z)
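The two RFEC steps described above, overlap-based evidence retrieval followed by entity-level substitution, can be sketched as below. The capitalized-word "entity" heuristic and overlap scorer are deliberately crude stand-ins for the entity tagger and learned components a real system would use.

```python
import re

def retrieve_evidence(summary: str, document_sentences: list[str], k: int = 1) -> list[str]:
    # Score each source sentence by word overlap with the summary and keep
    # the top-k as evidence; the paper's retrieval is learned, this is a toy.
    def overlap(sent: str) -> int:
        return len(set(sent.lower().split()) & set(summary.lower().split()))
    return sorted(document_sentences, key=overlap, reverse=True)[:k]

def correct_entities(summary: str, evidence: list[str]) -> str:
    # Entity-level post-editing: if a capitalized token (a crude "entity")
    # in the summary never appears in the evidence, swap it for an entity
    # that does. Real systems use an entity tagger and a swap classifier.
    evidence_entities = [w for s in evidence for w in re.findall(r"[A-Z][a-z]+", s)]
    corrected = []
    for word in summary.split():
        if word[0].isupper() and word.strip(".,") not in evidence_entities and evidence_entities:
            corrected.append(evidence_entities[0])
        else:
            corrected.append(word)
    return " ".join(corrected)

doc = ["Berlin is the capital of Germany.", "It has about 3.7 million residents."]
summary = "Munich is the capital of Germany."
evidence = retrieve_evidence(summary, doc)
print(correct_entities(summary, evidence))  # -> "Berlin is the capital of Germany."
```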
- GERE: Generative Evidence Retrieval for Fact Verification [57.78768817972026]
We propose GERE, the first system that retrieves evidence in a generative fashion.
The experimental results on the FEVER dataset show that GERE achieves significant improvements over the state-of-the-art baselines.
arXiv Detail & Related papers (2022-04-12T03:49:35Z)
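"Retrieving evidence in a generative fashion" can be pictured as a model that generates candidate document titles for a claim instead of scoring every document in the corpus. The generator interface and the constrained filtering against a title index below are assumptions for illustration.

```python
def generate_evidence_titles(claim: str, generator, title_index: set[str], beams: int = 3) -> list[str]:
    # Generative retrieval: a sequence-to-sequence model generates candidate
    # document titles conditioned on the claim. Keeping only titles that exist
    # in the index mirrors the constrained decoding a real system would use.
    candidates = generator(claim, num_return_sequences=beams)  # hypothetical interface
    return [t for t in candidates if t in title_index]

def stub_generator(claim: str, num_return_sequences: int = 3) -> list[str]:
    # Stub generator so the sketch runs without a trained model.
    return ["Eiffel Tower", "Paris", "Tower Bridge"][:num_return_sequences]

index = {"Eiffel Tower", "Paris", "Louvre"}
print(generate_evidence_titles("The Eiffel Tower is in Paris.", stub_generator, index))
```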
- DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking [46.13738685855884]
We show that current fact-checking systems are vulnerable to three categories of realistic challenges.
We present a system designed to be resilient to these "attacks" using multiple pointer networks for document selection.
We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval.
arXiv Detail & Related papers (2020-04-27T15:18:49Z)
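The pointer-network document selection mentioned above can be approximated by a loop that, at each step, "points" to the index of the best remaining document given the claim and what was already chosen. The overlap-based scoring is a toy stand-in for the learned pointer network.

```python
def select_documents(claim: str, documents: list[str], k: int = 2) -> list[int]:
    # Pointer-style selection: repeatedly point to the index of the best
    # remaining document; a trained pointer network learns this scoring.
    claim_words = set(claim.lower().split())
    selected: list[int] = []
    for _ in range(min(k, len(documents))):
        best_i, best_score = -1, -1
        for i, doc in enumerate(documents):
            if i in selected:
                continue
            score = len(claim_words & set(doc.lower().split()))
            if score > best_score:
                best_i, best_score = i, score
        selected.append(best_i)
    return selected

docs = [
    "The Great Wall of China is visible from low Earth orbit only with aid.",
    "Paris is the capital city of France.",
    "The Eiffel Tower in Paris was completed in 1889.",
]
print(select_documents("The Eiffel Tower is in Paris, France.", docs, k=2))  # -> [2, 1]
```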
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the fact-checking process: generating explanations for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
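The finding that optimising both objectives together beats training them separately can be summarised by a single combined objective: one scalar loss mixes the veracity-classification loss and the explanation-generation loss so both tasks update shared parameters at every step. The fixed weighting below is an assumption for illustration, not the paper's actual training setup.

```python
def joint_loss(veracity_loss: float, explanation_loss: float, alpha: float = 0.5) -> float:
    # Joint optimisation: a single scalar combines the two task losses,
    # so gradients from both objectives flow into the shared encoder together.
    # Separate training would instead minimise each term on its own.
    return alpha * veracity_loss + (1 - alpha) * explanation_loss

print(joint_loss(veracity_loss=0.8, explanation_loss=1.2))  # -> 1.0
```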