Ask to Know More: Generating Counterfactual Explanations for Fake Claims
- URL: http://arxiv.org/abs/2206.04869v2
- Date: Tue, 14 Jun 2022 05:16:10 GMT
- Title: Ask to Know More: Generating Counterfactual Explanations for Fake Claims
- Authors: Shih-Chieh Dai, Yi-Li Hsu, Aiping Xiong, and Lun-Wei Ku
- Abstract summary: We propose elucidating fact checking predictions using counterfactual explanations to help people understand why a piece of news was identified as fake.
In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately.
Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
- Score: 11.135087647482145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automated fact checking systems have been proposed that quickly provide
veracity prediction at scale to mitigate the negative influence of fake news on
people and on public opinion. However, most studies focus on veracity
classifiers of those systems, which merely predict the truthfulness of news
articles. We posit that effective fact checking also relies on people's
understanding of the predictions. In this paper, we propose elucidating fact
checking predictions using counterfactual explanations to help people
understand why a specific piece of news was identified as fake. In this work,
generating counterfactual explanations for fake news involves three steps:
asking good questions, finding contradictions, and reasoning appropriately. We
frame this research question as contradicted entailment reasoning through
question answering (QA). We first ask questions towards the false claim and
retrieve potential answers from the relevant evidence documents. Then, we
identify the most contradictory answer to the false claim using an
entailment classifier. Finally, a counterfactual explanation is created using a
matched QA pair with three different counterfactual explanation forms.
Experiments are conducted on the FEVER dataset for both system and human
evaluations. Results suggest that the proposed approach generates the most
helpful explanations compared to state-of-the-art methods.
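To make the described pipeline concrete, below is a minimal sketch of the QA-then-entailment loop from the abstract, built on off-the-shelf Hugging Face pipelines. The model choices, the pre-supplied question list (the paper generates questions automatically), and the single explanation template are illustrative assumptions, not the components used in the paper.

```python
# Minimal sketch (not the authors' implementation): answer questions over the
# evidence, then score contradiction against the claim with an NLI classifier.
from transformers import pipeline

# Assumed model choices; the paper's actual QA and entailment components may differ.
qa = pipeline("question-answering", model="distilbert-base-cased-distilled-squad")
nli = pipeline("text-classification", model="roberta-large-mnli")

def counterfactual_explanation(claim, questions, evidence):
    """Build a simple counterfactual explanation from the QA pair whose
    answer most strongly contradicts the claim."""
    best = None  # (contradiction score, question, answer)
    for question in questions:
        # Step 1: retrieve a candidate answer from the evidence document.
        answer = qa(question=question, context=evidence)["answer"]
        # Step 2: score how strongly the evidence-based answer contradicts the claim.
        scores = nli({"text": claim, "text_pair": f"{question} {answer}"}, top_k=None)
        contradiction = next(s["score"] for s in scores if s["label"] == "CONTRADICTION")
        if best is None or contradiction > best[0]:
            best = (contradiction, question, answer)
    # Step 3: wrap the most contradictory QA pair in one illustrative template
    # (the paper evaluates three different counterfactual explanation forms).
    _, question, answer = best
    return f"The claim would hold only if the answer to '{question}' were not '{answer}'."

print(counterfactual_explanation(
    claim="The Eiffel Tower is located in Berlin.",
    questions=["Where is the Eiffel Tower located?"],
    evidence="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
))
```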
Related papers
- Explainable Fake News Detection With Large Language Model via Defense Among Competing Wisdom [19.027690459807197]
We propose a novel defense-based explainable fake news detection framework.
Specifically, we first propose an evidence extraction module that splits the wisdom of crowds into two competing parties and detects salient evidence for each.
We then design a prompt-based module that uses a large language model to generate justifications by inferring reasons toward the two possible veracity labels.
arXiv Detail & Related papers (2024-05-06T11:24:13Z)
- QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z)
- ExClaim: Explainable Neural Claim Verification Using Rationalization [8.369720566612111]
ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
arXiv Detail & Related papers (2023-01-21T08:26:27Z)
- Logical Satisfiability of Counterfactuals for Faithful Explanations in NLI [60.142926537264714]
We introduce the methodology of Faithfulness-through-Counterfactuals.
It generates a counterfactual hypothesis based on the logical predicates expressed in the explanation.
It then evaluates if the model's prediction on the counterfactual is consistent with that expressed logic.
arXiv Detail & Related papers (2022-05-25T03:40:59Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans (those that are logically consistent with the input) usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
arXiv Detail & Related papers (2022-05-06T17:57:58Z)
- Features of Explainability: How users understand counterfactual and causal explanations for categorical and continuous features in XAI [10.151828072611428]
Counterfactual explanations are increasingly used to address interpretability, recourse, and bias in AI decisions.
We tested the effects of counterfactual and causal explanations on the objective accuracy of users' predictions.
We also found that users understand explanations referring to categorical features more readily than those referring to continuous features.
arXiv Detail & Related papers (2022-04-21T15:01:09Z)
- Explainable Fact-checking through Question Answering [17.1138216746642]
We propose generating questions and answers from claims and answering the same questions from evidence.
We also propose an answer comparison model with an attention mechanism attached to each question.
Experimental results show that the proposed model can achieve state-of-the-art performance while providing reasonable explainable capabilities.
arXiv Detail & Related papers (2021-10-11T15:55:11Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- SCOUT: Self-aware Discriminant Counterfactual Explanations [78.79534272979305]
The problem of counterfactual visual explanations is considered.
A new family of discriminant explanations is introduced.
The resulting counterfactual explanations are optimization-free and thus much faster than previous methods.
arXiv Detail & Related papers (2020-04-16T17:05:49Z)
- Reasoning on Knowledge Graphs with Debate Dynamics [27.225048123690243]
We propose a novel method for automatic reasoning on knowledge graphs based on debate dynamics.
The main idea is to frame the task of triple classification as a debate game between two reinforcement learning agents.
We benchmark our method on the triple classification and link prediction task.
arXiv Detail & Related papers (2020-01-02T14:44:23Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.