Explainable Fact-checking through Question Answering
- URL: http://arxiv.org/abs/2110.05369v1
- Date: Mon, 11 Oct 2021 15:55:11 GMT
- Title: Explainable Fact-checking through Question Answering
- Authors: Jing Yang, Didier Vega-Oliveros, Taís Seibt and Anderson Rocha
- Abstract summary: We propose generating questions and answers from claims and answering the same questions from evidence.
We also propose an answer comparison model with an attention mechanism attached to each question.
Experimental results show that the proposed model can achieve state-of-the-art performance while providing reasonable explanation capabilities.
- Score: 17.1138216746642
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Misleading or false information has been creating chaos in some places around
the world. To mitigate this issue, many researchers have proposed automated
fact-checking methods to fight the spread of fake news. However, most methods
cannot explain the reasoning behind their decisions, failing to build trust
between machines and humans using such technology. Trust is essential for
fact-checking to be applied in the real world. Here, we address fact-checking
explainability through question answering. In particular, we propose generating
questions and answers from claims and answering the same questions from
evidence. We also propose an answer comparison model with an attention
mechanism attached to each question. Leveraging question answering as a proxy,
we break down automated fact-checking into several steps -- this separation
aids models' explainability as it allows for more detailed analysis of their
decision-making processes. Experimental results show that the proposed model
can achieve state-of-the-art performance while providing reasonable explanation
capabilities.
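The abstract describes the pipeline only at a high level. The sketch below is a minimal illustration, not the authors' released code, of how an answer-comparison model with per-question attention could be wired up: answer embeddings obtained from the claim and from the evidence are compared question by question, and an attention layer weights each question's contribution to the verdict. The embedding dimension, label set, and all module names here are assumptions made for illustration.

```python
# Illustrative sketch only (assumed architecture, not the paper's implementation).
import torch
import torch.nn as nn


class AnswerComparisonModel(nn.Module):
    def __init__(self, emb_dim: int = 256, num_labels: int = 3):
        super().__init__()
        # Scores how well each evidence answer matches the corresponding claim answer.
        self.compare = nn.Sequential(
            nn.Linear(emb_dim * 2, emb_dim), nn.ReLU(), nn.Linear(emb_dim, emb_dim)
        )
        # One attention scalar per question, so each question's contribution
        # to the final verdict can be inspected afterwards.
        self.attn = nn.Linear(emb_dim, 1)
        self.classifier = nn.Linear(emb_dim, num_labels)  # e.g. supported / refuted / not enough info

    def forward(self, claim_answers: torch.Tensor, evidence_answers: torch.Tensor):
        # claim_answers, evidence_answers: (num_questions, emb_dim) answer embeddings.
        pairs = torch.cat([claim_answers, evidence_answers], dim=-1)
        h = self.compare(pairs)                        # (num_questions, emb_dim)
        weights = torch.softmax(self.attn(h), dim=0)   # (num_questions, 1)
        pooled = (weights * h).sum(dim=0)              # attention-weighted pooling
        return self.classifier(pooled), weights.squeeze(-1)


if __name__ == "__main__":
    # Toy run with random answer embeddings for four generated questions.
    model = AnswerComparisonModel()
    logits, question_weights = model(torch.randn(4, 256), torch.randn(4, 256))
    print(logits.shape, question_weights)
```

The per-question attention weights are the inspectable part of such a design: a question whose claim-side and evidence-side answers disagree and that receives a high weight points to where the claim breaks down.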
Related papers
- Don't Just Say "I don't know"! Self-aligning Large Language Models for Responding to Unknown Questions with Explanations [70.6395572287422]
The self-alignment method is capable of not only refusing to answer but also providing an explanation of why unknown questions are unanswerable.
We conduct disparity-driven self-curation to select qualified data for fine-tuning the LLM itself, aligning its responses to unknown questions as desired.
arXiv Detail & Related papers (2024-02-23T02:24:36Z)
- A Comparative and Experimental Study on Automatic Question Answering Systems and its Robustness against Word Jumbling [0.49157446832511503]
Question-answer generation is highly relevant because a frequently asked questions (FAQ) list can only contain a finite number of questions.
A model that can perform question-answer generation may be able to answer completely new questions that are within the scope of the data.
In commercial applications, it can be used to increase customer satisfaction and ease of use.
However, much of the data is generated by humans, so it is susceptible to human error, which can adversely affect the model's performance.
arXiv Detail & Related papers (2023-11-27T03:17:09Z)
- Counterfactual Explanation Generation with s(CASP) [2.249916681499244]
Machine learning models that automate decision-making are increasingly being used in consequential areas such as loan approvals, pretrial bail, hiring, and many more.
Unfortunately, most of these models are black boxes, i.e., they are unable to reveal how they reach their prediction decisions.
This paper focuses on the problem of automatically generating counterfactual explanations for such decisions.
arXiv Detail & Related papers (2023-10-23T02:05:42Z)
- QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z)
- Measuring and Narrowing the Compositionality Gap in Language Models [116.5228850227024]
We measure how often models can correctly answer all sub-problems but not generate the overall solution.
We present a new method, self-ask, that further improves on chain of thought.
arXiv Detail & Related papers (2022-10-07T06:50:23Z)
- Ask to Know More: Generating Counterfactual Explanations for Fake Claims [11.135087647482145]
We propose elucidating fact checking predictions using counterfactual explanations to help people understand why a piece of news was identified as fake.
In this work, generating counterfactual explanations for fake news involves three steps: asking good questions, finding contradictions, and reasoning appropriately.
Results suggest that the proposed approach generates the most helpful explanations compared to state-of-the-art methods.
arXiv Detail & Related papers (2022-06-10T04:42:00Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Single-Turn Debate Does Not Help Humans Answer Hard Reading-Comprehension Questions [29.932543276414602]
We build a dataset of single arguments for both a correct and an incorrect answer option in a debate-style set-up.
We use long contexts -- humans familiar with the context write convincing explanations for pre-selected correct and incorrect answers.
We test if those explanations allow humans who have not read the full context to more accurately determine the correct answer.
arXiv Detail & Related papers (2022-04-11T15:56:34Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.