QACHECK: A Demonstration System for Question-Guided Multi-Hop
Fact-Checking
- URL: http://arxiv.org/abs/2310.07609v1
- Date: Wed, 11 Oct 2023 15:51:53 GMT
- Title: QACHECK: A Demonstration System for Question-Guided Multi-Hop
Fact-Checking
- Authors: Liangming Pan, Xinyuan Lu, Min-Yen Kan, Preslav Nakov
- Abstract summary: We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Fact-checking real-world claims often requires complex, multi-step reasoning
due to the absence of direct evidence to support or refute them. However,
existing fact-checking systems often lack transparency in their
decision-making, making it challenging for users to comprehend their reasoning
process. To address this, we propose the Question-guided Multi-hop
Fact-Checking (QACHECK) system, which guides the model's reasoning process by
asking a series of questions critical for verifying a claim. QACHECK has five
key modules: a claim verifier, a question generator, a question-answering
module, a QA validator, and a reasoner. Users can input a claim into QACHECK,
which then predicts its veracity and provides a comprehensive report detailing
its reasoning process, guided by a sequence of (question, answer) pairs.
QACHECK also provides the source of evidence supporting each question,
fostering a transparent, explainable, and user-friendly fact-checking process.
A recorded video of QACHECK is at https://www.youtube.com/watch?v=ju8kxSldM64
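The five-module pipeline described in the abstract can be sketched as a simple question-guided loop. The module names, method signatures, and stopping rule below are illustrative assumptions for exposition, not the authors' released implementation:

```python
# Hypothetical sketch of QACHECK's question-guided multi-hop loop.
# All module interfaces here are assumptions, not the paper's actual code.

def qacheck(claim, modules, max_hops=5):
    """Iteratively ask and answer questions until the claim can be verified."""
    context = []  # accumulated (question, answer, evidence) triples
    for _ in range(max_hops):
        # 1. Claim verifier: is the collected context already sufficient?
        if modules["verifier"].has_enough_info(claim, context):
            break
        # 2. Question generator: propose the next question critical to the claim.
        question = modules["generator"].next_question(claim, context)
        # 3. Question-answering module: answer it and cite supporting evidence.
        answer, evidence = modules["qa"].answer(question)
        # 4. QA validator: keep only (question, answer) pairs that are useful.
        if modules["validator"].is_useful(claim, question, answer):
            context.append((question, answer, evidence))
    # 5. Reasoner: predict the veracity and return the QA-pair reasoning report.
    return modules["reasoner"].verdict(claim, context)
```

The loop structure makes the transparency property concrete: the returned context is exactly the sequence of (question, answer) pairs shown to the user, each with its evidence source.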
Related papers
- FACTIFY-5WQA: 5W Aspect-based Fact Verification through Question Answering
A human fact-checker generally follows several logical steps to verify the verisimilitude of a claim.
It is necessary to have an aspect-based (delineating which part(s) are true and which are false) explainable system.
In this paper, we propose a 5W framework for question-answer-based fact explainability.
arXiv Detail & Related papers (2023-05-07T16:52:21Z)
- CREPE: Open-Domain Question Answering with False Presuppositions
We introduce CREPE, a QA dataset containing a natural distribution of presupposition failures from online information-seeking forums.
We find that 25% of questions contain false presuppositions, and provide annotations for these presuppositions and their corrections.
We show that adaptations of existing open-domain QA models can find presuppositions moderately well, but struggle when predicting whether a presupposition is factually correct.
arXiv Detail & Related papers (2022-11-30T18:54:49Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
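Deriving a claim's veracity from the answers to its yes-no subquestions can be illustrated with a toy aggregation rule. The rule below (every subquestion must receive its claim-consistent answer) is a simplifying assumption for illustration, not ClaimDecomp's actual method:

```python
# Toy veracity aggregation over yes-no subquestions.
# The all-must-match rule is an illustrative assumption, not the paper's model.

def derive_veracity(subquestion_answers):
    """subquestion_answers: list of (question, answer, expected) triples,
    where `expected` is the answer consistent with the claim being true."""
    if all(ans == expected for _, ans, expected in subquestion_answers):
        return "supported"
    return "refuted"
```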
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Do Answers to Boolean Questions Need Explanations? Yes
We release a new set of annotations marking the evidence in existing TyDi QA and BoolQ datasets.
We show that our annotations can be used to train a model that extracts improved evidence spans.
arXiv Detail & Related papers (2021-12-14T22:40:28Z)
- Explainable Fact-checking through Question Answering
We propose generating questions and answers from claims and answering the same questions from evidence.
We also propose an answer comparison model with an attention mechanism attached to each question.
Experimental results show that the proposed model can achieve state-of-the-art performance while providing reasonable explainable capabilities.
arXiv Detail & Related papers (2021-10-11T15:55:11Z)
- Robustifying Multi-hop QA through Pseudo-Evidentiality Training
We study the bias problem of multi-hop question answering models: answering correctly without correct reasoning.
We propose a new approach to learn evidentiality, deciding whether an answer prediction is supported by the correct evidence.
arXiv Detail & Related papers (2021-07-07T14:15:14Z)
- REM-Net: Recursive Erasure Memory Network for Commonsense Evidence Refinement
REM-Net is equipped with a module that refines the evidence by erasing low-quality evidence that does not contribute to answering the question.
Instead of retrieving evidence from existing knowledge bases, REM-Net leverages a pre-trained generative model to generate candidate evidence customized for the question.
Experimental results demonstrate the strong performance of REM-Net and show that the refined evidence is explainable.
arXiv Detail & Related papers (2020-12-24T10:07:32Z)
- Generating Fact Checking Briefs
We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check.
We develop QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers.
We show that fact checking with briefs -- in particular QABriefs -- increases the accuracy of crowdworkers by 10% while slightly decreasing the time taken.
arXiv Detail & Related papers (2020-11-10T23:02:47Z)
- Unsupervised Question Decomposition for Question Answering
We propose One-to-N Unsupervised Sequence transduction (ONUS), an algorithm that learns to map one hard, multi-hop question to many simpler, single-hop sub-questions.
We show large QA improvements on HotpotQA over a strong baseline on the original, out-of-domain, and multi-hop dev sets.
arXiv Detail & Related papers (2020-02-22T19:40:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.