Generating Fact Checking Briefs
- URL: http://arxiv.org/abs/2011.05448v1
- Date: Tue, 10 Nov 2020 23:02:47 GMT
- Title: Generating Fact Checking Briefs
- Authors: Angela Fan, Aleksandra Piktus, Fabio Petroni, Guillaume Wenzek,
Marzieh Saeidi, Andreas Vlachos, Antoine Bordes, Sebastian Riedel
- Abstract summary: We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check.
We develop QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers.
We show that fact checking with briefs -- in particular QABriefs -- increases the accuracy of crowdworkers by 10% while slightly decreasing the time taken.
- Score: 97.82546239639964
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fact checking at scale is difficult -- while the number of active fact
checking websites is growing, it remains too small for the needs of the
contemporary media ecosystem. However, despite good intentions, contributions
from volunteers are often error-prone, and thus in practice restricted to claim
detection. We investigate how to increase the accuracy and efficiency of fact
checking by providing information about the claim before performing the check,
in the form of natural language briefs. We investigate passage-based briefs,
containing a relevant passage from Wikipedia, entity-centric ones consisting of
Wikipedia pages of mentioned entities, and Question-Answering Briefs, with
questions decomposing the claim, and their answers. To produce QABriefs, we
develop QABriefer, a model that generates a set of questions conditioned on the
claim, searches the web for evidence, and generates answers. To train its
components, we introduce QABriefDataset which we collected via crowdsourcing.
We show that fact checking with briefs -- in particular QABriefs -- increases
the accuracy of crowdworkers by 10% while slightly decreasing the time taken.
For volunteer (unpaid) fact checkers, QABriefs slightly increase accuracy and
reduce the time required by around 20%.
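The abstract describes a three-stage pipeline for producing QABriefs. Below is a minimal sketch of that flow in Python; `generate_questions`, `search_evidence`, and `generate_answer` are hypothetical stand-ins for the paper's fine-tuned question generation, web search, and answer generation components, not its actual API.

```python
# Illustrative sketch of the QABrief flow from the abstract: generate
# questions conditioned on the claim, retrieve web evidence for each
# question, then generate an answer from that evidence. All names below
# are hypothetical placeholders, not the paper's implementation.
from dataclasses import dataclass

@dataclass
class QAPair:
    question: str
    answer: str

def generate_questions(claim: str) -> list[str]:
    """Placeholder: a seq2seq question generator conditioned on the claim."""
    raise NotImplementedError

def search_evidence(question: str, k: int = 5) -> list[str]:
    """Placeholder: a web search returning k evidence passages."""
    raise NotImplementedError

def generate_answer(question: str, passages: list[str]) -> str:
    """Placeholder: an evidence-conditioned answer generator."""
    raise NotImplementedError

def make_qabrief(claim: str) -> list[QAPair]:
    """Assemble a QABrief: questions decomposing the claim, with answers."""
    return [
        QAPair(q, generate_answer(q, search_evidence(q)))
        for q in generate_questions(claim)
    ]
```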
Related papers
- Fact or Fiction? Improving Fact Verification with Knowledge Graphs through Simplified Subgraph Retrievals [0.0]
We present efficient methods for verifying claims on a dataset where the evidence is in the form of structured knowledge graphs.
By simplifying the evidence retrieval process, we are able to construct models that both require less computational resources and achieve better test-set accuracy.
arXiv Detail & Related papers (2024-08-14T10:46:15Z)
- QACHECK: A Demonstration System for Question-Guided Multi-Hop Fact-Checking [68.06355980166053]
We propose the Question-guided Multi-hop Fact-Checking (QACHECK) system.
It guides the model's reasoning process by asking a series of questions critical for verifying a claim.
It provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.
arXiv Detail & Related papers (2023-10-11T15:51:53Z)
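One way to picture the question-guided loop that the QACHECK summary describes: iteratively ask the next question needed to verify the claim, answer it with attributed evidence, and stop once a verdict can be reached. The functions `next_question`, `answer_with_source`, and `decide_verdict` below are assumed placeholders for illustration, not the QACHECK system's interface.

```python
# Sketch of a question-guided multi-hop verification loop in the spirit
# of QACHECK. The three component functions are hypothetical stubs.
def next_question(claim: str, history: list[tuple[str, str, str]]) -> str:
    raise NotImplementedError  # proposes the next critical question

def answer_with_source(question: str) -> tuple[str, str]:
    raise NotImplementedError  # returns (answer, evidence_source)

def decide_verdict(claim: str, history: list[tuple[str, str, str]]):
    raise NotImplementedError  # verdict string if decidable, else None

def verify(claim: str, max_hops: int = 5) -> str:
    history: list[tuple[str, str, str]] = []  # (question, answer, source)
    for _ in range(max_hops):
        q = next_question(claim, history)
        answer, source = answer_with_source(q)  # each answer is attributed
        history.append((q, answer, source))
        verdict = decide_verdict(claim, history)
        if verdict is not None:
            return verdict  # e.g. "supported" or "refuted"
    return "not enough info"
```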
- Chain-of-Verification Reduces Hallucination in Large Language Models [80.99318041981776]
We study the ability of language models to deliberate on the responses they give in order to correct their mistakes.
We develop the Chain-of-Verification (CoVe) method whereby the model first drafts an initial response.
We show CoVe decreases hallucinations across a variety of tasks, from list-based questions from Wikidata to closed book MultiSpanQA.
arXiv Detail & Related papers (2023-09-20T17:50:55Z)
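As a sketch of the CoVe recipe summarized above (draft, verify, revise), here is one way the steps compose. `llm` stands in for any text-generation call and the prompt wording is illustrative, not the paper's.

```python
# Sketch of a Chain-of-Verification pass: draft an answer, generate
# verification questions about it, answer those independently, then
# revise the draft. `llm` is a placeholder for any text-generation call.
from typing import Callable

def chain_of_verification(query: str, llm: Callable[[str], str]) -> str:
    # 1) Draft an initial response.
    draft = llm(f"Answer the question:\n{query}")
    # 2) Plan verification questions that probe the facts in the draft.
    questions = llm(
        f"List verification questions, one per line, that check the facts in:\n{draft}"
    ).splitlines()
    # 3) Answer each question independently of the draft, so the draft's
    #    errors do not leak into the checks.
    checks = [(q, llm(f"Answer concisely:\n{q}")) for q in questions if q.strip()]
    # 4) Revise the draft in light of the verification Q&A.
    findings = "\n".join(f"Q: {q}\nA: {a}" for q, a in checks)
    return llm(
        f"Question:\n{query}\n\nDraft answer:\n{draft}\n\n"
        f"Verification Q&A:\n{findings}\n\nWrite a corrected final answer."
    )
```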
- MythQA: Query-Based Large-Scale Check-Worthy Claim Detection through Multi-Answer Open-Domain Question Answering [8.70509665552136]
Check-worthy claim detection aims to surface plausibly false claims for downstream fact-checking systems or human experts to check.
Much effort has gone into identifying check-worthy claims from small sets of pre-collected claims, but how to efficiently detect check-worthy claims directly from a large-scale information source, such as Twitter, remains underexplored.
We introduce MythQA, a new multi-answer open-domain question answering (QA) task that involves contradictory stance mining for query-based large-scale check-worthy claim detection.
arXiv Detail & Related papers (2023-07-21T18:35:24Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
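To make the decomposition idea concrete, here is a toy example of the kind of yes-no subquestions such an approach might produce for a complex claim. The claim and questions are invented for illustration and are not drawn from the ClaimDecomp dataset.

```python
# Invented example of decomposing a complex claim into yes-no
# subquestions whose answers bear on its veracity (illustration only).
claim = "The new policy cut unemployment in half within a year."
subquestions = [
    "Was the policy actually enacted?",                                   # literal
    "Did unemployment fall after the policy took effect?",               # literal
    "Did unemployment fall by roughly half within one year?",            # literal
    "Is the drop attributable to the policy rather than other factors?", # implied
]
# The claim's veracity can then be derived from the answers: any "no"
# means the claim is not fully supported.
```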
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- That is a Known Lie: Detecting Previously Fact-Checked Claims [34.30218503006579]
A large number of fact-checked claims has accumulated, and politicians like to repeat their favorite statements, true or false, over and over again.
Recognizing that a claim has already been fact-checked saves effort and avoids wasting time on repeated checks.
arXiv Detail & Related papers (2020-05-12T21:25:37Z)
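Detecting previously fact-checked claims is, at heart, a retrieval problem: match an incoming claim against a database of already-checked ones. Below is a self-contained toy illustration using token-overlap (Jaccard) similarity; a real system would use stronger learned lexical or dense rankers, and the example claims are invented.

```python
# Toy illustration of matching an incoming claim against previously
# fact-checked claims via Jaccard token overlap (invented examples).
def tokens(text: str) -> set[str]:
    """Lowercased word set with basic punctuation stripped."""
    return {w.strip(".,!?").lower() for w in text.split()}

def jaccard(a: str, b: str) -> float:
    sa, sb = tokens(a), tokens(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

def find_checked(claim: str, database: list[str], threshold: float = 0.5):
    """Return (best_match, score) if some checked claim is similar enough."""
    best = max(database, key=lambda c: jaccard(claim, c))
    score = jaccard(claim, best)
    return (best, score) if score >= threshold else (None, score)

checked = [
    "The Eiffel Tower was completed in 1889.",
    "Vaccines cause autism.",  # an already-debunked claim
]
print(find_checked("vaccines cause autism in children", checked))
# -> ('Vaccines cause autism.', 0.6)
```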