How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation
- URL: http://arxiv.org/abs/2401.15312v1
- Date: Sat, 27 Jan 2024 06:06:16 GMT
- Title: How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation
- Authors: Wei-Yu Kao and An-Zi Yen
- Abstract summary: This paper explores the novel task of flaw-oriented fact-checking, including aspect generation and flaw identification.
We also introduce RefuteClaim, a new framework designed specifically for this task.
Given the absence of an existing dataset, we present FlawCheck, a dataset created by extracting and transforming insights from expert reviews into relevant aspects and identified flaws.
- Score: 4.376598435975689
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Automated fact-checking is a crucial task in the governance of internet
content. Although various studies utilize advanced models to tackle this issue,
a significant gap persists in addressing complex real-world rumors and
deceptive claims. To address this challenge, this paper explores the novel task
of flaw-oriented fact-checking, including aspect generation and flaw
identification. We also introduce RefuteClaim, a new framework designed
specifically for this task. Given the absence of an existing dataset, we
present FlawCheck, a dataset created by extracting and transforming insights
from expert reviews into relevant aspects and identified flaws. The
experimental results underscore the efficacy of RefuteClaim, particularly in
classifying and elucidating false claims.
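The abstract outlines a two-stage task, aspect generation followed by flaw identification, which RefuteClaim then uses to classify and explain false claims. The sketch below is a minimal, hypothetical illustration of that flow, assuming an injected `call_llm` hook; the prompts, function names, and the `AspectFlaw` structure are illustrative assumptions, not the paper's actual implementation.

```python
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class AspectFlaw:
    aspect: str       # e.g. "statistical evidence" or "source credibility"
    flaw: str         # e.g. "cherry-picking", "false causality", or "none"
    explanation: str  # short natural-language justification


def generate_aspects(claim: str, call_llm: Callable[[str], str]) -> List[str]:
    """Stage 1 (aspect generation): ask the model which aspects of the claim to scrutinise."""
    prompt = f"List the key aspects to examine when fact-checking this claim: {claim!r}. One per line."
    return [line.strip("- ").strip() for line in call_llm(prompt).splitlines() if line.strip()]


def identify_flaw(claim: str, aspect: str, call_llm: Callable[[str], str]) -> AspectFlaw:
    """Stage 2 (flaw identification): label the flaw (if any) affecting one aspect."""
    prompt = (
        f"Claim: {claim}\nAspect: {aspect}\n"
        "Name the flaw affecting this aspect (or 'none') and explain briefly, as 'flaw | explanation'."
    )
    flaw, _, explanation = call_llm(prompt).partition("|")
    return AspectFlaw(aspect=aspect, flaw=flaw.strip(), explanation=explanation.strip())


def refute_claim(claim: str, call_llm: Callable[[str], str]) -> List[AspectFlaw]:
    """Run both stages and return per-aspect flaw judgements that can back a refutation."""
    return [identify_flaw(claim, aspect, call_llm) for aspect in generate_aspects(claim, call_llm)]
```

Keeping the language model behind a plain callable keeps the two stages separate, so either one could be swapped for whatever classifier or generator RefuteClaim actually trains.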
Related papers
- FactLens: Benchmarking Fine-Grained Fact Verification [6.814173254027381]
We advocate for a shift toward fine-grained verification, where complex claims are broken down into smaller sub-claims for individual verification.
We introduce FactLens, a benchmark for evaluating fine-grained fact verification, with metrics and automated evaluators of sub-claim quality.
Our results show alignment between automated FactLens evaluators and human judgments, and we discuss the impact of sub-claim characteristics on the overall verification performance.
arXiv Detail & Related papers (2024-11-08T21:26:57Z)
- Augmenting the Veracity and Explanations of Complex Fact Checking via Iterative Self-Revision with LLMs [10.449165630417522]
We construct two complex fact-checking datasets for Chinese scenarios: CHEF-EG and TrendFact.
These datasets involve complex facts in areas such as health, politics, and society.
We propose a unified framework called FactISR to perform mutual feedback between veracity and explanations.
arXiv Detail & Related papers (2024-10-19T15:25:19Z)
- Contrastive Learning to Improve Retrieval for Real-world Fact Checking [84.57583869042791]
We present Contrastive Fact-Checking Reranker (CFR), an improved retriever for fact-checking complex claims.
We leverage the AVeriTeC dataset, which annotates subquestions for claims with human written answers from evidence documents.
We find a 6% improvement in veracity classification accuracy on the AVeriTeC dataset.
arXiv Detail & Related papers (2024-10-07T00:09:50Z)
- EX-FEVER: A Dataset for Multi-hop Explainable Fact Verification [22.785622371421876]
We present a pioneering dataset for multi-hop explainable fact verification.
The dataset contains over 60,000 claims involving 2-hop and 3-hop reasoning, each created by summarizing and modifying information from hyperlinked Wikipedia documents.
We demonstrate a novel baseline system on our EX-FEVER dataset, showcasing document retrieval, explanation generation, and claim verification.
arXiv Detail & Related papers (2023-10-15T06:46:15Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers (a minimal sketch of this decompose-and-aggregate pattern appears after the related-papers list).
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- The Case for Claim Difficulty Assessment in Automatic Fact Checking [18.230039157836888]
We argue that prediction of claim difficulty is a missing component of today's automated fact-checking architectures.
We describe how this difficulty prediction task might be split into a set of distinct subtasks.
arXiv Detail & Related papers (2021-09-20T16:59:50Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- A Review on Fact Extraction and Verification [19.373340472113703]
We study the fact checking problem, which aims to identify the veracity of a given claim.
We focus on the task of Fact Extraction and VERification (FEVER) and its accompanied dataset.
This task is essential and can be the building block of applications such as fake news detection and medical claim verification.
arXiv Detail & Related papers (2020-10-06T20:05:43Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating the explanations that justify verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that jointly optimising the explanation generation and veracity prediction objectives, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
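Several of the papers above (FactLens, ClaimDecomp) share a decompose-and-aggregate pattern: break a complex claim into sub-claims or yes-no subquestions, answer each against evidence, and combine the answers into a verdict. The sketch below is a hypothetical illustration of that pattern; the prompts, the all-subquestions-must-hold aggregation rule, and the injected `call_llm` / `answer_subquestion` hooks are assumptions, not the datasets' actual tooling.

```python
from typing import Callable, Dict, List


def decompose_claim(claim: str, call_llm: Callable[[str], str]) -> List[str]:
    """Ask the model for yes-no subquestions whose answers bear on the claim's veracity."""
    prompt = f"Write the yes-no subquestions needed to verify this claim: {claim!r}. One per line."
    return [q.strip("- ").strip() for q in call_llm(prompt).splitlines() if q.strip()]


def aggregate(answers: Dict[str, bool]) -> str:
    """Toy aggregation rule: SUPPORTED only if every subquestion checks out."""
    if not answers:
        return "NOT ENOUGH INFO"
    return "SUPPORTED" if all(answers.values()) else "REFUTED"


def verify(claim: str,
           call_llm: Callable[[str], str],
           answer_subquestion: Callable[[str], bool]) -> str:
    """Decompose, answer each subquestion (e.g. with retrieval plus a QA model), then aggregate."""
    subquestions = decompose_claim(claim, call_llm)
    answers = {q: answer_subquestion(q) for q in subquestions}
    return aggregate(answers)
```

In practice the conjunction rule is only a stand-in: the papers above note that subquestion answers influence, rather than strictly determine, the final verdict, so a learned aggregator would replace `aggregate`.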