Automated Fact-Checking: A Survey
- URL: http://arxiv.org/abs/2109.11427v1
- Date: Thu, 23 Sep 2021 15:13:48 GMT
- Title: Automated Fact-Checking: A Survey
- Authors: Xia Zeng, Amani S. Abumansour, Arkaitz Zubiaga
- Abstract summary: Researchers in the field of Natural Language Processing (NLP) have contributed to the task by building fact-checking datasets.
This paper reviews relevant research on automated fact-checking covering both the claim detection and claim validation components.
- Score: 5.729426778193398
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As online false information continues to grow, automated fact-checking has
gained an increasing amount of attention in recent years. Researchers in the
field of Natural Language Processing (NLP) have contributed to the task by
building fact-checking datasets, devising automated fact-checking pipelines and
proposing NLP methods to further research in the development of different
components. This paper reviews relevant research on automated fact-checking
covering both the claim detection and claim validation components.
Related papers
- Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches [2.0140898354987353]
Automated Fact-Checking (AFC) is the automated verification of claim accuracy.
AFC is crucial in discerning truth from misinformation, especially given the huge amount of content generated online daily.
Current research focuses on predicting claim veracity through metadata analysis and language scrutiny.
arXiv Detail & Related papers (2024-07-09T01:54:13Z) - FactFinders at CheckThat! 2024: Refining Check-worthy Statement Detection with LLMs through Data Pruning [43.82613670331329]
This study investigates the application of open-source language models to identify check-worthy statements from political transcriptions.
We propose a two-step data pruning approach to automatically identify high-quality training data instances for effective learning.
Our team ranked first in the check-worthiness estimation task in the English language.
arXiv Detail & Related papers (2024-06-26T12:31:31Z) - How We Refute Claims: Automatic Fact-Checking through Flaw Identification and Explanation [4.376598435975689]
This paper explores the novel task of flaw-oriented fact-checking, including aspect generation and flaw identification.
We also introduce RefuteClaim, a new framework designed specifically for this task.
Given the absence of an existing dataset, we present FlawCheck, a dataset created by extracting and transforming insights from expert reviews into relevant aspects and identified flaws.
arXiv Detail & Related papers (2024-01-27T06:06:16Z) - Fact-Checking Complex Claims with Program-Guided Reasoning [99.7212240712869]
Program-Guided Fact-Checking (ProgramFC) is a novel fact-checking model that decomposes complex claims into simpler sub-tasks.
We first leverage the in-context learning ability of large language models to generate reasoning programs.
We execute the program by delegating each sub-task to the corresponding sub-task handler.
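The decompose-then-delegate pipeline described above can be sketched as follows. This is a hypothetical illustration only: the reasoning program here is hand-written, whereas ProgramFC generates it with a large language model via in-context learning, and the toy `knowledge` lookup stands in for real evidence retrieval and verification. All names and data are assumptions, not from the paper.

```python
# Hypothetical sketch of a ProgramFC-style pipeline: a complex claim is
# decomposed into simpler sub-tasks (a "reasoning program"), and each
# sub-task is delegated to a corresponding handler.

from typing import Callable, Dict, List, Tuple

# Each program step is a (handler_name, argument) pair.
Program = List[Tuple[str, str]]

def verify_fact(sub_claim: str) -> bool:
    # Placeholder fact handler: a real system would retrieve evidence
    # and run a verification model; here a toy lookup table suffices.
    knowledge = {
        "Paris is the capital of France": True,
        "The Seine flows through Paris": True,
    }
    return knowledge.get(sub_claim, False)

HANDLERS: Dict[str, Callable[[str], bool]] = {"Verify": verify_fact}

def execute_program(program: Program) -> bool:
    # The complex claim holds only if every sub-task succeeds.
    return all(HANDLERS[name](arg) for name, arg in program)

# A complex claim decomposed into two simpler verification sub-tasks.
program: Program = [
    ("Verify", "Paris is the capital of France"),
    ("Verify", "The Seine flows through Paris"),
]
print(execute_program(program))  # True
```

The design point is the separation of concerns: program generation (planning) is decoupled from program execution, so individual sub-task handlers can be swapped without changing the overall control flow.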
arXiv Detail & Related papers (2023-05-22T06:11:15Z) - Autonomation, not Automation: Activities and Needs of Fact-checkers as a Basis for Designing Human-Centered AI Systems [1.7925621668797338]
We conducted in-depth interviews with Central European fact-checkers.
Our contributions include an in-depth examination of the variability of fact-checking work in non-English speaking regions.
Thanks to the interdisciplinary collaboration, we extend the fact-checking process in AI research by three additional stages.
arXiv Detail & Related papers (2022-11-22T10:18:09Z) - CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking [55.75590135151682]
CHEF is the first CHinese Evidence-based Fact-checking dataset of 10K real-world claims.
The dataset covers multiple domains, ranging from politics to public health, and provides annotated evidence retrieved from the Internet.
arXiv Detail & Related papers (2022-06-06T09:11:03Z) - Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z) - FacTeR-Check: Semi-automated fact-checking through Semantic Similarity and Natural Language Inference [61.068947982746224]
FacTeR-Check enables retrieval of fact-checked information, verification of unchecked claims, and tracking of dangerous information over social media.
The architecture is validated using a new dataset called NLI19-SP that is publicly released with COVID-19 related hoaxes and tweets from Spanish social media.
Our results show state-of-the-art performance on the individual benchmarks, as well as producing useful analysis of the evolution over time of 61 different hoaxes.
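A retrieval-plus-inference pipeline of the kind FacTeR-Check describes can be sketched as below. This is a minimal, assumed illustration: token-overlap (Jaccard) similarity stands in for the sentence-embedding similarity a real system would use, the fact-check database is invented, and the final NLI entailment step is replaced by directly reusing the matched verdict.

```python
# Minimal sketch of a semantic-similarity fact-checking pipeline:
# retrieve the most similar previously fact-checked claim, then label
# the new claim based on the match (a real system would run an NLI
# model between the claim and the matched fact-check).

def similarity(a: str, b: str) -> float:
    # Jaccard token overlap as a crude stand-in for embedding similarity.
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

# Illustrative database of previously verified claims and their verdicts.
FACT_DB = {
    "drinking bleach cures covid-19": "FALSE",
    "vaccines reduce severe covid-19 illness": "TRUE",
}

def check_claim(claim: str, threshold: float = 0.5) -> str:
    # Retrieval step: find the closest fact-checked claim.
    best, score = max(
        ((fc, similarity(claim, fc)) for fc in FACT_DB),
        key=lambda pair: pair[1],
    )
    if score < threshold:
        return "UNVERIFIED"  # no sufficiently similar fact-check found
    return FACT_DB[best]

print(check_claim("drinking bleach cures covid-19"))  # FALSE
```

The threshold keeps the system semi-automated: claims without a close enough match are flagged as unverified rather than force-labeled, leaving them for human fact-checkers.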
arXiv Detail & Related papers (2021-10-27T15:44:54Z) - A Survey on Automated Fact-Checking [18.255327608480165]
We survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines.
We present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
arXiv Detail & Related papers (2021-08-26T16:34:51Z) - FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z) - Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating explanations that justify a verdict.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.