Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
- URL: http://arxiv.org/abs/2502.09083v1
- Date: Thu, 13 Feb 2025 08:56:25 GMT
- Title: Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking
- Authors: Greta Warren, Irina Shklovski, Isabelle Augenstein
- Abstract summary: Large language models and generative AI in online media have amplified the need for effective automated fact-checking.
However, it is unclear how the explanations these systems produce should align with the decision-making and reasoning processes of fact-checkers.
- Abstract: The pervasiveness of large language models and generative AI in online media has amplified the need for effective automated fact-checking to assist fact-checkers in tackling the increasing volume and sophistication of misinformation. The complex nature of fact-checking demands that automated fact-checking systems provide explanations that enable fact-checkers to scrutinise their outputs. However, it is unclear how these explanations should align with the decision-making and reasoning processes of fact-checkers to be effectively integrated into their workflows. Through semi-structured interviews with fact-checking professionals, we bridge this gap by: (i) providing an account of how fact-checkers assess evidence, make decisions, and explain their processes; (ii) examining how fact-checkers use automated tools in practice; and (iii) identifying fact-checker explanation requirements for automated fact-checking tools. The findings show unmet explanation needs and identify important criteria for replicable fact-checking explanations that trace the model's reasoning path, reference specific evidence, and highlight uncertainty and information gaps.
Related papers
- The Perils & Promises of Fact-checking with Large Language Models
Large Language Models (LLMs) are increasingly trusted to write academic papers, lawsuits, and news articles.
We evaluate the use of LLM agents in fact-checking by having them phrase queries, retrieve contextual data, and make decisions.
Our results show the enhanced prowess of LLMs when equipped with contextual information.
While LLMs show promise in fact-checking, caution is essential due to inconsistent accuracy.
arXiv Detail & Related papers (2023-10-20T14:49:47Z)
- Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Synthetic Disinformation Attacks on Automated Fact Verification Systems
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
- Automated Fact-Checking: A Survey
Researchers in the field of Natural Language Processing (NLP) have contributed to the task by building fact-checking datasets.
This paper reviews relevant research on automated fact-checking covering both the claim detection and claim validation components.
arXiv Detail & Related papers (2021-09-23T15:13:48Z)
- A Survey on Automated Fact-Checking
We survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines.
We present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
arXiv Detail & Related papers (2021-08-26T16:34:51Z)
- FaVIQ: FAct Verification from Information-seeking Questions
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- Explainable Automated Fact-Checking: A Survey
We focus on explanation functionality -- that is, fact-checking systems that provide reasons for their predictions.
We summarize existing methods for explaining the predictions of fact-checking systems and explore trends in this topic.
We propose further research directions for generating fact-checking explanations, and describe how these may lead to improvements in the research area.
arXiv Detail & Related papers (2020-11-07T23:56:02Z)
- Generating Fact Checking Explanations
A crucial piece of the puzzle that is still missing is how to automate the most elaborate part of the process: generating justifications for verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising the two objectives jointly, rather than training them separately, improves the performance of the fact-checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.