Explainable Automated Fact-Checking: A Survey
- URL: http://arxiv.org/abs/2011.03870v1
- Date: Sat, 7 Nov 2020 23:56:02 GMT
- Title: Explainable Automated Fact-Checking: A Survey
- Authors: Neema Kotonya and Francesca Toni
- Abstract summary: We focus on the explanation functionality -- that is, fact-checking systems providing reasons for their predictions.
We summarize existing methods for explaining the predictions of fact-checking systems and explore trends in this topic.
We propose further research directions for generating fact-checking explanations, and describe how these may lead to improvements in the research area.
- Score: 11.529816799331979
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A number of exciting advances have been made in automated fact-checking
thanks to increasingly larger datasets and more powerful systems, leading to
improvements in the complexity of claims which can be accurately fact-checked.
However, despite these advances, there are still desirable functionalities
missing from the fact-checking pipeline. In this survey, we focus on the
explanation functionality -- that is, fact-checking systems providing reasons
for their predictions. We summarize existing methods for explaining the
predictions of fact-checking systems and we explore trends in this topic.
Further, we consider what makes for good explanations in this specific domain
through a comparative analysis of existing fact-checking explanations against
some desirable properties. Finally, we propose further research directions for
generating fact-checking explanations, and describe how these may lead to
improvements in the research area.
Related papers
- Show Me the Work: Fact-Checkers' Requirements for Explainable Automated Fact-Checking [43.300457630671154]
Large language models and generative AI in online media have amplified the need for effective automated fact-checking.
It is unclear how the explanations produced by automated fact-checking systems should align with the decision-making and reasoning processes of fact-checkers.
arXiv Detail & Related papers (2025-02-13T08:56:25Z)
- Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches [2.0140898354987353]
Automated Fact-Checking (AFC) is the automated verification of claim accuracy.
AFC is crucial in discerning truth from misinformation, especially given the huge amount of content generated online daily.
Current research focuses on predicting claim veracity through metadata analysis and language scrutiny.
arXiv Detail & Related papers (2024-07-09T01:54:13Z)
- RU22Fact: Optimizing Evidence for Multilingual Explainable Fact-Checking on Russia-Ukraine Conflict [34.2739191920746]
High-quality evidence plays a vital role in enhancing fact-checking systems.
We propose a method based on a Large Language Model to automatically retrieve and summarize evidence from the Web.
We construct RU22Fact, a novel explainable fact-checking dataset on the Russia-Ukraine conflict in 2022 of 16K samples.
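As a rough illustration of such a pipeline, the sketch below retrieves snippets for a claim and compresses them with a language model; the search backend, prompt, and function names are assumptions for illustration, not the RU22Fact authors' implementation.

```python
# Hypothetical sketch of LLM-based evidence retrieval and summarisation;
# `search` and `llm` are stand-ins for a search API and any text model.
from typing import Callable, List

def gather_evidence(claim: str,
                    search: Callable[[str], List[str]],
                    llm: Callable[[str], str],
                    max_snippets: int = 5) -> str:
    """Retrieve web snippets for a claim and compress them into evidence."""
    snippets = search(claim)[:max_snippets]   # e.g. a search-engine client
    prompt = (
        "Summarise the snippets below into a short, self-contained evidence "
        f"paragraph for the claim: '{claim}'\n\n"
        + "\n".join(f"- {s}" for s in snippets)
    )
    return llm(prompt)                        # any text-generation backend
```

Any search client and text-generation backend can be plugged in for the two callables.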
arXiv Detail & Related papers (2024-03-25T11:56:29Z)
- Can LLMs Produce Faithful Explanations For Fact-checking? Towards Faithful Explainable Fact-Checking via Multi-Agent Debate [75.10515686215177]
Large Language Models (LLMs) excel in text generation, but their capability for producing faithful explanations in fact-checking remains underexamined.
We propose the Multi-Agent Debate Refinement (MADR) framework, leveraging multiple LLMs as agents with diverse roles.
MADR ensures that the final explanation undergoes rigorous validation, significantly reducing the likelihood of unfaithful elements and aligning closely with the provided evidence.
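The abstract suggests a loop in which critic agents check an explanation against the evidence and a drafter revises it; the toy sketch below captures that shape, with the roles, prompts, and stopping rule as assumptions rather than the paper's code.

```python
# Toy debate-and-refine loop in the spirit of MADR; the roles, prompts,
# and "OK" convention are illustrative assumptions, not the paper's code.
from typing import Callable, List

LLM = Callable[[str], str]  # any text-in/text-out model

def debate_refine(claim: str, evidence: str,
                  drafter: LLM, critics: List[LLM], rounds: int = 3) -> str:
    explanation = drafter(
        f"Claim: {claim}\nEvidence: {evidence}\nWrite a faithful explanation."
    )
    for _ in range(rounds):
        critiques = [
            critic(f"Evidence: {evidence}\nExplanation: {explanation}\n"
                   "List any unsupported statements, or reply OK.")
            for critic in critics
        ]
        if all(c.strip().upper() == "OK" for c in critiques):
            break  # every critic judges the explanation faithful
        explanation = drafter(  # revise against the critiques
            f"Revise so every statement follows from the evidence.\n"
            f"Evidence: {evidence}\nExplanation: {explanation}\n"
            f"Critiques: {' | '.join(critiques)}"
        )
    return explanation
```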
arXiv Detail & Related papers (2024-02-12T04:32:33Z)
- Explaining Recommendation System Using Counterfactual Textual Explanations [4.318555434063274]
It has been found that when end-users understand why a system produced a given output, they find it easier to trust that system.
One method for producing a more explainable output is using counterfactual reasoning.
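Counterfactual reasoning of this kind can be illustrated by searching for a minimal input change that flips the model's output; the greedy single-item search below is a generic sketch, not the paper's method.

```python
# Generic greedy search for a single-item counterfactual explanation:
# which past interaction, if removed, flips the recommendation?
from typing import Callable, List, Optional

def counterfactual_item(history: List[str],
                        recommend: Callable[[List[str]], str]) -> Optional[str]:
    target = recommend(history)
    for item in history:
        if recommend([h for h in history if h != item]) != target:
            return item  # "had you not interacted with this, the output changes"
    return None          # no single removal flips the recommendation
```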
arXiv Detail & Related papers (2023-03-14T06:45:28Z)
- The Unreliability of Explanations in Few-Shot In-Context Learning [50.77996380021221]
We focus on two NLP tasks that involve reasoning over text, namely question answering and natural language inference.
We show that explanations judged as good by humans--those that are logically consistent with the input--usually indicate more accurate predictions.
We present a framework for calibrating model predictions based on the reliability of the explanations.
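One simple way such calibration might work is to down-weight a prediction's confidence by a score for how well its explanation is grounded in the input; the token-overlap scorer below is a stand-in assumption for the entailment-style checks a real system would use.

```python
# Sketch: scale a prediction's confidence by an explanation-reliability score.
# The overlap scorer is a placeholder assumption; a real system would use an
# entailment model to judge consistency between explanation and input.
from dataclasses import dataclass

@dataclass
class Prediction:
    label: str
    confidence: float  # model's raw probability
    explanation: str

def explanation_reliability(explanation: str, input_text: str) -> float:
    exp_tokens = set(explanation.lower().split())
    inp_tokens = set(input_text.lower().split())
    return len(exp_tokens & inp_tokens) / max(len(exp_tokens), 1)

def calibrated_confidence(pred: Prediction, input_text: str) -> float:
    # Down-weight confidence when the explanation is weakly grounded.
    return pred.confidence * explanation_reliability(pred.explanation, input_text)
```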
arXiv Detail & Related papers (2022-05-06T17:57:58Z)
- Explainability in Process Outcome Prediction: Guidelines to Obtain Interpretable and Faithful Models [77.34726150561087]
In the field of process outcome prediction, we define explainability through the interpretability of the explanations and the faithfulness of the explainability model.
This paper contributes a set of guidelines named X-MOP that allow selecting the appropriate model based on the event log specifications.
arXiv Detail & Related papers (2022-03-30T05:59:50Z)
- DeSePtion: Dual Sequence Prediction and Adversarial Examples for Improved Fact-Checking [46.13738685855884]
We show that current fact-checking systems are vulnerable to three categories of realistic challenges.
We present a system designed to be resilient to these "attacks" using multiple pointer networks for document selection.
We find that in handling these attacks we obtain state-of-the-art results on FEVER, largely due to improved evidence retrieval.
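A pointer-style selection step can be sketched as attention over candidate-document encodings with already-chosen documents masked out; the code below is a generic pointer-network step under that assumption, not the DeSePtion architecture.

```python
# One generic pointer-network selection step: attend over candidate-document
# encodings and pick the best document not yet selected (a sketch, not the
# DeSePtion architecture).
from typing import Set
import torch
import torch.nn.functional as F

def pointer_step(query: torch.Tensor,   # (hidden,) current decoder state
                 docs: torch.Tensor,    # (n_docs, hidden) document encodings
                 selected: Set[int]) -> int:
    scores = docs @ query               # dot-product attention logits
    for i in selected:                  # mask out already-chosen documents
        scores[i] = float("-inf")
    probs = F.softmax(scores, dim=0)    # distribution over remaining documents
    return int(torch.argmax(probs))
```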
arXiv Detail & Related papers (2020-04-27T15:18:49Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is understanding how to automate the most elaborate part of the process: generating justifications for the verdicts reached on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives (veracity prediction and explanation generation) at the same time, rather than training them separately, improves the performance of a fact-checking system.
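Joint optimisation of this kind is typically written as one multi-task loss over a shared encoder; the heads, pooling, and weighting below are illustrative assumptions consistent with that description, not the paper's exact model.

```python
# Sketch of joint training: a shared encoder with a veracity head and an
# explanation-generation head, combined into a single weighted loss.
# The first-token pooling and 0.5 weighting are illustrative assumptions.
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    def __init__(self, encoder: nn.Module, hidden: int,
                 n_labels: int, vocab_size: int):
        super().__init__()
        self.encoder = encoder
        self.veracity_head = nn.Linear(hidden, n_labels)    # claim verdict
        self.explain_head = nn.Linear(hidden, vocab_size)   # per-token logits

    def forward(self, tokens: torch.Tensor):
        h = self.encoder(tokens)                 # (batch, seq, hidden)
        verdict = self.veracity_head(h[:, 0])    # pool on first position
        explanation = self.explain_head(h)       # token-level logits
        return verdict, explanation

def joint_loss(veracity_loss: torch.Tensor,
               explanation_loss: torch.Tensor, alpha: float = 0.5):
    # Optimise both objectives together rather than training them separately.
    return alpha * veracity_loss + (1 - alpha) * explanation_loss
```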
arXiv Detail & Related papers (2020-04-13T05:23:25Z)