Towards Explainable Fact Checking
- URL: http://arxiv.org/abs/2108.10274v1
- Date: Mon, 23 Aug 2021 16:22:50 GMT
- Title: Towards Explainable Fact Checking
- Authors: Isabelle Augenstein
- Abstract summary: This thesis presents my research on automatic fact checking.
It includes claim check-worthiness detection, stance detection and veracity prediction.
Its contributions go beyond fact checking, with the thesis proposing more general machine learning solutions for natural language processing in the area of learning with limited labelled data.
- Score: 22.91475787277623
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The past decade has seen a substantial rise in the amount of mis- and
disinformation online, from targeted disinformation campaigns to influence
politics, to the unintentional spreading of misinformation about public health.
This development has spurred research in the area of automatic fact checking,
from approaches for detecting check-worthy claims and determining the stance of
tweets towards claims, to methods for determining the veracity of claims given
evidence documents. These automatic methods are often content-based, using
natural language processing methods, which in turn utilise deep neural networks
to learn higher-order features from text in order to make predictions. As deep
neural networks are black-box models, their inner workings cannot be easily
explained. At the same time, it is desirable to explain how they arrive at
certain decisions, especially if they are to be used for decision making. While
this has been known for some time, the issues this raises have been exacerbated
for decision making to provide explanations, by models increasing in size, and
by EU legislation requiring models used for decision making to provide
explanations, and, very recently, by legislation
requiring online platforms operating in the EU to provide transparent reporting
on their services. Despite this, current solutions for explainability are still
lacking in the area of fact checking. This thesis presents my research on
automatic fact checking, including claim check-worthiness detection, stance
detection and veracity prediction. Its contributions go beyond fact checking,
with the thesis proposing more general machine learning solutions for natural
language processing in the area of learning with limited labelled data.
Finally, the thesis presents some first solutions for explainable fact
checking.
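To make the pipeline the abstract describes more concrete, here is a minimal, runnable sketch of its three stages: check-worthiness detection, stance detection over evidence documents, and veracity prediction. The heuristics, labels, and aggregation rule are placeholders for illustration, not the models developed in the thesis.

```python
# Illustration only: the three-stage pipeline outlined in the abstract, with
# trivial heuristics standing in for the trained models each stage would use.
from dataclasses import dataclass
from typing import List

@dataclass
class Verdict:
    claim: str
    check_worthy: bool
    stances: List[str]   # one stance per evidence document
    veracity: str        # e.g. "SUPPORTED", "REFUTED", "NOT ENOUGH INFO"

def check_worthiness(claim: str) -> bool:
    # Stage 1: decide whether the claim is worth fact checking at all.
    return any(ch.isdigit() for ch in claim) or " is " in claim

def stance(evidence: str, claim: str) -> str:
    # Stage 2: does this evidence document agree with or merely discuss the claim?
    key = claim.lower().rstrip(".")
    return "agree" if key in evidence.lower() else "discuss"

def veracity(stances: List[str]) -> str:
    # Stage 3: aggregate per-document stances into a final verdict.
    if stances.count("agree") > stances.count("disagree"):
        return "SUPPORTED"
    if stances.count("disagree") > stances.count("agree"):
        return "REFUTED"
    return "NOT ENOUGH INFO"

def fact_check(claim: str, evidence_docs: List[str]) -> Verdict:
    worthy = check_worthiness(claim)
    stances = [stance(doc, claim) for doc in evidence_docs] if worthy else []
    label = veracity(stances) if worthy else "NOT CHECKED"
    return Verdict(claim, worthy, stances, label)

print(fact_check("The Eiffel Tower is 330 metres tall.",
                 ["The Eiffel Tower is 330 metres tall including its antennas."]))
```

In practice each stage would be a trained neural classifier; the point here is only the decomposition of the task into check-worthiness, stance, and veracity.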
Related papers
- FactLLaMA: Optimizing Instruction-Following Language Models with
External Knowledge for Automated Fact-Checking [10.046323978189847]
We propose combining the power of instruction-following language models with external evidence retrieval to enhance fact-checking performance.
Our approach involves leveraging search engines to retrieve relevant evidence for a given input claim.
Then, we instruct-tune an open-source language model, LLaMA, using this evidence, enabling it to predict the veracity of the input claim more accurately.
arXiv Detail & Related papers (2023-09-01T04:14:39Z)
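A rough sketch of the retrieve-then-verify recipe the FactLLaMA entry above describes: fetch evidence for the claim with a search engine, build an instruction-style prompt around it, and let an instruction-tuned model emit a verdict. The helper functions and prompt wording below are assumptions for illustration, not the paper's code or APIs.

```python
# Illustration only: the two-step recipe described above, with hypothetical
# helpers standing in for a real search API and an instruction-tuned LLaMA.
from typing import Callable, List

def build_prompt(claim: str, evidence: List[str]) -> str:
    evidence_block = "\n".join(f"- {e}" for e in evidence)
    return (
        "Instruction: Decide whether the claim is SUPPORTED, REFUTED, "
        "or NOT ENOUGH INFO given the evidence.\n"
        f"Evidence:\n{evidence_block}\n"
        f"Claim: {claim}\n"
        "Answer:"
    )

def verify(claim: str,
           search_evidence: Callable[[str], List[str]],   # e.g. a search-engine wrapper
           generate: Callable[[str], str]) -> str:        # e.g. an instruction-tuned LLM
    evidence = search_evidence(claim)          # step 1: retrieve external evidence
    prompt = build_prompt(claim, evidence)     # step 2: condition the model on it
    return generate(prompt).strip()            # step 3: read off the predicted label

# Toy usage with stubbed dependencies.
fake_search = lambda q: ["Official records list the tower at 330 m."]
fake_llm = lambda p: " SUPPORTED"
print(verify("The Eiffel Tower is 330 metres tall.", fake_search, fake_llm))
```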
- Mitigating Temporal Misalignment by Discarding Outdated Facts [58.620269228776294]
Large language models are often used under temporal misalignment, tasked with answering questions about the present.
We propose fact duration prediction: the task of predicting how long a given fact will remain true.
Our data and code are released publicly at https://github.com/mikejqzhang/mitigating_misalignment.
arXiv Detail & Related papers (2023-05-24T07:30:08Z)
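As a toy illustration of the fact duration prediction task proposed in the entry above (predicting how long a given fact remains true, so outdated facts can be discarded), the sketch below frames it as classification into coarse duration buckets. The buckets, keyword rules, and shelf-life thresholds are invented for illustration, not the paper's model or data.

```python
# Illustration only: framing fact duration prediction as classification over
# coarse duration buckets (the buckets and rules here are made up).
DURATION_BUCKETS = ["days", "months", "years", "decades", "effectively forever"]

def predict_duration(fact: str) -> str:
    f = fact.lower()
    if any(w in f for w in ["price", "weather", "ranked"]):
        return "days"
    if any(w in f for w in ["president", "champion", "ceo"]):
        return "years"
    if any(w in f for w in ["capital", "boiling point", "speed of light"]):
        return "effectively forever"
    return "months"

def is_stale(fact: str, age_in_years: float) -> bool:
    # Discard a fact once its age exceeds its predicted shelf life.
    shelf_life = {"days": 0.01, "months": 0.25, "years": 4.0,
                  "decades": 30.0, "effectively forever": float("inf")}
    return age_in_years > shelf_life[predict_duration(fact)]

print(is_stale("Joe Biden is the president of the United States.", 6.0))  # True
print(is_stale("Paris is the capital of France.", 6.0))                   # False
```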
- ExClaim: Explainable Neural Claim Verification Using Rationalization [8.369720566612111]
ExClaim attempts to provide an explainable claim verification system with foundational evidence.
Inspired by the legal system, ExClaim leverages rationalization to provide a verdict for the claim.
Statistical and Explainable AI (XAI) evaluations are conducted to ensure valid and trustworthy outcomes.
arXiv Detail & Related papers (2023-01-21T08:26:27Z)
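The rationalization idea in the ExClaim entry above can be pictured as a two-step extract-then-predict pattern: select the evidence sentences that serve as a rationale, then base the verdict only on that rationale so it can be traced back to specific evidence. The overlap scoring and verdict rule below are simplified stand-ins, not ExClaim's actual architecture.

```python
# Illustration only: pick a rationale (the most claim-relevant evidence
# sentences) first, then derive the verdict from the rationale alone so the
# decision is traceable to specific evidence. Scoring is a toy overlap rule.
from typing import List, Tuple

def select_rationale(claim: str, sentences: List[str], k: int = 1) -> List[str]:
    claim_words = set(claim.lower().split())
    ranked = sorted(sentences,
                    key=lambda s: len(claim_words & set(s.lower().split())),
                    reverse=True)
    return ranked[:k]

def verdict_from_rationale(rationale: List[str]) -> str:
    if not rationale:
        return "NOT ENOUGH INFO"
    text = " ".join(rationale).lower()
    return "REFUTED" if " not " in f" {text} " or "false" in text else "SUPPORTED"

def verify_with_rationale(claim: str, sentences: List[str]) -> Tuple[str, List[str]]:
    rationale = select_rationale(claim, sentences)
    return verdict_from_rationale(rationale), rationale

label, why = verify_with_rationale(
    "The vaccine was approved in 2021.",
    ["Regulators approved the vaccine in 2021.", "The weather was mild that year."])
print(label, why)   # the verdict comes with the sentence that justifies it
```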
- CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking [55.75590135151682]
CHEF is the first CHinese Evidence-based Fact-checking dataset of 10K real-world claims.
The dataset covers multiple domains, ranging from politics to public health, and provides annotated evidence retrieved from the Internet.
arXiv Detail & Related papers (2022-06-06T09:11:03Z)
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
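The robustness evaluation described in the synthetic disinformation entry above can be summarised as: score a fact-checker on clean evidence, plant machine-generated passages that contradict the claims, and measure the accuracy drop. The verifier, attack generator, and data below are toy placeholders, not the paper's systems.

```python
# Illustration only: compare a fact-checker's accuracy on clean evidence with
# its accuracy after synthetic, claim-contradicting passages are planted.
from typing import Callable, List, Tuple

Example = Tuple[str, List[str], str]   # (claim, evidence documents, gold label)

def accuracy(verifier: Callable[[str, List[str]], str],
             data: List[Example]) -> float:
    correct = sum(verifier(claim, ev) == gold for claim, ev, gold in data)
    return correct / len(data)

def poison(example: Example, fabricate: Callable[[str], str]) -> Example:
    claim, evidence, gold = example
    # Plant one machine-generated passage that contradicts the claim.
    return claim, evidence + [fabricate(claim)], gold

def robustness_gap(verifier, data, fabricate) -> float:
    clean = accuracy(verifier, data)
    attacked = accuracy(verifier, [poison(x, fabricate) for x in data])
    return clean - attacked   # how much performance the attack removes

# Toy usage with stubbed components.
naive_verifier = lambda claim, ev: "REFUTED" if any("false" in d for d in ev) else "SUPPORTED"
fabricator = lambda claim: f"Fact check: the statement '{claim}' is false."
data = [("Water boils at 100 C at sea level.", ["Water boils at 100 C."], "SUPPORTED")]
print(robustness_gap(naive_verifier, data, fabricator))   # 1.0: the attack flips the verdict
```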
- A Survey on Automated Fact-Checking [18.255327608480165]
We survey automated fact-checking stemming from natural language processing, and discuss its connections to related tasks and disciplines.
We present an overview of existing datasets and models, aiming to unify the various definitions given and identify common concepts.
arXiv Detail & Related papers (2021-08-26T16:34:51Z)
- FaVIQ: FAct Verification from Information-seeking Questions [77.7067957445298]
We construct a large-scale fact verification dataset called FaVIQ using information-seeking questions posed by real users.
Our claims are verified to be natural, contain little lexical bias, and require a complete understanding of the evidence for verification.
arXiv Detail & Related papers (2021-07-05T17:31:44Z)
- The Role of Context in Detecting Previously Fact-Checked Claims [27.076320857009655]
We focus on claims made in a political debate, where context really matters.
We study the impact of modeling the context of the claim both on the source side, as well as on the target side, in the fact-checking explanation document.
arXiv Detail & Related papers (2021-04-15T12:39:37Z)
- Machine Learning Explanations to Prevent Overtrust in Fake News Detection [64.46876057393703]
This research investigates the effects of an Explainable AI assistant embedded in news review platforms for combating the propagation of fake news.
We design a news reviewing and sharing interface, create a dataset of news stories, and train four interpretable fake news detection algorithms.
For a deeper understanding of Explainable AI systems, we discuss interactions between user engagement, mental model, trust, and performance measures in the process of explaining.
arXiv Detail & Related papers (2020-07-24T05:42:29Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for verdicts on claims.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system.
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
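The last entry above reports that optimising veracity prediction and explanation generation jointly beats training them separately. One minimal way to express that is a shared encoder with two heads trained on a weighted sum of the two losses, sketched below in PyTorch with invented sizes and an arbitrary 0.5/0.5 weighting; this is not the paper's actual model.

```python
# Illustration only: one shared encoder, two heads, one combined loss, so the
# veracity and explanation objectives are optimised together. Sizes, data,
# and the 0.5/0.5 weighting are arbitrary toy choices.
import torch
import torch.nn as nn

VOCAB, HIDDEN, LABELS = 1000, 64, 3    # toy vocabulary, hidden size, verdict classes

class JointFactChecker(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, HIDDEN)
        self.encoder = nn.GRU(HIDDEN, HIDDEN, batch_first=True)
        self.veracity_head = nn.Linear(HIDDEN, LABELS)   # verdict classifier
        self.explain_head = nn.Linear(HIDDEN, VOCAB)     # explanation token logits

    def forward(self, tokens):
        states, last = self.encoder(self.embed(tokens))
        return self.veracity_head(last.squeeze(0)), self.explain_head(states)

model = JointFactChecker()
ce = nn.CrossEntropyLoss()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# One toy joint training step on random data.
tokens = torch.randint(0, VOCAB, (4, 12))        # batch of claim+evidence sequences
gold_labels = torch.randint(0, LABELS, (4,))     # gold veracity labels
gold_expl = torch.randint(0, VOCAB, (4, 12))     # gold explanation tokens (toy alignment)

veracity_logits, explain_logits = model(tokens)
loss = 0.5 * ce(veracity_logits, gold_labels) \
     + 0.5 * ce(explain_logits.reshape(-1, VOCAB), gold_expl.reshape(-1))
loss.backward()
opt.step()
print(float(loss))
```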