The Role of Context in Detecting Previously Fact-Checked Claims
- URL: http://arxiv.org/abs/2104.07423v1
- Date: Thu, 15 Apr 2021 12:39:37 GMT
- Title: The Role of Context in Detecting Previously Fact-Checked Claims
- Authors: Shaden Shaar, Firoj Alam, Giovanni Da San Martino, Preslav Nakov
- Abstract summary: We focus on claims made in a political debate, where context really matters.
We study the impact of modeling the context of the claim both on the source side and on the target side, i.e., in the fact-checking explanation document.
- Score: 27.076320857009655
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent years have seen the proliferation of disinformation and misinformation
online, thanks to the freedom of expression on the Internet and to the rise of
social media. Two solutions have been proposed to address the problem: (i) manual
fact-checking, which is accurate and credible, but slow and non-scalable, and
(ii) automatic fact-checking, which is fast and scalable, but lacks
explainability and credibility. With the accumulation of enough manually
fact-checked claims, a middle-ground approach has emerged: checking whether a
given claim has previously been fact-checked. This can be done automatically,
and thus fast, while also offering credibility and explainability, thanks to
the human fact-checking and explanations in the associated fact-checking
article. This is a relatively new and understudied research direction, and here
we focus on claims made in a political debate, where context really matters.
Thus, we study the impact of modeling the context of the claim, both on the
source side, i.e., in the debate, and on the target side, i.e., in the
fact-checking explanation document. We do this by modeling the local context
and the global context, by means of co-reference resolution, and by
reasoning over the target text using Transformer-XH. The experimental results
show that each of these represents a valuable information source, but that
modeling the source-side context is more important, and can yield 10+ points of
absolute improvement.
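To make the core task concrete, here is a minimal retrieval sketch in Python using the sentence-transformers library. The model name, the toy claim database, and the debate snippets are illustrative assumptions, not the paper's setup; the paper's co-reference resolution and Transformer-XH reasoning over the target side are omitted.

```python
# Sketch: rank previously fact-checked claims against an input claim,
# with and without source-side (debate) context.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")  # illustrative model choice

# Toy database of previously fact-checked (verified) claims.
verified_claims = [
    "The border wall has already been built.",
    "Unemployment fell to a 50-year low in 2019.",
    "Millions of undocumented immigrants voted in 2016.",
]

def rank_verified_claims(claim, local_context=None, top_k=3):
    """Rank verified claims against an input claim, optionally prepending
    the neighbouring debate sentences (source-side local context)."""
    query = claim if local_context is None else f"{local_context} {claim}"
    query_emb = model.encode(query, convert_to_tensor=True)
    db_emb = model.encode(verified_claims, convert_to_tensor=True)
    scores = util.cos_sim(query_emb, db_emb)[0]
    ranking = sorted(zip(verified_claims, scores.tolist()),
                     key=lambda pair: pair[1], reverse=True)
    return ranking[:top_k]

# Without context the pronoun "it" is ambiguous; adding the preceding
# debate sentence gives the retriever a chance to match the right claim.
print(rank_verified_claims("It is already finished."))
print(rank_verified_claims("It is already finished.",
                           local_context="Let us talk about the border wall."))
```

In the paper's setting, the database entries come from fact-checking articles, and the source-side context comes from the debate transcript itself.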
Related papers
- Automated Justification Production for Claim Veracity in Fact Checking: A Survey on Architectures and Approaches [2.0140898354987353]
Automated Fact-Checking (AFC) is the automated verification of claim accuracy.
AFC is crucial in discerning truth from misinformation, especially given the huge amounts of content generated online daily.
Current research focuses on predicting claim veracity through metadata analysis and language scrutiny.
arXiv Detail & Related papers (2024-07-09T01:54:13Z)
- Molecular Facts: Desiderata for Decontextualization in LLM Fact Verification [56.39904484784127]
We argue that fully atomic facts are not the right representation, and define two criteria for molecular facts: decontextuality, or how well they can stand alone, and minimality, or how little extra information they add.
We present a baseline methodology for generating molecular facts automatically, aiming to add the right amount of information.
arXiv Detail & Related papers (2024-06-28T17:43:48Z)
- Support or Refute: Analyzing the Stance of Evidence to Detect Out-of-Context Mis- and Disinformation [13.134162427636356]
Mis- and disinformation online have become a major societal problem.
One common form of mis- and disinformation is out-of-context (OOC) information.
We propose a stance extraction network (SEN) that can extract the stances of different pieces of multi-modal evidence.
arXiv Detail & Related papers (2023-11-03T08:05:54Z) - Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for
Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers (a toy aggregation sketch follows this entry).
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
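Following up on the ClaimDecomp entry above: a toy sketch of how yes-no subquestion answers might be aggregated into a coarse verdict. The subquestions, answers, and majority-style rule below are invented for illustration; the paper derives veracity from annotated decompositions and retrieved evidence, not from a fixed rule.

```python
from typing import Dict

def aggregate_verdict(subquestion_answers: Dict[str, bool]) -> str:
    """Map yes/no subquestion answers to a coarse verdict.

    All-yes suggests the claim is supported, all-no that it is refuted,
    and anything in between that it is only partly true.
    """
    yes_count = sum(subquestion_answers.values())
    if yes_count == len(subquestion_answers):
        return "supported"
    if yes_count == 0:
        return "refuted"
    return "mixed"

# Hypothetical decomposition of a complex claim into yes-no subquestions.
answers = {
    "Did the bill pass in 2019?": True,
    "Did it lower premiums for most enrollees?": False,
}
print(aggregate_verdict(answers))  # -> "mixed"
```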
- Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z)
- DialFact: A Benchmark for Fact-Checking in Dialogue [56.63709206232572]
We construct DialFact, a benchmark dataset of 22,245 annotated conversational claims, paired with pieces of evidence from Wikipedia.
We find that existing fact-checking models trained on non-dialogue data like FEVER fail to perform well on our task.
We propose a simple yet data-efficient solution to effectively improve fact-checking performance in dialogue.
arXiv Detail & Related papers (2021-10-15T17:34:35Z)
- Towards Explainable Fact Checking [22.91475787277623]
This thesis presents my research on automatic fact checking.
It includes claim check-worthiness detection, stance detection and veracity prediction.
Its contributions go beyond fact checking, with the thesis proposing more general machine learning solutions.
arXiv Detail & Related papers (2021-08-23T16:22:50Z)
- Generating Fact Checking Explanations [52.879658637466605]
A crucial piece of the puzzle that is still missing is to understand how to automate the most elaborate part of the process: generating justifications for the verdicts.
This paper provides the first study of how these explanations can be generated automatically based on available claim context.
Our results indicate that optimising both objectives at the same time, rather than training them separately, improves the performance of a fact checking system (a minimal joint-loss sketch follows this entry).
arXiv Detail & Related papers (2020-04-13T05:23:25Z)
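As a rough illustration of the joint-objective idea in the last entry, the sketch below sums a veracity-classification loss and an explanation-generation loss into one training objective. The stand-in encoder, heads, and dummy tensors are assumptions; only the combined loss reflects the paper's point.

```python
import torch
import torch.nn as nn

class JointFactChecker(nn.Module):
    """Shared encoder with two heads: veracity label and explanation tokens."""
    def __init__(self, hidden=768, num_labels=3, vocab_size=30522):
        super().__init__()
        self.encoder = nn.Linear(hidden, hidden)               # stand-in for a real encoder
        self.veracity_head = nn.Linear(hidden, num_labels)
        self.explanation_head = nn.Linear(hidden, vocab_size)  # stand-in for a decoder

    def forward(self, x):
        h = torch.relu(self.encoder(x))
        return self.veracity_head(h), self.explanation_head(h)

model = JointFactChecker()
criterion = nn.CrossEntropyLoss()

x = torch.randn(4, 768)                           # dummy claim+context encodings
veracity_gold = torch.randint(0, 3, (4,))         # dummy veracity labels
explanation_gold = torch.randint(0, 30522, (4,))  # dummy explanation tokens

veracity_logits, explanation_logits = model(x)
# Joint objective: optimise both tasks at once instead of separately.
loss = criterion(veracity_logits, veracity_gold) + \
       criterion(explanation_logits, explanation_gold)
loss.backward()
```

Sharing one encoder across both heads is what lets the two objectives interact during training, which is the effect the paper measures.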