Can online attention signals help fact-checkers fact-check?
- URL: http://arxiv.org/abs/2109.09322v2
- Date: Sat, 7 May 2022 08:55:21 GMT
- Title: Can online attention signals help fact-checkers fact-check?
- Authors: Manoel Horta Ribeiro, Savvas Zannettou, Oana Goga, Fabrício Benevenuto, Robert West
- Abstract summary: This paper proposes a framework to study fact-checking with online attention signals.
We use this framework to conduct a preliminary study of a dataset of 879 COVID-19-related fact-checks done in 2020 by 81 international organizations.
- Score: 11.65819327593072
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent research suggests that not all fact-checking efforts are equal: when
and what is fact-checked plays a pivotal role in effectively correcting
misconceptions. In that context, signals capturing how much attention specific topics receive on the Internet can be used to study (and possibly support) fact-checking efforts. This paper proposes a framework to study
fact-checking with online attention signals. The framework consists of: 1)
extracting claims from fact-checking efforts; 2) linking such claims with
knowledge graph entities; and 3) estimating the online attention these entities
receive. We use this framework to conduct a preliminary study of a dataset of
879 COVID-19-related fact-checks done in 2020 by 81 international
organizations. Our findings suggest that there is often a disconnect between
online attention and fact-checking efforts. For example, in around 40% of
countries that fact-checked ten or more claims, half or more of the
ten most popular claims were not fact-checked. Our analysis also shows that
claims are first fact-checked after receiving, on average, 35% of the total
online attention they would eventually receive in 2020. Yet, there is considerable variation among claims: some were fact-checked before receiving a surge of misinformation-induced online attention; others were fact-checked much
later. Overall, our work suggests that the incorporation of online attention
signals may help organizations assess their fact-checking efforts and choose
what and when to fact-check claims or stories. Also, in the context of
international collaboration, where claims are fact-checked multiple times
across different countries, online attention could help organizations keep
track of which claims are "migrating" between countries.
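
To make the three-step framework concrete, below is a minimal Python sketch. It assumes a toy gazetteer in place of a real entity linker and uses English Wikipedia pageviews, fetched from the public Wikimedia REST API, as the online-attention signal; both choices are illustrative assumptions, not necessarily what the paper itself uses.

```python
import requests

# Illustrative pipeline mirroring the paper's three steps. Assumptions:
# a naive gazetteer stands in for a real entity linker, and English
# Wikipedia pageviews stand in for the paper's attention signal.

WIKIMEDIA_API = (
    "https://wikimedia.org/api/rest_v1/metrics/pageviews/per-article/"
    "en.wikipedia/all-access/all-agents/{article}/daily/{start}/{end}"
)

def extract_claim(fact_check: dict) -> str:
    """Step 1: pull the claim text out of a fact-check record."""
    return fact_check["claim"]

def link_entities(claim: str, gazetteer: dict) -> list:
    """Step 2: map claim mentions to knowledge-graph entities
    (here: Wikipedia article titles via a toy surface-form lookup)."""
    return [article for mention, article in gazetteer.items()
            if mention.lower() in claim.lower()]

def daily_attention(article: str, start: str, end: str) -> list:
    """Step 3: fetch the entity's daily Wikipedia pageviews."""
    url = WIKIMEDIA_API.format(article=article, start=start, end=end)
    resp = requests.get(url, headers={"User-Agent": "attention-demo/0.1"})
    resp.raise_for_status()
    return [item["views"] for item in resp.json()["items"]]

if __name__ == "__main__":
    fact_check = {"claim": "5G towers spread the coronavirus."}
    gazetteer = {"5G": "5G", "coronavirus": "Coronavirus"}
    for entity in link_entities(extract_claim(fact_check), gazetteer):
        views = daily_attention(entity, "20200101", "20201231")
        print(entity, sum(views), "pageviews in 2020")
```

A production system would swap the gazetteer for a proper entity linker and would likely aggregate attention per country, as the abstract's per-country findings require.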
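The abstract's 35% statistic (the share of a claim's eventual 2020 attention received before its first fact-check) can be computed directly from such a daily series. A minimal sketch, with hypothetical variable names:

```python
from datetime import date

def attention_before_check(daily_views: dict, first_check: date) -> float:
    """Fraction of a claim's total attention received strictly before
    its first fact-check: 0.0 means it was checked before any attention,
    1.0 means it was checked only after all attention had passed."""
    total = sum(daily_views.values())
    if total == 0:
        return 0.0
    before = sum(v for d, v in daily_views.items() if d < first_check)
    return before / total

# Toy example: a claim surging in April, first checked on April 3.
series = {date(2020, 4, 1): 100, date(2020, 4, 2): 300,
          date(2020, 4, 3): 900, date(2020, 4, 4): 200}
print(attention_before_check(series, date(2020, 4, 3)))  # ~0.27
```

Averaging this quantity over all claims yields the 35% figure reported above.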
Related papers
- Lost in Translation -- Multilingual Misinformation and its Evolution [52.07628580627591]
This paper investigates the prevalence and dynamics of multilingual misinformation through an analysis of over 250,000 unique fact-checks spanning 95 languages.
We find that while the majority of misinformation claims are only fact-checked once, 11.7%, corresponding to more than 21,000 claims, are checked multiple times.
Using fact-checks as a proxy for the spread of misinformation, we find that 33% of repeated claims cross linguistic boundaries.
arXiv Detail & Related papers (2023-10-27T12:21:55Z) - Missing Counter-Evidence Renders NLP Fact-Checking Unrealistic for Misinformation [67.69725605939315]
Misinformation emerges in times of uncertainty when credible information is limited.
This is challenging for NLP-based fact-checking as it relies on counter-evidence, which may not yet be available.
arXiv Detail & Related papers (2022-10-25T09:40:48Z) - CHEF: A Pilot Chinese Dataset for Evidence-Based Fact-Checking [55.75590135151682]
CHEF is the first CHinese Evidence-based Fact-checking dataset of 10K real-world claims.
The dataset covers multiple domains, ranging from politics to public health, and provides annotated evidence retrieved from the Internet.
arXiv Detail & Related papers (2022-06-06T09:11:03Z) - Synthetic Disinformation Attacks on Automated Fact Verification Systems [53.011635547834025]
We explore the sensitivity of automated fact-checkers to synthetic adversarial evidence in two simulated settings.
We show that these systems suffer significant performance drops against these attacks.
We discuss the growing threat of modern NLG systems as generators of disinformation.
arXiv Detail & Related papers (2022-02-18T19:01:01Z) - Towards Explainable Fact Checking [22.91475787277623]
This thesis presents my research on automatic fact checking.
It includes claim check-worthiness detection, stance detection and veracity prediction.
Its contributions go beyond fact checking, with the thesis proposing more general machine learning solutions.
arXiv Detail & Related papers (2021-08-23T16:22:50Z) - The False COVID-19 Narratives That Keep Being Debunked: A Spatiotemporal Analysis [6.315106341032209]
This study examines the database of the CoronaVirusFacts Alliance, which contains 10,381 debunks related to COVID-19.
We find that similar or nearly duplicate false COVID-19 narratives have been spreading in multiple modalities and on various social media platforms in different countries.
We propose the idea of including a multilingual debunk search tool in the fact-checking pipeline.
arXiv Detail & Related papers (2021-07-26T16:20:44Z) - The Role of Context in Detecting Previously Fact-Checked Claims [27.076320857009655]
We focus on claims made in a political debate, where context really matters.
We study the impact of modeling the context of the claim both on the source side, as well as on the target side, in the fact-checking explanation document.
arXiv Detail & Related papers (2021-04-15T12:39:37Z) - Automated Fact-Checking for Assisting Human Fact-Checkers [29.53133956640405]
The misuse of media to spread inaccurate or misleading claims has led to the modern incarnation of the fact-checker.
We survey the available intelligent technologies that can support the human expert in the different steps of her fact-checking endeavor.
In each case, we pay attention to the challenges in future work and the potential impact on real-world fact-checking.
arXiv Detail & Related papers (2021-03-13T18:29:14Z) - Generating Fact Checking Briefs [97.82546239639964]
We investigate how to increase the accuracy and efficiency of fact checking by providing information about the claim before performing the check.
We develop QABriefer, a model that generates a set of questions conditioned on the claim, searches the web for evidence, and generates answers.
We show that fact checking with briefs -- in particular QABriefs -- increases the accuracy of crowdworkers by 10% while slightly decreasing the time taken.
arXiv Detail & Related papers (2020-11-10T23:02:47Z) - That is a Known Lie: Detecting Previously Fact-Checked Claims [34.30218503006579]
A large body of fact-checked claims has accumulated over the years.
Politicians like to repeat their favorite statements, true or false, over and over again.
It is therefore important to leverage this prior effort and to avoid wasting time on claims that have already been fact-checked.
arXiv Detail & Related papers (2020-05-12T21:25:37Z) - Claim Check-Worthiness Detection as Positive Unlabelled Learning [53.24606510691877]
Claim check-worthiness detection is a critical component of fact checking systems.
We illuminate a central challenge underlying all variants of this task: claims that were not selected for fact-checking are not necessarily not check-worthy, which makes the available training data positive unlabelled.
Our best performing method is a unified approach which automatically corrects for this using a variant of positive unlabelled learning.
arXiv Detail & Related papers (2020-03-05T16:06:07Z)
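
As a concrete illustration of the positive-unlabelled framing in the last entry above, here is a minimal sketch of the classic Elkan-Noto correction; it is a generic PU recipe under stated assumptions, not the specific variant that paper proposes. The idea: train a classifier to separate labelled (known check-worthy) from unlabelled claims, estimate the labelling frequency c = P(labelled | check-worthy) on held-out known positives, and rescale scores by 1/c.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Generic Elkan & Noto (2008) positive-unlabelled correction.
# s[i] = 1 if claim i was labelled check-worthy, 0 if merely
# unlabelled -- which does NOT mean it is not check-worthy.

def pu_fit(X, s, X_holdout_positives):
    """Fit labelled-vs-unlabelled, then estimate c = P(s=1 | y=1)
    as the mean score on held-out known positives."""
    clf = LogisticRegression(max_iter=1000).fit(X, s)
    c = clf.predict_proba(X_holdout_positives)[:, 1].mean()
    return clf, c

def pu_predict_proba(clf, c, X):
    """Corrected check-worthiness: P(y=1 | x) = P(s=1 | x) / c."""
    return np.clip(clf.predict_proba(X)[:, 1] / c, 0.0, 1.0)

# Toy usage with synthetic features, purely for illustration.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X[:, 0] > 0).astype(int)                  # hidden true check-worthiness
s = (y & (rng.random(200) < 0.3)).astype(int)  # only ~30% of positives labelled
clf, c = pu_fit(X, s, X[(y == 1) & (s == 1)])
scores = pu_predict_proba(clf, c, X)
```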
This list is automatically generated from the titles and abstracts of the papers on this site.