The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who
- URL: http://arxiv.org/abs/2304.14238v2
- Date: Wed, 8 Nov 2023 12:10:49 GMT
- Title: The Intended Uses of Automated Fact-Checking Artefacts: Why, How and Who
- Authors: Michael Schlichtkrull, Nedjma Ousidhoum, Andreas Vlachos
- Abstract summary: Automated fact-checking is often presented as an epistemic tool that fact-checkers, social media consumers, and other stakeholders can use to fight misinformation.
Few papers, however, thoroughly discuss how; we document this by analysing 100 highly-cited papers and annotating elements related to intended use.
We argue that this vagueness actively hinders the technology from reaching its goals, as it encourages overclaiming, limits criticism, and prevents stakeholder feedback.
- Score: 12.55428670523982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Automated fact-checking is often presented as an epistemic tool that
fact-checkers, social media consumers, and other stakeholders can use to fight
misinformation. Nevertheless, few papers thoroughly discuss how. We document
this by analysing 100 highly-cited papers, and annotating epistemic elements
related to intended use, i.e., means, ends, and stakeholders. We find that
narratives leaving out some of these aspects are common, that many papers
propose inconsistent means and ends, and that the feasibility of suggested
strategies rarely has empirical backing. We argue that this vagueness actively
hinders the technology from reaching its goals, as it encourages overclaiming,
limits criticism, and prevents stakeholder feedback. Accordingly, we provide
several recommendations for thinking and writing about the use of fact-checking
artefacts.
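To make the paper's annotation scheme concrete, here is a minimal sketch of how an intended-use annotation could be represented in code; the field names, example values, and the `is_vague` rule are illustrative assumptions, not the authors' released codebook.

```python
from dataclasses import dataclass

@dataclass
class IntendedUseAnnotation:
    """One annotated paper: which stakeholders the artefact is for,
    by what means it is to be used, and towards what end. Field names
    are illustrative, not the authors' released schema."""
    paper_id: str
    stakeholders: list[str]  # e.g. ["fact-checkers", "social media users"]
    means: list[str]         # e.g. ["claim detection", "evidence retrieval"]
    ends: list[str]          # e.g. ["curb the spread of misinformation"]

    def is_vague(self) -> bool:
        # A narrative is vague, in the paper's sense, when it omits at
        # least one of the three elements of intended use.
        return not (self.stakeholders and self.means and self.ends)

# A paper that names a means but neither stakeholders nor ends:
ann = IntendedUseAnnotation("arXiv:0000.00000", [], ["claim verification"], [])
print(ann.is_vague())  # True
```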
Related papers
- ClaimVer: Explainable Claim-Level Verification and Evidence Attribution of Text Through Knowledge Graphs [13.608282497568108]
ClaimVer is a human-centric framework tailored to meet users' informational and verification needs.
It highlights each claim, verifies it against a trusted knowledge graph, and provides succinct, clear explanations for each claim prediction.
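The abstract specifies only that each claim is checked against a trusted knowledge graph; a toy sketch of that core step, with a hypothetical triple representation that is not ClaimVer's actual interface:

```python
# Toy knowledge graph of trusted (subject, relation, object) triples.
TRUSTED_KG = {
    ("Eiffel Tower", "located_in", "Paris"),
    ("Paris", "capital_of", "France"),
}

def verify_claim(triple, kg):
    """Return a label and a short explanation for one claim triple."""
    subj, rel, obj = triple
    if triple in kg:
        return "SUPPORTED", f"KG contains ({subj}, {rel}, {obj})."
    # A contradicting triple: same subject and relation, different object.
    for s, r, o in kg:
        if (s, r) == (subj, rel) and o != obj:
            return "REFUTED", f"KG states ({s}, {r}, {o}), not {obj}."
    return "NOT ENOUGH INFO", "No matching or conflicting triple found."

print(verify_claim(("Eiffel Tower", "located_in", "Rome"), TRUSTED_KG))
# ('REFUTED', 'KG states (Eiffel Tower, located_in, Paris), not Rome.')
```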
arXiv Detail & Related papers (2024-03-12T17:07:53Z)
- CausalCite: A Causal Formulation of Paper Citations [80.82622421055734]
CausalCite is a new way to measure the significance of a paper by assessing the causal impact of the paper on its follow-up papers.
It is based on a novel causal inference method, TextMatch, which adapts the traditional matching framework to high-dimensional text embeddings.
We demonstrate the effectiveness of CausalCite on various criteria, such as high correlation with paper impact as reported by scientific experts.
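The abstract gives only the outline of TextMatch (matching in a text-embedding space); a minimal nearest-neighbour reading of that idea, with made-up embeddings and outcomes standing in for real papers:

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def matched_effect(treated_emb, treated_outcome, pool_embs, pool_outcomes, k=2):
    """Estimate a treated paper's effect as its outcome minus the mean
    outcome of its k most textually similar control papers."""
    sims = [cosine(treated_emb, e) for e in pool_embs]
    nearest = np.argsort(sims)[-k:]  # indices of the k closest controls
    counterfactual = float(np.mean([pool_outcomes[i] for i in nearest]))
    return treated_outcome - counterfactual

rng = np.random.default_rng(0)
pool = [rng.normal(size=8) for _ in range(20)]     # control embeddings
outcomes = rng.poisson(10, size=20).astype(float)  # e.g. citation counts
print(matched_effect(rng.normal(size=8), 25.0, pool, outcomes))
```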
arXiv Detail & Related papers (2023-11-05T23:09:39Z)
- No more Reviewer #2: Subverting Automatic Paper-Reviewer Assignment using Adversarial Learning [25.70062566419791]
We show that automated paper-reviewer assignment can be manipulated using adversarial learning.
We propose an attack that adapts a given paper so that it misleads the assignment and selects its own reviewers.
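The abstract does not detail the attack's mechanics, so the following toy models one plausible reading: a feature-space attack on a similarity-based matcher that greedily injects target-reviewer vocabulary until the target reviewer ranks first. The profiles and scoring function are invented for illustration.

```python
from collections import Counter

def similarity(paper_words, reviewer_profile):
    """Bag-of-words overlap as a stand-in for the real matching model."""
    counts = Counter(paper_words)
    return sum(counts[w] for w in reviewer_profile)

def nudge_paper(paper_words, target_profile, other_profiles, budget=10):
    """Append target-specific words until the target reviewer wins
    the assignment, or the word budget runs out."""
    words = list(paper_words)
    injectable = [w for w in target_profile
                  if all(w not in p for p in other_profiles)]
    for w in (injectable * budget)[:budget]:
        if similarity(words, target_profile) > max(
                similarity(words, p) for p in other_profiles):
            break  # the target reviewer now ranks first
        words.append(w)
    return words

target = {"adversarial", "learning", "robustness"}
others = [{"fact", "checking", "misinformation"}]
print(nudge_paper(["fact", "checking", "claims"], target, others))
```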
arXiv Detail & Related papers (2023-03-25T11:34:27Z)
- Tradeoffs in Preventing Manipulation in Paper Bidding for Reviewer Assignment [89.38213318211731]
Despite the benefits of using bids, reliance on paper bidding can allow malicious reviewers to manipulate the paper assignment for unethical purposes.
Several different approaches to preventing this manipulation have been proposed and deployed.
In this paper, we enumerate certain desirable properties that algorithms for addressing bid manipulation should satisfy.
arXiv Detail & Related papers (2022-07-22T19:58:17Z)
- A Dataset on Malicious Paper Bidding in Peer Review [84.68308372858755]
Malicious reviewers strategically bid in order to unethically manipulate the paper assignment.
A critical impediment to creating and evaluating methods that mitigate this issue is the lack of publicly available data on malicious paper bidding.
We release a novel dataset, collected from a mock conference activity where participants were instructed to bid either honestly or maliciously.
arXiv Detail & Related papers (2022-06-24T20:23:33Z)
- Generating Literal and Implied Subquestions to Fact-check Complex Claims [64.81832149826035]
We focus on decomposing a complex claim into a comprehensive set of yes-no subquestions whose answers influence the veracity of the claim.
We present ClaimDecomp, a dataset of decompositions for over 1000 claims.
We show that these subquestions can help identify relevant evidence to fact-check the full claim and derive the veracity through their answers.
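A small sketch of deriving a verdict from yes-no subquestion answers, in the spirit of ClaimDecomp; the example claim and the all-or-nothing aggregation rule are assumptions, not the paper's method.

```python
claim = "The new policy cut unemployment while lowering taxes."
subquestions = {
    "Did unemployment fall after the policy?": "yes",
    "Did the policy lower taxes?": "no",
}

def aggregate(answers, expected="yes"):
    """Call the claim supported only if every subquestion comes out
    as expected; refuted only if none does."""
    if all(a == expected for a in answers.values()):
        return "SUPPORTED"
    if all(a != expected for a in answers.values()):
        return "REFUTED"
    return "PARTIALLY SUPPORTED"

print(aggregate(subquestions))  # PARTIALLY SUPPORTED
```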
arXiv Detail & Related papers (2022-05-14T00:40:57Z)
- Technological Factors Influencing Videoconferencing and Zoom Fatigue [60.34717956708476]
The paper presents a conceptual, multidimensional approach to understanding the technological factors that are assumed to, or in some cases have been shown to, contribute to Zoom Fatigue (ZF) or, more generally, Videoconferencing Fatigue (VCF).
The paper is motivated by the observation that the media outlets that initially started the debate on what Zoom fatigue is and how it can be avoided, as well as some of the scientific papers addressing the topic, rest on assumptions that are largely hypothetical and insufficiently underpinned by scientific evidence.
arXiv Detail & Related papers (2022-02-03T18:02:59Z)
- Dynamics of Cross-Platform Attention to Retracted Papers [25.179837269945015]
Papers that are eventually retracted circulate widely on social media, digital news and other websites before their official retraction.
We quantify the amount and type of attention 3,851 retracted papers received over time in different online platforms.
arXiv Detail & Related papers (2021-10-15T01:40:20Z)
- Making Paper Reviewing Robust to Bid Manipulation Attacks [44.34601846490532]
Anecdotal evidence suggests that some reviewers bid on papers by "friends" or colluding authors.
We develop a novel approach for paper bidding and assignment that is much more robust against such attacks.
In addition to being more robust, the quality of our paper review assignments is comparable to that of current, non-robust assignment approaches.
arXiv Detail & Related papers (2021-02-09T21:24:16Z)
- Detecting Cross-Modal Inconsistency to Defend Against Neural Fake News [57.9843300852526]
We introduce the more realistic and challenging task of defending against machine-generated news that also includes images and captions.
To identify the possible weaknesses that adversaries can exploit, we create a NeuralNews dataset composed of 4 different types of generated articles.
In addition to the valuable insights gleaned from our user study experiments, we provide a relatively effective approach based on detecting visual-semantic inconsistencies.
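A hedged sketch of visual-semantic inconsistency detection: embed the article image and its caption into a shared space (e.g. with a CLIP-style encoder, stubbed here with random vectors) and flag the pair when cosine similarity falls below a threshold. The encoders and threshold are illustrative assumptions, not the paper's model.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def looks_inconsistent(image_emb, caption_emb, threshold=0.25):
    """Flag an image-caption pair whose embeddings are far apart."""
    return cosine(image_emb, caption_emb) < threshold

rng = np.random.default_rng(1)
img, cap = rng.normal(size=512), rng.normal(size=512)  # stand-in embeddings
print(looks_inconsistent(img, cap))  # random vectors are near-orthogonal
```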
arXiv Detail & Related papers (2020-09-16T14:13:15Z)
- Topic Detection and Summarization of User Reviews [6.779855791259679]
We propose an effective new summarization method that analyzes both reviews and summaries.
A new dataset comprising reviews and summaries for 1,028 products is collected from Amazon and CNET.
arXiv Detail & Related papers (2020-05-30T02:19:08Z)