Truth with a Twist: The Rhetoric of Persuasion in Professional vs. Community-Authored Fact-Checks
- URL: http://arxiv.org/abs/2601.14105v2
- Date: Tue, 27 Jan 2026 10:46:38 GMT
- Title: Truth with a Twist: The Rhetoric of Persuasion in Professional vs. Community-Authored Fact-Checks
- Authors: Olesya Razuvayevskaya, Kalina Bontcheva
- Abstract summary: We quantify the prevalence and types of persuasion techniques across fact-checking ecosystems. We find no evidence that community-produced debunks rely more heavily on subjective or persuasive wording. Crowd raters are effective at penalising the use of particular problematic rhetorical means.
- Score: 4.26112475135805
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This study presents the first large-scale comparison of persuasion techniques present in crowd- versus professionally-written debunks. Using extensive datasets from Community Notes (CNs), EUvsDisinfo, and the Database of Known Fakes (DBKF), we quantify the prevalence and types of persuasion techniques across these fact-checking ecosystems. Contrary to the prior hypothesis that community-produced debunks rely more heavily on subjective or persuasive wording, we find no evidence that CNs contain a higher average number of persuasion techniques than professional fact-checks. We additionally identify systematic rhetorical differences between CNs and professional debunking efforts, reflecting differences in institutional norms and topical coverage. Finally, we examine how the crowd evaluates persuasive language in CNs and show that, although notes with more persuasive elements receive slightly higher overall helpfulness ratings, crowd raters are effective at penalising the use of particular problematic rhetorical means.
Related papers
- LLM-Based Adversarial Persuasion Attacks on Fact-Checking Systems [9.795192821776462]
We introduce a novel class of persuasive adversarial attacks on automated fact-checking systems. We study the effects of persuasion on both claim verification and evidence retrieval using a decoupled evaluation strategy. Our analysis identifies persuasion techniques as a potent class of adversarial attacks, highlighting the need for more robust AFC systems.
arXiv Detail & Related papers (2026-01-23T16:57:16Z)
- The Table of Media Bias Elements: A sentence-level taxonomy of media bias types and propaganda techniques [0.5524804393257919]
We aim to shift the focus from where an outlet allegedly stands to how partiality is expressed in individual sentences. We iteratively combine close reading, interdisciplinary theory, and pilot annotation to derive a fine-grained, sentence-level taxonomy of media bias and propaganda. The result is a two-tier schema comprising 38 elementary bias types, arranged in six functional families and visualised as a "table of media-bias elements".
arXiv Detail & Related papers (2026-01-08T20:18:55Z)
- Fine-grained Narrative Classification in Biased News Articles [10.412867371293629]
We propose a novel fine-grained narrative classification task for biased news articles. We also explore article-bias classification as a precursor task to narrative classification. We develop INDI-PROP, the first ideologically grounded fine-grained narrative dataset.
arXiv Detail & Related papers (2025-12-03T09:07:52Z)
- MMPersuade: A Dataset and Evaluation Framework for Multimodal Persuasion [73.99171322670772]
Large Vision-Language Models (LVLMs) are increasingly deployed in domains such as shopping, health, and news. MMPersuade provides a unified framework for systematically studying multimodal persuasion dynamics in LVLMs.
arXiv Detail & Related papers (2025-10-26T17:39:21Z)
- Persuasiveness and Bias in LLM: Investigating the Impact of Persuasiveness and Reinforcement of Bias in Language Models [0.0]
This work examines how persuasion and bias interact in Large Language Models (LLMs). LLMs now generate convincing, human-like text and are widely used in content creation, decision support, and user interactions. We test whether persona-based models can persuade with fact-based claims while also, unintentionally, promoting misinformation or biased narratives.
arXiv Detail & Related papers (2025-08-13T13:30:49Z)
- Disparities in Peer Review Tone and the Role of Reviewer Anonymity [0.0]
This study examines more than 80,000 reviews in two major journals. It uncovers how review tone, sentiment, and supportive language vary across author demographics.
arXiv Detail & Related papers (2025-07-19T20:19:21Z)
- Can Community Notes Replace Professional Fact-Checkers? [49.5332225129956]
Policy changes by Twitter/X and Meta signal a shift away from partnerships with fact-checking organisations. Our analysis reveals that community notes cite fact-checking sources up to five times more than previously reported. Our results show that successful community moderation relies on professional fact-checking and highlight how citizen and professional fact-checking are deeply intertwined.
arXiv Detail & Related papers (2025-02-19T22:26:39Z)
- GPTBIAS: A Comprehensive Framework for Evaluating Bias in Large Language Models [83.30078426829627]
Large language models (LLMs) have gained popularity and are being widely adopted by a large user community.
The existing evaluation methods have many constraints, and their results exhibit a limited degree of interpretability.
We propose a bias evaluation framework named GPTBIAS that leverages the high performance of LLMs to assess bias in models.
arXiv Detail & Related papers (2023-12-11T12:02:14Z)
- SCITAB: A Challenging Benchmark for Compositional Reasoning and Claim Verification on Scientific Tables [68.76415918462418]
We present SCITAB, a challenging evaluation dataset consisting of 1.2K expert-verified scientific claims.
Through extensive evaluations, we demonstrate that SCITAB poses a significant challenge to state-of-the-art models.
Our analysis uncovers several unique challenges posed by SCITAB, including table grounding, claim ambiguity, and compositional reasoning.
arXiv Detail & Related papers (2023-05-22T16:13:50Z)
- Evaluate Confidence Instead of Perplexity for Zero-shot Commonsense Reasoning [85.1541170468617]
This paper reconsiders the nature of commonsense reasoning and proposes a novel commonsense reasoning metric, Non-Replacement Confidence (NRC).
Our proposed novel method boosts zero-shot performance on two commonsense reasoning benchmark datasets and further seven commonsense question-answering datasets.
arXiv Detail & Related papers (2022-08-23T14:42:14Z)
- Persua: A Visual Interactive System to Enhance the Persuasiveness of Arguments in Online Discussion [52.49981085431061]
Enhancing people's ability to write persuasive arguments could contribute to the effectiveness and civility in online communication.
We derived four design goals for a tool that helps users improve the persuasiveness of arguments in online discussions.
Persua is an interactive visual system that provides example-based guidance on persuasive strategies to enhance the persuasiveness of arguments.
arXiv Detail & Related papers (2022-04-16T08:07:53Z)
- Uncovering Latent Biases in Text: Method and Application to Peer Review [38.726731935235584]
We introduce a novel framework to quantify bias in text caused by the visibility of subgroup membership indicators.
We apply our framework to quantify biases in the text of peer reviews from a reputed machine learning conference.
arXiv Detail & Related papers (2020-10-29T01:24:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.