Hyperactive Minority Alter the Stability of Community Notes
- URL: http://arxiv.org/abs/2602.08970v1
- Date: Mon, 09 Feb 2026 18:04:54 GMT
- Title: Hyperactive Minority Alter the Stability of Community Notes
- Authors: Jacopo Nudo, Eugenio Nerio Nemmi, Edoardo Loru, Alessandro Mei, Walter Quattrociocchi, Matteo Cinelli
- Abstract summary: We study the emergence and visibility of Community Notes on X. We show that contribution activity is highly concentrated. We replicate the notes' emergence process by integrating the open-source implementation of the Community Notes consensus algorithm.
- Score: 39.13508775153173
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As platforms increasingly scale down professional fact-checking, community-based alternatives are promoted as more transparent and democratic. The main substitute being proposed is community-based contextualization, most notably Community Notes on X, where users write annotations and collectively rate their helpfulness under a consensus-oriented algorithm. This shift raises a basic empirical question: to what extent do users' social dynamics affect the emergence of Community Notes? We address this question by characterizing participation and political behavior, using the full public release of notes and ratings (between 2021 and 2025). We show that contribution activity is highly concentrated: a small minority of users accounts for a disproportionate share of ratings. Crucially, these high-activity contributors are not neutral volunteers: they are selective in the content they engage with and substantially more politically polarized than the overall contributor population. We replicate the notes' emergence process by integrating the open-source implementation of the Community Notes consensus algorithm used in production. This enables us to conduct counterfactual simulations that modify the display status of notes by varying the pool of raters. Our results reveal that the system is structurally unstable: the emergence and visibility of notes often depend on the behavior of a few dozen highly active users, and even minor perturbations in their participation can lead to markedly different outcomes. In sum, rather than decentralizing epistemic authority, community-based fact-checking on X reconfigures it, concentrating substantial power in the hands of a small, polarized group of highly active contributors.
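The counterfactual setup the abstract describes, recomputing note display status after perturbing the pool of raters, can be illustrated with a toy model. This is a minimal sketch under assumed data shapes: `note_status`, the helpful-fraction threshold, and the rating dictionaries are hypothetical stand-ins, not the production Community Notes algorithm, which scores notes via matrix factorization over the rater-note matrix.

```python
# Toy counterfactual simulation of note visibility under rater removal.
# NOTE: hypothetical sketch only -- the production Community Notes scorer
# uses matrix factorization, not a simple helpful-fraction threshold.

def note_status(ratings, threshold=0.6, min_ratings=5):
    """Simplified consensus rule: a note is shown if it has enough
    ratings and a sufficient fraction of them mark it helpful."""
    if len(ratings) < min_ratings:
        return False
    helpful = sum(1 for r in ratings if r["helpful"])
    return helpful / len(ratings) >= threshold

def counterfactual_flips(ratings_by_note, activity, top_k):
    """Remove the top_k most active raters, recompute every note's
    display status, and count how many notes flip."""
    hyperactive = {rid for rid, _ in
                   sorted(activity.items(), key=lambda kv: -kv[1])[:top_k]}
    flips = 0
    for ratings in ratings_by_note.values():
        before = note_status(ratings)
        after = note_status([r for r in ratings
                             if r["rater"] not in hyperactive])
        if before != after:
            flips += 1
    return flips
```

Varying `top_k` probes the paper's instability claim: if removing only a few dozen of the most active raters flips the status of many notes, outcomes hinge on that hyperactive minority.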
Related papers
- The Benefit of Collective Intelligence in Community-Based Content Moderation is Limited by Overt Political Signalling [0.0]
We show that community-based content moderation systems can allow political biases to influence the development of notes and the rating processes. We conduct an online experiment in which participants jointly authored notes on political posts. We find that politically diverse teams perform better when evaluating Republican posts, while group composition does not affect perceived note quality for Democrat posts.
arXiv Detail & Related papers (2026-01-29T16:23:50Z)
- Reddit Deplatforming and Toxicity Dynamics on Generalist Voat Communities [73.88859384645264]
Deplatforming, the permanent banning of entire communities, is a primary tool for content moderation on mainstream platforms. We analyze four major Reddit ban waves (2015-2020) and their effects on generalist communities on Voat.
arXiv Detail & Related papers (2025-12-26T19:13:45Z)
- Community Notes are Vulnerable to Rater Bias and Manipulation [75.34858521118305]
We evaluate the Community Notes algorithm using simulated data that models realistic rater and note behaviors. We find that the algorithm suppresses a substantial fraction of genuinely helpful notes and is highly sensitive to rater biases.
arXiv Detail & Related papers (2025-11-04T14:39:34Z)
- Community Moderation and the New Epistemology of Fact Checking on Social Media [124.26693978503339]
Social media platforms have traditionally relied on independent fact-checking organizations to identify and flag misleading content. X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking. We examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
arXiv Detail & Related papers (2025-05-26T14:50:18Z)
- Fairness Mediator: Neutralize Stereotype Associations to Mitigate Bias in Large Language Models [66.5536396328527]
LLMs inadvertently absorb spurious correlations from training data, leading to stereotype associations between biased concepts and specific social groups. We propose Fairness Mediator (FairMed), a bias mitigation framework that neutralizes stereotype associations. Our framework comprises two main components: a stereotype association prober and an adversarial debiasing neutralizer.
arXiv Detail & Related papers (2025-04-10T14:23:06Z)
- One of Many: Assessing User-level Effects of Moderation Interventions on r/The_Donald [1.1041211464412573]
We evaluate the user-level effects of the sequence of moderation interventions that targeted r/The_Donald on Reddit.
We find that interventions with strong community-level effects also cause extreme and diversified user-level reactions.
Our results highlight that platform- and community-level effects are not always representative of the underlying behavior of individuals or smaller user groups.
arXiv Detail & Related papers (2022-09-19T07:46:18Z)
- This Must Be the Place: Predicting Engagement of Online Communities in a Large-scale Distributed Campaign [70.69387048368849]
We study the behavior of communities with millions of active members.
We develop a hybrid model, combining textual cues, community meta-data, and structural properties.
We demonstrate the applicability of our model through Reddit's r/place, a large-scale online experiment.
arXiv Detail & Related papers (2022-01-14T08:23:16Z)
- What Makes Online Communities 'Better'? Measuring Values, Consensus, and Conflict across Thousands of Subreddits [13.585903247791094]
We measure community values through the first large-scale survey of its kind, covering 2,769 Reddit users across 2,151 unique subreddits.
We show that community members disagree about how safe their communities are, and that longstanding communities place 30.1% more importance on trustworthiness than newer communities do.
These findings have important implications, including suggesting that care must be taken to protect vulnerable community members.
arXiv Detail & Related papers (2021-11-10T18:31:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.