The Benefit of Collective Intelligence in Community-Based Content Moderation is Limited by Overt Political Signalling
- URL: http://arxiv.org/abs/2601.22201v1
- Date: Thu, 29 Jan 2026 16:23:50 GMT
- Title: The Benefit of Collective Intelligence in Community-Based Content Moderation is Limited by Overt Political Signalling
- Authors: Gabriela Juncosa, Saeedeh Mohammadi, Margaret Samahita, Taha Yasseri
- Abstract summary: We show that community-based content moderation systems can allow political biases to influence the development of notes and the rating processes. We conduct an online experiment in which participants jointly author notes on political posts. We find that politically diverse teams perform better when evaluating Republican posts, while group composition does not affect perceived note quality for Democrat posts.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms face increasing scrutiny over the rapid spread of misinformation. In response, many have adopted community-based content moderation systems, including Community Notes (formerly Birdwatch) on X (formerly Twitter), Footnotes on TikTok, and Facebook's Community Notes initiative. However, research shows that the current design of these systems can allow political biases to influence both the development of notes and the rating processes, reducing their overall effectiveness. We hypothesize that enabling users to collaborate on writing notes, rather than relying solely on individually authored notes, can enhance their overall quality. To test this idea, we conducted an online experiment in which participants jointly authored notes on political posts. Our results show that teams produce notes that are rated as more helpful than individually written notes. We also find that politically diverse teams perform better when evaluating Republican posts, while group composition does not affect perceived note quality for Democrat posts. However, the advantage of collaboration diminishes when team members are aware of one another's political affiliations. Taken together, these findings underscore the complexity of community-based content moderation and highlight the importance of understanding group dynamics and political diversity when designing more effective moderation systems.
Related papers
- Hyperactive Minority Alter the Stability of Community Notes [39.13508775153173]
We study the emergence and visibility of Community Notes on X. We show that contribution activity is highly concentrated. We replicate the notes' emergence process by integrating the open-source implementation of the Community Notes consensus algorithm.
arXiv Detail & Related papers (2026-02-09T18:04:54Z)
- Constructing Political Coordinates: Aggregating Over the Opposition for Diverse News Recommendation [1.1787037402510556]
News recommender systems (NRSs) have been shown to be useful in minimizing political disengagement and information overload. However, NRSs often conflate user interest with the partisan bias of the articles in a user's reading history. Over extended interaction, this can result in the formation of filter bubbles and the polarization of user partisanship.
arXiv Detail & Related papers (2025-11-14T23:04:04Z)
- Latent Topic Synthesis: Leveraging LLMs for Electoral Ad Analysis [51.95395936342771]
We introduce an end-to-end framework for automatically generating an interpretable topic taxonomy from an unlabeled corpus. We apply this framework to a large corpus of Meta political ads from the month ahead of the 2024 U.S. Presidential election. Our approach uncovers latent discourse structures, synthesizes semantically rich topic labels, and annotates topics with moral framing dimensions.
arXiv Detail & Related papers (2025-10-16T20:30:20Z)
- AI Feedback Enhances Community-Based Content Moderation through Engagement with Counterarguments [0.0]
This study explores an AI-assisted hybrid moderation framework in which participants receive AI-generated feedback on their notes. The results show that incorporating feedback improves the quality of notes, with the most substantial gains resulting from argumentative feedback. The research contributes to ongoing discussions about AI's role in political content moderation.
arXiv Detail & Related papers (2025-07-10T18:52:50Z)
- Community Moderation and the New Epistemology of Fact Checking on Social Media [124.26693978503339]
Social media platforms have traditionally relied on independent fact-checking organizations to identify and flag misleading content. X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking. We examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
arXiv Detail & Related papers (2025-05-26T14:50:18Z)
- Can Community Notes Replace Professional Fact-Checkers? [49.5332225129956]
Policy changes by Twitter/X and Meta signal a shift away from partnerships with fact-checking organisations. Our analysis reveals that community notes cite fact-checking sources up to five times more than previously reported. Our results show that successful community moderation relies on professional fact-checking and highlight how citizen and professional fact-checking are deeply intertwined.
arXiv Detail & Related papers (2025-02-19T22:26:39Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation enforced by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab results in users' tendency to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Political audience diversity and news reliability in algorithmic ranking [54.23273310155137]
We propose using the political diversity of a website's audience as a quality signal.
Using news source reliability ratings from domain experts and web browsing data from a diverse sample of 6,890 U.S. citizens, we first show that websites with more extreme and less politically diverse audiences have lower journalistic standards.
arXiv Detail & Related papers (2020-07-16T02:13:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.