Community Moderation and the New Epistemology of Fact Checking on Social Media
- URL: http://arxiv.org/abs/2505.20067v1
- Date: Mon, 26 May 2025 14:50:18 GMT
- Title: Community Moderation and the New Epistemology of Fact Checking on Social Media
- Authors: Isabelle Augenstein, Michiel Bakker, Tanmoy Chakraborty, David Corney, Emilio Ferrara, Iryna Gurevych, Scott Hale, Eduard Hovy, Heng Ji, Irene Larraz, Filippo Menczer, Preslav Nakov, Paolo Papotti, Dhruv Sahnan, Greta Warren, Giovanni Zagni
- Abstract summary: Social media platforms have traditionally relied on independent fact-checking organizations to identify and flag misleading content. X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking. We examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
- Score: 124.26693978503339
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms have traditionally relied on internal moderation teams and partnerships with independent fact-checking organizations to identify and flag misleading content. Recently, however, platforms including X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking -- Community Notes. If effectively scaled and governed, such crowd-checking initiatives have the potential to combat misinformation with increased scale and speed, as successfully as community-driven efforts once did with spam. Nevertheless, general content moderation, especially for misinformation, is inherently more complex. Public perceptions of truth are often shaped by personal biases, political leanings, and cultural contexts, complicating consensus on what constitutes misleading content. This suggests that community efforts, while valuable, cannot replace the indispensable role of professional fact-checkers. Here we systematically examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
Related papers
- Dynamics of collective minds in online communities [1.747623282473278]
We show how collective minds in online news communities can be influenced by different editorial agenda-setting practices and aspects of community dynamics. We develop a computational model of collective minds, calibrated and validated with data from 400 million comments across five U.S. online news platforms and a large-scale survey.
arXiv Detail & Related papers (2025-04-10T22:22:40Z) - Can Community Notes Replace Professional Fact-Checkers? [49.5332225129956]
Policy changes by Twitter/X and Meta signal a shift away from partnerships with fact-checking organisations. Our analysis reveals that community notes cite fact-checking sources up to five times more than previously reported.
arXiv Detail & Related papers (2025-02-19T22:26:39Z) - MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - What Makes Online Communities 'Better'? Measuring Values, Consensus, and Conflict across Thousands of Subreddits [13.585903247791094]
We measure community values through the first large-scale survey of community values, including 2,769 reddit users in 2,151 unique subreddits.
We show that community members disagree about how safe their communities are, and that longstanding communities place 30.1% more importance on trustworthiness than newer communities do.
These findings have important implications, including suggesting that care must be taken to protect vulnerable community members.
arXiv Detail & Related papers (2021-11-10T18:31:22Z) - The Impact of Disinformation on a Controversial Debate on Social Media [1.299941371793082]
We study how pervasive disinformation is in the Italian debate around immigration on Twitter.
By characterising Twitter users with an *Untrustworthiness* score, we are able to see that such poor information-consumption habits are not equally distributed across users.
arXiv Detail & Related papers (2021-06-30T10:29:07Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency of users to engage with both types of content, showing a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - An Agenda for Disinformation Research [3.083055913556838]
Disinformation erodes trust in the socio-political institutions that are the fundamental fabric of democracy.
The distribution of false, misleading, or inaccurate information with the intent to deceive is an existential threat to the United States.
New tools and approaches must be developed to leverage these affordances to understand and address this growing challenge.
arXiv Detail & Related papers (2020-12-15T19:32:36Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward the understanding of coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z) - Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.