Personalized Content Moderation and Emergent Outcomes
- URL: http://arxiv.org/abs/2405.09640v1
- Date: Wed, 15 May 2024 18:07:36 GMT
- Title: Personalized Content Moderation and Emergent Outcomes
- Authors: Necdet Gurkan, Mohammed Almarzouq, Pon Rahul Murugaraj
- Abstract summary: Social media platforms have implemented automated content moderation tools to preserve community norms and mitigate online hate and harassment.
Recently, these platforms have started to offer Personalized Content Moderation (PCM), granting users control over moderation settings or aligning algorithms with individual user preferences.
Our study reveals that PCM leads to asymmetric information loss (AIL), potentially impeding the development of a shared understanding among users.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms have implemented automated content moderation tools to preserve community norms and mitigate online hate and harassment. Recently, these platforms have started to offer Personalized Content Moderation (PCM), granting users control over moderation settings or aligning algorithms with individual user preferences. While PCM addresses the limitations of the one-size-fits-all approach and enhances user experiences, it may also impact emergent outcomes on social media platforms. Our study reveals that PCM leads to asymmetric information loss (AIL), potentially impeding the development of a shared understanding among users, crucial for healthy community dynamics. We further demonstrate that PCM tools could foster the creation of echo chambers and filter bubbles, resulting in increased community polarization. Our research is the first to identify AIL as a consequence of PCM and to highlight its potential negative impacts on online communities.
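The core mechanism the abstract describes — users applying personal moderation thresholds and thereby ending up with divergent views of the same content pool — can be illustrated with a small toy simulation. This is a minimal sketch for intuition only, not the paper's actual model; the uniform toxicity scores, the threshold values, and the function names are all assumptions:

```python
import random

def simulate_pcm(num_posts=1000, thresholds=(0.2, 0.5, 0.8), seed=0):
    """Toy PCM model: each user hides posts whose toxicity score
    exceeds that user's personal moderation threshold."""
    rng = random.Random(seed)
    posts = [rng.random() for _ in range(num_posts)]  # toxicity in [0, 1)
    # Feed per user: indices of posts that survive their threshold
    feeds = {t: {i for i, tox in enumerate(posts) if tox <= t}
             for t in thresholds}
    return posts, feeds

def asymmetric_information_loss(feeds):
    """Fraction of circulating posts seen by some users but not all.
    0.0 means every user sees the same content (no AIL)."""
    views = list(feeds.values())
    union = set().union(*views)
    inter = set.intersection(*views)
    return (len(union) - len(inter)) / len(union) if union else 0.0

posts, feeds = simulate_pcm()
ail = asymmetric_information_loss(feeds)
```

Under identical thresholds the measure is zero, since all feeds coincide; the further apart users' personal settings are, the larger the share of content that is visible to only part of the community — the asymmetry the study associates with a weakened shared understanding.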
Related papers
- Multi-Platform Aggregated Dataset of Online Communities (MADOC) [64.45797970830233]
MADOC aggregates and standardizes data from Bluesky, Koo, Reddit, and Voat (2012-2024), containing 18.9 million posts, 236 million comments, and 23.1 million unique users.
The dataset enables comparative studies of toxic behavior evolution across platforms through standardized interaction records and sentiment analysis.
arXiv Detail & Related papers (2025-01-22T14:02:11Z)
- Characterizing the Fragmentation of the Social Media Ecosystem [39.58317527488534]
We use a dataset of 126M URLs posted by nearly 6M users on nine social media platforms.
We find a clear separation between mainstream and alt-tech platforms.
These findings outline the main dimensions defining the fragmentation and polarization of the social media ecosystem.
arXiv Detail & Related papers (2024-11-25T18:45:03Z)
- Persuasion with Large Language Models: a Survey [49.86930318312291]
Large Language Models (LLMs) have created new disruptive possibilities for persuasive communication.
In areas such as politics, marketing, public health, e-commerce, and charitable giving, such LLM systems have already achieved human-level or even superhuman persuasiveness.
Our survey suggests that the current and future potential of LLM-based persuasion poses profound ethical and societal risks.
arXiv Detail & Related papers (2024-11-11T10:05:52Z)
- Unveiling User Satisfaction and Creator Productivity Trade-Offs in Recommendation Platforms [68.51708490104687]
We show that a purely relevance-driven policy with low exploration strength boosts short-term user satisfaction but undermines the long-term richness of the content pool.
Our findings reveal a fundamental trade-off between immediate user satisfaction and overall content production on platforms.
arXiv Detail & Related papers (2024-10-31T07:19:22Z)
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Personal Moderation Configurations on Facebook: Exploring the Role of FoMO, Social Media Addiction, Norms, and Platform Trust [1.7223564681760166]
Fear of missing out (FoMO) and social media addiction make Facebook users more vulnerable to content-based harms.
Trust in Facebook's moderation systems also significantly affects users' engagement with personal moderation.
arXiv Detail & Related papers (2024-01-11T00:28:57Z)
- Content Moderation and the Formation of Online Communities: A Theoretical Framework [7.900694093691988]
We study the impact of content moderation policies in online communities.
We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities.
arXiv Detail & Related papers (2023-10-16T16:49:44Z)
- The Impact of Recommendation Systems on Opinion Dynamics: Microscopic versus Macroscopic Effects [1.4180331276028664]
We study the impact of recommendation systems on users, both from a microscopic (i.e., at the level of individual users) and a macroscopic perspective.
Our analysis reveals that shifts in the opinions of individual users do not always align with shifts in the opinion distribution of the population.
arXiv Detail & Related papers (2023-09-16T11:44:51Z)
- Understanding Online Migration Decisions Following the Banning of Radical Communities [0.2752817022620644]
We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, those relating to the behavior of users, are associated with the decision to post on the fringe platform.
arXiv Detail & Related papers (2022-12-09T10:43:15Z)
- Personality-Driven Social Multimedia Content Recommendation [68.46899477180837]
We investigate the impact of human personality traits on the content recommendation model by applying a novel personality-driven multi-view content recommender system.
Our experimental results and a real-world case study demonstrate not only PersiC's ability to perform efficient human personality-driven multi-view content recommendation, but also its capacity to yield actionable digital ad strategy recommendations.
arXiv Detail & Related papers (2022-07-25T14:37:18Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.