Personalized Content Moderation and Emergent Outcomes
- URL: http://arxiv.org/abs/2405.09640v1
- Date: Wed, 15 May 2024 18:07:36 GMT
- Title: Personalized Content Moderation and Emergent Outcomes
- Authors: Necdet Gurkan, Mohammed Almarzouq, Pon Rahul Murugaraj
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms have implemented automated content moderation tools to preserve community norms and mitigate online hate and harassment. Recently, these platforms have started to offer Personalized Content Moderation (PCM), granting users control over moderation settings or aligning algorithms with individual user preferences. While PCM addresses the limitations of the one-size-fits-all approach and enhances user experiences, it may also impact emergent outcomes on social media platforms. Our study reveals that PCM leads to asymmetric information loss (AIL), potentially impeding the development of a shared understanding among users, crucial for healthy community dynamics. We further demonstrate that PCM tools could foster the creation of echo chambers and filter bubbles, resulting in increased community polarization. Our research is the first to identify AIL as a consequence of PCM and to highlight its potential negative impacts on online communities.
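The asymmetric information loss (AIL) mechanism described in the abstract can be illustrated with a minimal agent-based sketch (hypothetical thresholds and scores, not the authors' actual model): each user applies a personal toxicity cutoff to the same message stream, so different users see different subsets, and the information common to all users shrinks as their settings diverge.

```python
import random

random.seed(42)

# Hypothetical sketch of asymmetric information loss (AIL) under
# personalized content moderation: each user hides messages whose
# "toxicity" exceeds a personal threshold, so users end up seeing
# different subsets of the same stream.
messages = [random.random() for _ in range(1000)]  # toxicity scores in [0, 1]

def visible(threshold):
    """Messages a user with the given moderation threshold still sees."""
    return {i for i, tox in enumerate(messages) if tox <= threshold}

def shared_fraction(thresholds):
    """Fraction of the stream visible to *every* user (common ground)."""
    common = set.intersection(*(visible(t) for t in thresholds))
    return len(common) / len(messages)

uniform = shared_fraction([0.7, 0.7, 0.7])   # one-size-fits-all moderation
personal = shared_fraction([0.3, 0.6, 0.9])  # personalized settings

# With uniform settings every user sees the same subset; with
# personalized settings the common ground is bounded by the
# strictest user's threshold, so shared information shrinks.
assert personal <= uniform
```

Under uniform moderation all users share one view; under personalized settings the common ground collapses to what the strictest user sees, which is the shared-understanding loss the paper associates with AIL.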
Related papers
- MisinfoEval: Generative AI in the Era of "Alternative Facts" [50.069577397751175]
We introduce a framework for generating and evaluating large language model (LLM) based misinformation interventions.
We present (1) an experiment with a simulated social media environment to measure effectiveness of misinformation interventions, and (2) a second experiment with personalized explanations tailored to the demographics and beliefs of users.
Our findings confirm that LLM-based interventions are highly effective at correcting user behavior.
arXiv Detail & Related papers (2024-10-13T18:16:50Z)
- Personal Moderation Configurations on Facebook: Exploring the Role of FoMO, Social Media Addiction, Norms, and Platform Trust [1.7223564681760166]
Fear of missing out (FoMO) and social media addiction make Facebook users more vulnerable to content-based harms.
Trust in Facebook's moderation systems also significantly affects users' engagement with personal moderation.
arXiv Detail & Related papers (2024-01-11T00:28:57Z)
- Content Moderation and the Formation of Online Communities: A Theoretical Framework [7.900694093691988]
We study the impact of content moderation policies in online communities.
We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities.
arXiv Detail & Related papers (2023-10-16T16:49:44Z)
- The Impact of Recommendation Systems on Opinion Dynamics: Microscopic versus Macroscopic Effects [1.4180331276028664]
We study the impact of recommendation systems on users, both from a microscopic (i.e., at the level of individual users) and a macroscopic perspective.
Our analysis reveals that shifts in the opinions of individual users do not always align with shifts in the opinion distribution of the population.
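The microscopic/macroscopic distinction this summary draws can be made concrete with a toy example (hypothetical dynamics, not the paper's model): if a recommender pulls each user toward the nearest of two poles, every individual opinion shifts, yet an aggregate statistic such as the population mean can remain unchanged.

```python
# Toy illustration (not the paper's model): a recommender pulls each
# opinion toward the nearest of two poles (0 and 1). Every individual
# shifts, yet the population mean is unchanged because the shifts
# cancel at the macroscopic level.
opinions = [0.1, 0.3, 0.4, 0.6, 0.7, 0.9]  # symmetric around 0.5

def recommend_step(x, rate=0.5):
    """Move opinion x a fraction `rate` of the way to its nearest pole."""
    pole = 0.0 if x < 0.5 else 1.0
    return x + rate * (pole - x)

shifted = [recommend_step(x) for x in opinions]

mean_before = sum(opinions) / len(opinions)
mean_after = sum(shifted) / len(shifted)

# Every individual moved (microscopic shift) ...
assert all(abs(a - b) > 0 for a, b in zip(opinions, shifted))
# ... but the population mean is the same (no macroscopic shift),
# even though the distribution has polarized.
assert abs(mean_after - mean_before) < 1e-9
```

The mean hides the change: the distribution has polarized toward the extremes even though its first moment is untouched, which is why individual-level and population-level shifts need not align.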
arXiv Detail & Related papers (2023-09-16T11:44:51Z)
- Understanding Online Migration Decisions Following the Banning of Radical Communities [0.2752817022620644]
We study how factors associated with the RECRO radicalization framework relate to users' migration decisions.
Our results show that individual-level factors, i.e., those relating to users' behavior, are associated with the decision to post on the fringe platform.
arXiv Detail & Related papers (2022-12-09T10:43:15Z)
- Personality-Driven Social Multimedia Content Recommendation [68.46899477180837]
We investigate the impact of human personality traits on the content recommendation model by applying a novel personality-driven multi-view content recommender system.
Our experimental results and real-world case study demonstrate not only PersiC's ability to perform efficient human personality-driven multi-view content recommendation, but also its capacity to produce actionable digital ad strategy recommendations.
arXiv Detail & Related papers (2022-07-25T14:37:18Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation policies, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Information Consumption and Social Response in a Segregated Environment: the Case of Gab [74.5095691235917]
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights into coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.