Content Moderation Justice and Fairness on Social Media: Comparisons
Across Different Contexts and Platforms
- URL: http://arxiv.org/abs/2403.06034v1
- Date: Sat, 9 Mar 2024 22:50:06 GMT
- Title: Content Moderation Justice and Fairness on Social Media: Comparisons
Across Different Contexts and Platforms
- Authors: Jie Cai, Aashka Patel, Azadeh Naderi, Donghee Yvette Wohn
- Abstract summary: We conduct an online experiment with 200 American social media users of Reddit and Twitter.
We find that retributive moderation delivers higher justice and fairness on commercially moderated platforms for illegal violations.
We discuss the opportunities for platform policymaking to improve moderation system design.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media users may perceive moderation decisions by the platform
differently, which can lead to frustration and dropout. This study investigates
users' perceived justice and fairness of online moderation decisions when they
are exposed to various illegal versus legal scenarios, retributive versus
restorative moderation strategies, and user-moderated versus commercially
moderated platforms. We conduct an online experiment on 200 American social
media users of Reddit and Twitter. Results show that retributive moderation
delivers higher justice and fairness for commercially moderated than for
user-moderated platforms in illegal violations; restorative moderation delivers
higher fairness for legal violations than illegal ones. We discuss the
opportunities for platform policymaking to improve moderation system design.
Related papers
- How Decentralization Affects User Agency on Social Platforms [0.0]
We investigate how decentralization might offer a promising alternative model to walled-garden platforms.
We describe the user-driven content moderation through blocks as an expression of agency on Bluesky, a decentralized social platform.
arXiv Detail & Related papers (2024-06-13T12:15:15Z) - Content Moderation on Social Media in the EU: Insights From the DSA
Transparency Database [0.0]
The Digital Services Act (DSA) requires large social media platforms in the EU to provide clear and specific information whenever they restrict access to certain content.
Statements of Reasons (SoRs) are collected in the DSA Transparency Database to ensure transparency and scrutiny of content moderation decisions.
We empirically analyze 156 million SoRs within an observation period of two months to provide an early look at content moderation decisions of social media platforms in the EU.
arXiv Detail & Related papers (2023-12-07T16:56:19Z) - Content Moderation and the Formation of Online Communities: A
Theoretical Framework [7.900694093691988]
We study the impact of content moderation policies in online communities.
We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities.
arXiv Detail & Related papers (2023-10-16T16:49:44Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - A User-Driven Framework for Regulating and Auditing Social Media [94.70018274127231]
We propose that algorithmic filtering should be regulated with respect to a flexible, user-driven baseline.
We require that the feeds a platform filters contain informational content "similar" to that of their respective baseline feeds.
We present an auditing procedure that checks whether a platform honors this requirement.
arXiv Detail & Related papers (2023-04-20T17:53:34Z) - Spillover of Antisocial Behavior from Fringe Platforms: The Unintended
Consequences of Community Banning [0.2752817022620644]
We show that participating in fringe communities on Reddit increases users' toxicity and involvement with subreddits similar to the banned community.
In short, we find evidence for a spillover of antisocial behavior from fringe platforms onto Reddit via co-participation.
arXiv Detail & Related papers (2022-09-20T15:48:27Z) - Having your Privacy Cake and Eating it Too: Platform-supported Auditing
of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between the types of harmful content that online platforms seek to curb and the research efforts devoted to automatically detecting such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - An Iterative Approach for Identifying Complaint Based Tweets in Social
Media Platforms [76.9570531352697]
We propose an iterative methodology that aims to identify complaint-based posts pertaining to the transport domain.
We perform comprehensive evaluations and release a novel dataset for research purposes.
arXiv Detail & Related papers (2020-01-24T22:23:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.