SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice
- URL: http://arxiv.org/abs/2206.14855v2
- Date: Thu, 27 Oct 2022 23:26:05 GMT
- Title: SoK: Content Moderation in Social Media, from Guidelines to Enforcement, and Research to Practice
- Authors: Mohit Singhal and Chen Ling and Pujan Paudel and Poojitha Thota and Nihal Kumarswamy and Gianluca Stringhini and Shirin Nilizadeh
- Abstract summary: We study the 14 most popular social media content moderation guidelines and practices in the US.
We identify the differences between the content moderation employed by mainstream social media platforms and that employed by fringe platforms.
We highlight why platforms should shift from a one-size-fits-all model to a more inclusive model.
- Score: 9.356143195807064
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: To counter online abuse and misinformation, social media platforms have been establishing content moderation guidelines and employing various moderation policies. The goal of this paper is to study these community guidelines and moderation practices, as well as the relevant research publications, to identify the research gaps, the differences in moderation techniques, and the challenges that social media platforms and the research community at large should tackle. To this end, we study, analyze, and consolidate the fourteen most popular social media content moderation guidelines and practices in the US jurisdiction. We then introduce three taxonomies drawn from this analysis and from a review of over one hundred interdisciplinary research papers on moderation strategies. We identify the differences between the content moderation employed by mainstream social media platforms and that employed by fringe platforms. We also highlight the implications of Section 230, the need for transparency and opacity in content moderation, why platforms should shift from a one-size-fits-all model to a more inclusive model, and lastly, why there is a need for a collaborative human-AI system.
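The closing call for a collaborative human-AI system is often realized in practice as confidence-based routing: a model auto-resolves only high-confidence cases and defers the ambiguous middle band to human moderators. The Python sketch below illustrates that routing pattern; the classifier stub, the thresholds, and all names are illustrative assumptions, not the paper's design.

    from dataclasses import dataclass

    @dataclass
    class Decision:
        action: str   # "remove", "keep", or "human_review"
        score: float  # estimated probability that the post is abusive

    def classify(post: str) -> float:
        """Stub for an abuse classifier returning P(abusive).
        A real system would call a trained model here."""
        return 0.5  # placeholder score

    def moderate(post: str,
                 remove_above: float = 0.95,
                 keep_below: float = 0.05) -> Decision:
        """Automate only high-confidence cases; defer the
        ambiguous middle band to human moderators."""
        score = classify(post)
        if score >= remove_above:
            return Decision("remove", score)
        if score <= keep_below:
            return Decision("keep", score)
        return Decision("human_review", score)

    print(moderate("example post"))
    # -> Decision(action='human_review', score=0.5)

Widening the band between the two thresholds trades a larger human-review workload for fewer automated mistakes.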
Related papers
- The Unappreciated Role of Intent in Algorithmic Moderation of Social Media Content [2.2618341648062477]
This paper examines the role of intent in content moderation systems.
We review state-of-the-art detection models and benchmark training datasets for online abuse to assess their awareness and ability to capture intent.
arXiv Detail & Related papers (2024-05-17T18:05:13Z)
- Recent Advances in Hate Speech Moderation: Multimodality and the Role of Large Models [52.24001776263608]
This comprehensive survey delves into recent strides in hate speech (HS) moderation.
We highlight the burgeoning role of large language models (LLMs) and large multimodal models (LMMs).
We identify existing gaps in research, particularly in the context of underrepresented languages and cultures.
arXiv Detail & Related papers (2024-01-30T03:51:44Z)
- Content Moderation on Social Media in the EU: Insights From the DSA Transparency Database [0.0]
The Digital Services Act (DSA) requires large social media platforms in the EU to provide clear and specific information whenever they restrict access to certain content.
Statements of Reasons (SoRs) are collected in the DSA Transparency Database to ensure transparency and scrutiny of content moderation decisions.
We empirically analyze 156 million SoRs within an observation period of two months to provide an early look at content moderation decisions of social media platforms in the EU.
arXiv Detail & Related papers (2023-12-07T16:56:19Z)
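For context, the DSA Transparency Database publishes daily dumps of these SoRs; a minimal Python sketch of the kind of platform-level aggregation described in the entry above might look like the following. The file name, the column names, and the automation label are assumptions about the dump schema, not code from the paper.

    import pandas as pd

    # Load one daily CSV dump of Statements of Reasons (SoRs).
    # The file name and all column names below are assumptions
    # about the dump schema, used only for illustration.
    sors = pd.read_csv("sor-dump-2023-10-01.csv")

    # Volume of moderation decisions submitted per platform.
    per_platform = sors["platform_name"].value_counts()
    print(per_platform.head(10))

    # Share of decisions labeled as fully automated, per platform.
    automated_share = (
        sors.assign(fully_automated=sors["automated_decision"]
                    .eq("AUTOMATED_DECISION_FULLY_AUTOMATED"))
            .groupby("platform_name")["fully_automated"]
            .mean()
            .sort_values(ascending=False)
    )
    print(automated_share.head(10))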
- Content Moderation and the Formation of Online Communities: A Theoretical Framework [7.900694093691988]
We study the impact of content moderation policies in online communities.
We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities.
arXiv Detail & Related papers (2023-10-16T16:49:44Z)
- User Attitudes to Content Moderation in Web Search [49.1574468325115]
We examine the levels of support for different moderation practices applied to potentially misleading and/or potentially offensive content in web search.
We find that the most supported practice is informing users about potentially misleading or offensive content, and the least supported one is the complete removal of search results.
More conservative users and users with lower levels of trust in web search results are more likely to be against content moderation in web search.
arXiv Detail & Related papers (2023-10-05T10:57:15Z)
- Understanding Divergent Framing of the Supreme Court Controversies: Social Media vs. News Outlets [56.67097829383139]
We focus on the nuanced distinctions in framing of social media and traditional media outlets concerning a series of U.S. Supreme Court rulings.
We observe significant polarization in the news media's treatment of affirmative action and abortion rights, whereas the topic of student loans tends to exhibit a greater degree of consensus.
arXiv Detail & Related papers (2023-09-18T06:40:21Z)
- Aggression and "hate speech" in communication of media users: analysis of control capabilities [50.591267188664666]
The authors studied the possibilities of mutual influence among users in new media.
They found a high level of aggression and hate speech in discussions of an urgent social problem: measures to fight COVID-19.
The results can be useful for developing media content in a modern digital environment.
arXiv Detail & Related papers (2022-08-25T15:53:32Z)
- A Trade-off-centered Framework of Content Moderation [25.068722325387515]
We find that content moderation can be characterized as a series of trade-offs around moderation actions, styles, philosophies, and values.
We argue that trade-offs should be of central importance in investigating and designing content moderation.
arXiv Detail & Related papers (2022-06-07T17:10:49Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for the questionable ones, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
This list is automatically generated from the titles and abstracts of the papers on this site.