Critical Challenges in Content Moderation for People Who Use Drugs (PWUD): Insights into Online Harm Reduction Practices from Moderators
- URL: http://arxiv.org/abs/2508.02868v1
- Date: Mon, 04 Aug 2025 19:54:44 GMT
- Title: Critical Challenges in Content Moderation for People Who Use Drugs (PWUD): Insights into Online Harm Reduction Practices from Moderators
- Authors: Kaixuan Wang, Loraine Clarke, Carl-Cyril J Dreue, Guancheng Zhou, Jason T. Jacques
- Abstract summary: We argue that this work constitutes a distinct form of public health intervention characterised by three moderation challenges.
- Score: 10.096128529206377
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Online communities serve as essential support channels for People Who Use Drugs (PWUD), providing access to peer support and harm reduction information. The moderation of these communities involves consequential decisions affecting member safety, yet existing sociotechnical systems provide insufficient support for moderators. Through interviews with experienced moderators from PWUD forums on Reddit, we analyse the unique nature of this work. We argue that this work constitutes a distinct form of public health intervention characterised by three moderation challenges: the need for specialised, expert risk assessment; time-critical crisis response; and the navigation of a structural conflict between platform policies and community safety goals. We demonstrate how current moderation systems are insufficient in supporting PWUD communities. For example, policies minimising platforms' legal exposure to illicit activities can inadvertently push moderators to implement restrictive rules to protect community's existence, which can limit such a vulnerable group's ability to share potentially life-saving resources online. We conclude by identifying two necessary shifts in sociotechnical design to support moderators' work: first, moving to automated tools that support human sensemaking in contexts with competing interests; and second, shifting from systems that require moderators to perform low-level rule programming to those that enable high-level, example-based instruction. Further, we highlight how the design of sociotechnical systems in online spaces could impact harm reduction efforts aimed at improving health outcomes for PWUD communities.
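The abstract's second proposed design shift, from low-level rule programming to high-level, example-based instruction, can be illustrated with a minimal sketch: moderators supply labelled example posts, and new posts are classified by similarity to them. Everything below (function names, the word-overlap similarity stand-in, the sample labels) is an illustrative assumption, not the paper's implementation.

```python
# Illustrative sketch only: example-based moderation, where moderators
# provide labelled examples instead of writing explicit rules. A simple
# word-overlap (Jaccard) measure stands in for a real similarity model.

def jaccard(a: str, b: str) -> float:
    """Word-set overlap between two texts, in [0, 1]."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / len(wa | wb) if wa | wb else 0.0

def classify(post: str, examples: list[tuple[str, str]]) -> str:
    """Label a post with the label of its most similar example."""
    label, _score = max(
        ((lbl, jaccard(post, text)) for text, lbl in examples),
        key=lambda pair: pair[1],
    )
    return label

# Hypothetical moderator-supplied examples (labels are invented).
examples = [
    ("where can i buy fentanyl", "remove"),
    ("test your supply before using, start low and go slow", "keep"),
]
```

A tool in this style would let moderators govern by curating examples rather than maintaining brittle keyword rules, which is the kind of high-level instruction the abstract calls for.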
Related papers
- Community Moderation and the New Epistemology of Fact Checking on Social Media [124.26693978503339]
Social media platforms have traditionally relied on independent fact-checking organizations to identify and flag misleading content.
X (formerly Twitter) and Meta have shifted towards community-driven content moderation by launching their own versions of crowd-sourced fact-checking.
We examine the current approaches to misinformation detection across major platforms, explore the emerging role of community-driven moderation, and critically evaluate both the promises and challenges of crowd-checking at scale.
arXiv Detail & Related papers (2025-05-26T14:50:18Z)
- 2FA: Navigating the Challenges and Solutions for Inclusive Access [55.2480439325792]
Two-Factor Authentication (2FA) has emerged as a critical solution to protect online activities.
This paper examines the intricacies of deploying 2FA in a way that is secure and accessible to all users.
An analysis was conducted to examine the implementation and availability of various 2FA methods across popular online platforms.
arXiv Detail & Related papers (2025-02-17T12:23:53Z)
- Demarked: A Strategy for Enhanced Abusive Speech Moderation through Counterspeech, Detoxification, and Message Management [71.99446449877038]
We propose a more comprehensive approach, called Demarcation, which scores abusive speech along four aspects: (i) a severity scale; (ii) the presence of a target; (iii) a context scale; and (iv) a legal scale.
Our work aims to inform future strategies for effectively addressing abusive speech online.
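The four-aspect scoring described above could be represented as a small record type; the field names, the 0-1 ranges, and the aggregation rule below are illustrative assumptions rather than the paper's actual scheme.

```python
from dataclasses import dataclass

@dataclass
class DemarcationScore:
    # Hypothetical fields for the four aspects; numeric scales assumed 0-1.
    severity: float   # (i) how severe the abusive speech is
    has_target: bool  # (ii) whether a specific target is present
    context: float    # (iii) how much the abuse depends on context
    legal: float      # (iv) potential legal implications

    def overall(self) -> float:
        """One possible aggregation (an assumption): the mean of the
        numeric scales, with a fixed bump when a target is present."""
        base = (self.severity + self.context + self.legal) / 3
        return min(1.0, base + (0.2 if self.has_target else 0.0))
```

Keeping the four aspects as separate fields, rather than collapsing them immediately, would let downstream tooling choose different responses (counterspeech, detoxification, removal) per aspect.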
arXiv Detail & Related papers (2024-06-27T21:45:33Z)
- Content Moderation and the Formation of Online Communities: A Theoretical Framework [7.900694093691988]
We study the impact of content moderation policies in online communities.
We first characterize the effectiveness of a natural class of moderation policies for creating and sustaining stable communities.
arXiv Detail & Related papers (2023-10-16T16:49:44Z)
- Nip it in the Bud: Moderation Strategies in Open Source Software Projects and the Role of Bots [17.02726827353919]
This study examines the various structures and norms that support community moderation in open source software projects.
We interviewed 14 practitioners to uncover existing moderation practices and ways that automation can provide assistance.
Our main contributions include a characterization of moderated content in OSS projects, moderation techniques, as well as perceptions of and recommendations for improving the automation of moderation tasks.
arXiv Detail & Related papers (2023-08-14T19:42:51Z)
- Proactive Moderation of Online Discussions: Existing Practices and the Potential for Algorithmic Support [12.515485963557426]
The reactive paradigm of taking action against already-posted antisocial content is currently the most common form of moderation.
We explore how automation could assist with this existing proactive moderation workflow by building a prototype tool.
arXiv Detail & Related papers (2022-11-29T19:00:02Z)
- Having your Privacy Cake and Eating it Too: Platform-supported Auditing of Social Media Algorithms for Public Interest [70.02478301291264]
Social media platforms curate access to information and opportunities, and so play a critical role in shaping public discourse.
Prior studies have used black-box methods to show that these algorithms can lead to biased or discriminatory outcomes.
We propose a new method for platform-supported auditing that can meet the goals of the proposed legislation.
arXiv Detail & Related papers (2022-07-18T17:32:35Z)
- Governing for Free: Rule Process Effects on Reddit Moderator Motivations [0.0]
The over 2.8 million "subreddit" communities on Reddit are governed by hundreds of thousands of volunteer moderators, many of whom have no training or prior experience in a governing role.
While moderators often devote daily time to community maintenance and cope with the emotional effects of hate comments or disturbing content, Reddit provides no compensation for this position.
We investigate how the processes through which subreddit moderators generate community rules increase moderators' motivation by meeting social-psychological needs: Procedural Justice, Self-Determination, and Self-Other Merging.
arXiv Detail & Related papers (2022-06-11T23:23:38Z)
- Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z)
- Moderation Challenges in Voice-based Online Communities on Discord [24.417653462255448]
Findings suggest that the affordances of voice-based online communities change what it means to moderate content and interactions.
Voice introduces new ways to break rules that moderators of text-based communities find unfamiliar, such as disruptive noise and voice raiding.
Moderation strategies for voice are limited and often based on hearsay and first impressions, resulting in problems ranging from unsuccessful moderation to false accusations.
arXiv Detail & Related papers (2021-01-13T18:43:22Z)
- CASS: Towards Building a Social-Support Chatbot for Online Health Community [67.45813419121603]
The CASS architecture is based on advanced neural network algorithms.
It can handle new inputs from users and generate a variety of responses to them.
In a follow-up field experiment, CASS proved useful in supporting individual members who seek emotional support.
arXiv Detail & Related papers (2021-01-04T05:52:03Z)
- Quantifying the Vulnerabilities of the Online Public Square to Adversarial Manipulation Tactics [43.98568073610101]
We use a social media model to quantify the impacts of several adversarial manipulation tactics on the quality of content.
We find that the presence of influential accounts, a hallmark of social media, exacerbates the vulnerabilities of online communities to manipulation.
These insights suggest countermeasures that platforms could employ to increase the resilience of social media users to manipulation.
arXiv Detail & Related papers (2019-07-13T21:12:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.