Safe Spaces or Toxic Places? Content Moderation and Social Dynamics of Online Eating Disorder Communities
- URL: http://arxiv.org/abs/2412.15721v1
- Date: Fri, 20 Dec 2024 09:42:54 GMT
- Title: Safe Spaces or Toxic Places? Content Moderation and Social Dynamics of Online Eating Disorder Communities
- Authors: Kristina Lerman, Minh Duc Chu, Charles Bickham, Luca Luceri, Emilio Ferrara
- Abstract summary: Social media platforms have become critical spaces for discussing mental health concerns, including eating disorders.
This study addresses this knowledge gap through a comparative analysis of eating disorder discussions across Twitter/X, Reddit, and TikTok.
Our findings reveal that while users across all platforms engage similarly in expressing concerns and seeking support, platforms with weaker moderation (like Twitter/X) enable the formation of toxic echo chambers that amplify pro-anorexia rhetoric.
- Score: 8.950110714892498
- Abstract: Social media platforms have become critical spaces for discussing mental health concerns, including eating disorders. While these platforms can provide valuable support networks, they may also amplify harmful content that glorifies disordered cognition and self-destructive behaviors. While social media platforms have implemented various content moderation strategies, from stringent to laissez-faire approaches, we lack a comprehensive understanding of how these different moderation practices interact with user engagement in online communities around these sensitive mental health topics. This study addresses this knowledge gap through a comparative analysis of eating disorder discussions across Twitter/X, Reddit, and TikTok. Our findings reveal that while users across all platforms engage similarly in expressing concerns and seeking support, platforms with weaker moderation (like Twitter/X) enable the formation of toxic echo chambers that amplify pro-anorexia rhetoric. These results demonstrate how moderation strategies significantly influence the development and impact of online communities, particularly in contexts involving mental health and self-harm.
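The cross-platform comparison described in the abstract can be illustrated with a minimal sketch. The snippet below is not the authors' pipeline: it assumes posts have already been collected per platform and scores them with the open-source Detoxify model as one plausible toxicity measure; the `posts_by_platform` data is placeholder.

```python
# Minimal sketch of a cross-platform toxicity comparison.
# Assumes posts were already collected per platform; uses the
# open-source Detoxify model (pip install detoxify) for scoring.
# This is an illustration, not the paper's actual pipeline.
from statistics import mean

from detoxify import Detoxify

# Hypothetical sample data; a real study would use thousands of posts.
posts_by_platform = {
    "twitter_x": ["example post 1", "example post 2"],
    "reddit": ["example post 3"],
    "tiktok": ["example post 4"],
}

model = Detoxify("original")  # multilabel toxicity classifier

for platform, posts in posts_by_platform.items():
    scores = model.predict(posts)["toxicity"]  # one score per post
    print(f"{platform}: mean toxicity = {mean(scores):.3f}")
```

Aggregating a per-post toxicity score by platform is only one axis of such a comparison; the paper also examines engagement patterns and community structure.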
Related papers
- Characterizing Online Toxicity During the 2022 Mpox Outbreak: A Computational Analysis of Topical and Network Dynamics [0.9831489366502301]
The 2022 Mpox outbreak, initially termed "Monkeypox" but subsequently renamed to mitigate the stigma associated with the name, serves as the backdrop for this analysis of online toxicity.
We collected more than 1.6 million unique tweets and analyzed them from five dimensions, including context, extent, content, speaker, and intent.
We identified five high-level topic categories in the toxic online discourse on Twitter, including disease (46.6%), health policy and healthcare (19.3%), homophobia (23.9%), and politics.
We found that retweets of toxic content were widespread, while influential users rarely engaged with or countered this toxicity through retweets.
arXiv Detail & Related papers (2024-08-21T19:31:01Z) - Who can help me? Reconstructing users' psychological journeys in depression-related social media interactions [0.13194391758295113]
We investigate several popular mental health-related Reddit boards about depression.
We reconstruct users' psychological/linguistic profiles together with their social interactions.
Our approach opens the way to data-informed understandings of psychological coping with mental health issues through social media.
arXiv Detail & Related papers (2023-11-29T14:45:11Z) - Mental Illness Classification on Social Media Texts using Deep Learning and Transfer Learning [55.653944436488786]
According to the World Health Organization (WHO), approximately 450 million people are affected by mental illnesses such as depression, anxiety, bipolar disorder, ADHD, and PTSD.
This study analyzes unstructured user data on the Reddit platform and classifies five common mental illnesses: depression, anxiety, bipolar disorder, ADHD, and PTSD.
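As a rough sketch of the classification task this entry describes, one could start from a TF-IDF + logistic regression baseline on labeled posts; the paper itself uses deep learning and transfer learning, and all data below is placeholder.

```python
# Baseline sketch for multi-class mental illness classification from
# Reddit text. Placeholder data; the paper uses deep/transfer learning,
# so this linear baseline is only for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labeled posts (text, diagnosis label).
texts = [
    "I can't get out of bed and nothing feels worth doing",
    "my heart races before every meeting and I can't stop worrying",
    "some weeks I barely sleep and spend money I don't have",
    "I lose focus mid-sentence and forget tasks constantly",
    "loud noises bring the memories rushing back",
]
labels = ["depression", "anxiety", "bipolar", "adhd", "ptsd"]

clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)
print(clf.predict(["I keep replaying the accident in my head"]))
```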
arXiv Detail & Related papers (2022-07-03T11:33:52Z) - Adherence to Misinformation on Social Media Through Socio-Cognitive and
Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable sources that may reflect dissing/endorsement behavior.
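A back-of-the-envelope version of the engagement comparison in this entry: given engagement counts labeled by news-source reliability, compute each platform's share of questionable-source engagement. The counts below are invented for illustration; real labels typically come from source-reliability lists.

```python
# Sketch: share of engagement going to questionable vs. reliable news
# sources per platform. Counts are invented; real studies derive labels
# from curated source-reliability lists.
engagements = {
    # platform: {"questionable": interactions, "reliable": interactions}
    "twitter": {"questionable": 1200, "reliable": 8800},
    "gab": {"questionable": 5600, "reliable": 4400},
}

for platform, counts in engagements.items():
    total = counts["questionable"] + counts["reliable"]
    share = counts["questionable"] / total
    print(f"{platform}: {share:.1%} of engagement on questionable sources")
```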
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs. Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Assessing the Severity of Health States based on Social Media Posts [62.52087340582502]
We propose a multiview learning framework that models both the textual content as well as contextual-information to assess the severity of the user's health state.
The diverse NLU views demonstrate the framework's effectiveness on both tasks, as well as on individual diseases, in assessing a user's health.
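A minimal sketch of what a multiview model combining a textual view with contextual features might look like, assuming PyTorch; the layer sizes, feature choices, and fusion by concatenation are assumptions, not the paper's architecture.

```python
# Minimal multiview sketch: fuse a text-embedding view with a
# contextual-feature view to score health-state severity.
# Dimensions and fusion strategy are assumptions, not the paper's model.
import torch
import torch.nn as nn

class MultiviewSeverity(nn.Module):
    def __init__(self, text_dim=768, ctx_dim=16, hidden=64, n_classes=4):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)  # textual view
        self.ctx_proj = nn.Linear(ctx_dim, hidden)    # contextual view
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, text_emb, ctx_feats):
        t = torch.relu(self.text_proj(text_emb))
        c = torch.relu(self.ctx_proj(ctx_feats))
        return self.classifier(torch.cat([t, c], dim=-1))

model = MultiviewSeverity()
text_emb = torch.randn(2, 768)   # e.g., sentence-encoder output
ctx_feats = torch.randn(2, 16)   # e.g., posting history, community
print(model(text_emb, ctx_feats).shape)  # (2, 4) severity logits
```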
arXiv Detail & Related papers (2020-09-21T03:45:14Z) - The Effect of Moderation on Online Mental Health Conversations [17.839146423209474]
The presence of a moderator increased user engagement, encouraged users to discuss negative emotions more candidly, and dramatically reduced bad behavior among chat participants.
Our findings suggest that moderation can serve as a valuable tool to improve the efficacy and safety of online mental health conversations.
arXiv Detail & Related papers (2020-05-19T05:40:59Z) - Predicting User Emotional Tone in Mental Disorder Online Communities [2.365702128814616]
We analyze how discussions in Reddit communities related to mental disorders can help improve the health conditions of their users.
Using the emotional tone of users' writing as a proxy for emotional state, we uncover relationships between user interactions and state changes.
We build models based on state-of-the-art text embedding techniques and recurrent neural networks (RNNs) to predict shifts in emotional tone.
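The modeling recipe in this entry (text embeddings fed to an RNN to predict tone shifts) can be sketched as follows; the GRU architecture, dimensions, and binary up/down target are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: predict the direction of a user's next emotional-tone shift
# from a sequence of post embeddings using a GRU. Sizes and the binary
# up/down target are illustrative assumptions.
import torch
import torch.nn as nn

class ToneShiftPredictor(nn.Module):
    def __init__(self, emb_dim=384, hidden=64):
        super().__init__()
        self.rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)  # P(tone rises in next post)

    def forward(self, post_embs):
        _, h = self.rnn(post_embs)        # h: (1, batch, hidden)
        return torch.sigmoid(self.head(h[-1]))

model = ToneShiftPredictor()
history = torch.randn(8, 10, 384)  # 8 users, 10 posts each, embedded
print(model(history).shape)        # (8, 1) shift probabilities
```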
arXiv Detail & Related papers (2020-05-15T11:25:08Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms, like Facebook, may elicit the emergence of echo chambers.
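One common operationalization in this literature, not necessarily the exact definition this paper introduces, checks whether a user's inferred leaning aligns with the average leaning of their network neighbors; a sketch with networkx on a toy graph:

```python
# Sketch of one common echo-chamber operationalization: compare each
# user's inferred leaning with the mean leaning of their neighbors.
# Toy graph and leanings; not necessarily the paper's exact definition.
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d")])
leaning = {"a": -0.8, "b": -0.6, "c": -0.7, "d": 0.5}  # in [-1, 1]

for user in G.nodes:
    neighbors = list(G.neighbors(user))
    nbr_mean = sum(leaning[n] for n in neighbors) / len(neighbors)
    aligned = leaning[user] * nbr_mean > 0  # same side as neighborhood?
    print(f"{user}: leaning={leaning[user]:+.1f}, "
          f"neighborhood={nbr_mean:+.2f}, aligned={aligned}")
```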
arXiv Detail & Related papers (2020-04-20T20:00:27Z)