The Toxicity Phenomenon Across Social Media
- URL: http://arxiv.org/abs/2410.21589v1
- Date: Mon, 28 Oct 2024 22:47:41 GMT
- Title: The Toxicity Phenomenon Across Social Media
- Authors: Rhett Hanscom, Tamara Silbergleit Lehman, Qin Lv, Shivakant Mishra
- Abstract summary: Social media platforms have evolved rapidly in modernity without strong regulation.
One clear obstacle faced by current users is that of toxicity.
We describe the literature surrounding toxicity, formalize a definition of toxicity, and propose a novel cycle of internet extremism.
- Abstract: Social media platforms have evolved rapidly in modernity without strong regulation. One clear obstacle faced by current users is that of toxicity. Toxicity on social media manifests through a number of forms, including harassment, negativity, misinformation or other means of divisiveness. In this paper, we characterize literature surrounding toxicity, formalize a definition of toxicity, propose a novel cycle of internet extremism, list current approaches to toxicity detection, outline future directions to minimize toxicity in future social media endeavors, and identify current gaps in research space. We present a novel perspective of the negative impacts of social media platforms and fill a gap in literature to help improve the future of social media platforms.
Related papers
- Community Shaping in the Digital Age: A Temporal Fusion Framework for Analyzing Discourse Fragmentation in Online Social Networks [45.58331196717468]
This research presents a framework for analyzing the dynamics of online communities in social media platforms.
By combining text classification and dynamic social network analysis, we uncover mechanisms driving community formation and evolution.
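As a hedged sketch of the dynamic-network half of this approach, the snippet below runs community detection over two invented interaction-graph snapshots with networkx; the temporal fusion with text classification described in this entry is not reproduced here, and all node and window names are placeholders.

```python
# Hedged sketch: community detection on successive snapshots of an
# interaction graph (toy edges; the text-classification component of the
# framework described above is not shown).
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

snapshots = {
    "week_1": [("a", "b"), ("b", "c"), ("a", "c"), ("d", "e"), ("e", "f"), ("d", "f")],
    "week_2": [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e"), ("e", "f"), ("f", "a")],
}

for window, edges in snapshots.items():
    G = nx.Graph(edges)
    communities = greedy_modularity_communities(G)
    print(window, [sorted(c) for c in communities])

# Comparing the community lists across windows indicates whether groups are
# merging, splitting, or staying stable (i.e. discourse fragmentation).
```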
arXiv Detail & Related papers (2024-09-18T03:03:02Z)
- Characterizing Online Toxicity During the 2022 Mpox Outbreak: A Computational Analysis of Topical and Network Dynamics [0.9831489366502301]
The 2022 Mpox outbreak, initially termed "Monkeypox" but subsequently renamed to mitigate associated stigmas and societal concerns, serves as a poignant backdrop to this issue.
We collected more than 1.6 million unique tweets and analyzed them from five dimensions, including context, extent, content, speaker, and intent.
We identified five high-level topic categories in the toxic online discourse on Twitter, including disease (46.6%), health policy and healthcare (19.3%), homophobia (23.9%), and politics.
We found that retweets of toxic content were widespread, while influential users rarely engaged with or countered this toxicity through retweets.
arXiv Detail & Related papers (2024-08-21T19:31:01Z)
- ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation [43.356758428820626]
We introduce ToxicChat, a novel benchmark based on real user queries from an open-source chatbot.
Our systematic evaluation of models trained on existing toxicity datasets shows their shortcomings when applied to the unique domain covered by ToxicChat (a minimal evaluation sketch follows this entry).
In the future, ToxicChat can be a valuable resource to drive further advancements toward building a safe and healthy environment for user-AI interactions.
arXiv Detail & Related papers (2023-10-26T13:35:41Z)
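As a rough illustration of the kind of domain-shift evaluation described in the ToxicChat entry above, the sketch below trains a generic scikit-learn toxicity classifier on a placeholder source corpus and scores it on placeholder chat-style queries. All texts, labels, and model choices here are invented for illustration and are not drawn from ToxicChat itself.

```python
# Hedged sketch: measuring how a toxicity classifier trained on one corpus
# behaves on a different domain (e.g., user-AI chat queries). The tiny
# in-line datasets below are placeholders, not the actual ToxicChat data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report
from sklearn.pipeline import make_pipeline

# Placeholder "existing toxicity dataset" (source domain).
train_texts = [
    "you are an idiot", "have a great day", "nobody likes you",
    "thanks for the help", "go away loser", "this is wonderful news",
]
train_labels = [1, 0, 1, 0, 1, 0]  # 1 = toxic, 0 = non-toxic

# Placeholder user-AI conversation queries (target domain).
bench_texts = [
    "write an insult about my coworker",
    "summarize this article for me",
    "explain why my neighbor is a terrible person",
    "help me plan a birthday party",
]
bench_labels = [1, 0, 1, 0]

model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(train_texts, train_labels)

# In-domain performance is usually optimistic; the interesting number is
# how far it drops on the out-of-domain benchmark.
print(classification_report(bench_labels, model.predict(bench_texts), zero_division=0))
```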
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics (a toy propagation sketch follows this entry).
Our method surpasses existing state-of-the-art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
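The following is a minimal, hedged sketch of graph-based propagation over a social graph, in the spirit of the belief-centered propagation mentioned in the entry above. The graph, seed beliefs, and update rule are invented for illustration and do not reproduce the authors' SocialSense framework.

```python
# Hedged sketch of graph-based belief propagation over a social network
# (invented graph and scores; not the SocialSense implementation).
import networkx as nx

G = nx.Graph()
G.add_edges_from([
    ("alice", "bob"), ("bob", "carol"), ("carol", "dave"),
    ("dave", "erin"), ("erin", "alice"), ("bob", "erin"),
])

# Seed beliefs in [-1, 1] for users whose stance we observed; others start at 0.
beliefs = {node: 0.0 for node in G}
beliefs.update({"alice": 1.0, "dave": -1.0})
seeds = {"alice", "dave"}

# Simple iterative propagation: each unobserved user moves toward the
# average belief of their neighbors.
for _ in range(20):
    updated = {}
    for node in G:
        if node in seeds:
            updated[node] = beliefs[node]  # keep observed stances fixed
        else:
            nbrs = list(G.neighbors(node))
            updated[node] = sum(beliefs[n] for n in nbrs) / len(nbrs)
    beliefs = updated

for node, score in sorted(beliefs.items()):
    print(f"{node}: {score:+.2f}")
```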
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion (a simplified leetspeak example follows this entry).
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
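As a simplified illustration of word camouflage, the sketch below twists keywords with leetspeak-style substitutions and then detects them by normalizing text before matching against a placeholder blocklist. The substitution table, blocklist, and helper functions are assumptions for this example, not the article's multilingual tooling.

```python
# Hedged sketch of word camouflage: simulating simple leetspeak-style
# keyword twisting and detecting it by normalising text before matching.
import re

SUBSTITUTIONS = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$"}
REVERSE = {v: k for k, v in SUBSTITUTIONS.items()}

def camouflage(word: str) -> str:
    """Twist a keyword so a naive blocklist no longer matches it."""
    return "".join(SUBSTITUTIONS.get(ch, ch) for ch in word.lower())

def normalise(text: str) -> str:
    """Undo common substitutions and strip separators inserted mid-word."""
    text = "".join(REVERSE.get(ch, ch) for ch in text.lower())
    return re.sub(r"[\.\-_*]+", "", text)

BLOCKLIST = {"scam", "hate"}  # placeholder moderated keywords

def flags(text: str) -> set[str]:
    """Return blocklisted keywords recovered from camouflaged text."""
    tokens = re.findall(r"[a-z]+", normalise(text))
    return {tok for tok in tokens if tok in BLOCKLIST}

evasive = f"this is a total {camouflage('scam')}, spread the h.a.t.e"
print(camouflage("scam"))   # -> $c4m
print(flags(evasive))       # -> {'scam', 'hate'} (set order may vary)
```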
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Analysis of Online Toxicity Detection Using Machine Learning Approaches [6.548580592686076]
Social media and the internet have become an integral part of how people spread and consume information.
Almost half of the population is using social media to express their views and opinions.
Online hate speech is one of the drawbacks of social media today and needs to be controlled (a small machine-learning baseline is sketched after this entry).
arXiv Detail & Related papers (2021-04-23T04:29:13Z)
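A minimal sketch of comparing classical machine-learning approaches to toxicity detection, assuming scikit-learn and a tiny invented corpus; the texts, labels, and candidate models below are placeholders rather than the datasets or models analyzed in the paper.

```python
# Hedged sketch comparing a few classical ML approaches to toxicity
# detection on a toy corpus (placeholder data, not a real benchmark).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = [
    "you are worthless", "I hope you fail", "nobody wants you here",
    "what a stupid take", "get lost, idiot", "you people are trash",
    "great point, thanks", "congrats on the launch", "lovely photo",
    "hope you feel better", "interesting thread", "well explained",
]
labels = [1] * 6 + [0] * 6  # 1 = toxic, 0 = non-toxic

candidates = {
    "naive_bayes": MultinomialNB(),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "linear_svm": LinearSVC(),
}

# Cross-validated F1 gives a rough side-by-side comparison of the approaches.
for name, clf in candidates.items():
    pipe = make_pipeline(TfidfVectorizer(), clf)
    scores = cross_val_score(pipe, texts, labels, cv=3, scoring="f1")
    print(f"{name}: mean F1 = {scores.mean():.2f}")
```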
- Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns [0.0]
Communication on social media platforms is increasingly widespread across societies.
Fake news, hate speech and radicalizing elements are part of this modern form of communication.
A basic understanding of these mechanisms and communication patterns could help to counteract negative forms of communication.
arXiv Detail & Related papers (2020-11-26T14:52:01Z)
- Exposure to Social Engagement Metrics Increases Vulnerability to Misinformation [12.737240668157424]
We find that exposure to social engagement signals increases the vulnerability of users to misinformation.
To reduce the spread of misinformation, we call for technology platforms to rethink the display of social engagement metrics.
arXiv Detail & Related papers (2020-05-10T14:55:50Z)
- Leveraging Multi-Source Weak Social Supervision for Early Detection of Fake News [67.53424807783414]
Social media has greatly enabled people to participate in online activities at an unprecedented rate.
This unrestricted access also exacerbates the spread of misinformation and fake news online, which can cause confusion and chaos unless it is detected early and mitigated.
We jointly leverage the limited amount of clean data along with weak signals from social engagements to train deep neural networks in a meta-learning framework that estimates the quality of different weak instances (a much-simplified weighting sketch follows this entry).
Experiments on real-world datasets demonstrate that the proposed framework outperforms state-of-the-art baselines for early detection of fake news without using any user engagements at prediction time.
arXiv Detail & Related papers (2020-04-03T18:26:33Z)
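As a much-simplified stand-in for the meta-learning framework described above, the sketch below weights weakly labeled posts by the agreement between two hypothetical weak sources before fitting a simple scikit-learn classifier. The posts, weak sources, and weighting rule are invented for illustration and are not the paper's method.

```python
# Hedged, much-simplified stand-in for instance weighting under weak
# supervision: each weak source votes on a label, and instances where the
# sources agree get higher weight (the paper itself uses a meta-learning
# framework with deep networks; this is only an illustration).
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

posts = [
    "BREAKING: miracle cure hidden by doctors!!!",
    "City council approves new budget for road repairs",
    "Shocking!! celebrity secretly controls the weather",
    "Local library extends weekend opening hours",
]
# Votes from two hypothetical weak sources (1 = fake, 0 = real), e.g. noisy
# signals derived from user reports and sharing patterns.
source_a = np.array([1, 0, 1, 1])
source_b = np.array([1, 0, 1, 0])

weak_labels = (source_a + source_b >= 1).astype(int)  # 1 if any source flags the post
weights = np.where(source_a == source_b, 1.0, 0.3)    # down-weight disagreement

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
# sample_weight is routed to the final estimator via its pipeline step name.
model.fit(posts, weak_labels, logisticregression__sample_weight=weights)
print(model.predict(["Unbelievable trick doctors don't want you to know"]))
```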
This list is automatically generated from the titles and abstracts of the papers on this site.