Analyzing Toxicity in Deep Conversations: A Reddit Case Study
- URL: http://arxiv.org/abs/2404.07879v1
- Date: Thu, 11 Apr 2024 16:10:44 GMT
- Title: Analyzing Toxicity in Deep Conversations: A Reddit Case Study
- Authors: Vigneshwaran Shankaran, Rajesh Sharma
- Abstract summary: This work employs a tree-based approach to understand how users behave concerning toxicity in public conversation settings.
We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses.
We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online social media has become increasingly popular in recent years due to its ease of access and ability to connect with others. One of social media's main draws is its anonymity, allowing users to share their thoughts and opinions without fear of judgment or retribution. This anonymity has also made social media prone to harmful content, which requires moderation to ensure responsible and productive use. Several methods using artificial intelligence have been employed to detect harmful content. However, conversation and contextual analysis of hate speech are still understudied. Most promising works only analyze a single text at a time rather than the conversation supporting it. In this work, we employ a tree-based approach to understand how users behave concerning toxicity in public conversation settings. To this end, we collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses. We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations. Our analysis also shows that immediate context plays a vital role in shaping a response rather than the original post. We also study the effect of consensual profanity and observe overlapping similarities with non-consensual profanity in terms of user behavior and patterns.
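The tree-based analysis the abstract describes can be illustrated with a minimal sketch. The code below is not the authors' released implementation: the comment records, field names, and toxicity labels are hypothetical stand-ins (in the paper, toxicity scores come from an external classifier). It reconstructs parent-child links in a comment tree and estimates how often a toxic comment is followed by a toxic reply versus a non-toxic one.

```python
# Illustrative sketch (hypothetical data format, not the paper's code):
# estimate P(toxic reply | toxic parent) vs P(toxic reply | non-toxic parent)
# from a flat list of comments with parent links.
from collections import defaultdict

def reply_toxicity_rates(comments):
    """comments: list of dicts with 'id', 'parent_id' (None for top-level),
    and 'toxic' (bool). Returns {True: rate, False: rate} mapping the
    parent's toxicity to the fraction of its replies that are toxic."""
    by_id = {c["id"]: c for c in comments}
    # parent_toxic -> [toxic replies, total replies]
    counts = defaultdict(lambda: [0, 0])
    for c in comments:
        parent = by_id.get(c["parent_id"])
        if parent is None:
            continue  # top-level comment: its parent is the post itself
        bucket = counts[parent["toxic"]]
        bucket[1] += 1
        if c["toxic"]:
            bucket[0] += 1
    return {
        parent_toxic: (toxic / total if total else 0.0)
        for parent_toxic, (toxic, total) in counts.items()
    }

# Toy thread: 'a' is a toxic top-level comment, 'e' a non-toxic one.
thread = [
    {"id": "a", "parent_id": None, "toxic": True},
    {"id": "b", "parent_id": "a", "toxic": True},
    {"id": "c", "parent_id": "a", "toxic": False},
    {"id": "d", "parent_id": "b", "toxic": True},
    {"id": "e", "parent_id": None, "toxic": False},
    {"id": "f", "parent_id": "e", "toxic": False},
]
rates = reply_toxicity_rates(thread)
```

On this toy thread, replies to toxic parents are toxic 2 out of 3 times, while the single reply to a non-toxic parent is non-toxic, mirroring the paper's finding that the immediate parent, rather than the original post, shapes a response's toxicity.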
Related papers
- Analyzing Norm Violations in Live-Stream Chat
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- Classification of social media Toxic comments using Machine learning models
The abstract outlines the problem of toxic comments on social media platforms, where individuals use disrespectful, abusive, and unreasonable language.
This behavior is referred to as anti-social behavior, which occurs during online debates, comments, and fights.
The comments containing explicit language can be classified into various categories, such as toxic, severe toxic, obscene, threat, insult, and identity hate.
To protect users from offensive language, companies have started flagging comments and blocking users.
arXiv Detail & Related papers (2023-04-14T05:40:11Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Understanding the Bystander Effect on Toxic Twitter Conversations
We examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies.
We analyze a random sample of more than 156k tweets belonging to 9k conversations.
arXiv Detail & Related papers (2022-11-19T18:31:39Z)
- Twitter Users' Behavioral Response to Toxic Replies
We studied the impact of toxicity on users' online behavior on Twitter.
We found that toxicity victims show a combination of the following behavioral reactions: avoidance, revenge, countermeasures, and negotiation.
Our results can assist further studies in developing more effective detection and intervention methods for reducing the negative consequences of toxicity on social media.
arXiv Detail & Related papers (2022-10-24T17:36:58Z)
- Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable Topics for the Russian Language
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Online Hate: Behavioural Dynamics and Relationship with Misinformation
We perform hate speech detection on a corpus of more than one million comments on YouTube videos.
Our results show that, coherently with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
arXiv Detail & Related papers (2021-05-28T17:30:51Z)
- Reading In-Between the Lines: An Analysis of Dissenter
We study Dissenter, a browser and web application that provides a conversational overlay for any web page.
In this work, we obtain a history of Dissenter comments, users, and the websites being discussed.
Our corpus consists of approximately 1.68M comments made by 101k users commenting on 588k distinct URLs.
arXiv Detail & Related papers (2020-09-03T16:25:28Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.