ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter
- URL: http://arxiv.org/abs/2008.06465v1
- Date: Fri, 14 Aug 2020 17:02:55 GMT
- Title: ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter
- Authors: Thilini Wijesiriwardene, Hale Inan, Ugur Kursuncu, Manas Gaur, Valerie
L. Shalin, Krishnaprasad Thirunarayan, Amit Sheth, I. Budak Arpinar
- Abstract summary: This paper provides a multimodal dataset of toxic social media interactions between confirmed high school students, called ALONE (AdoLescents ON twittEr).
Nearly 66% of internet users have observed online harassment, and 41% claim personal experience, with 18% facing severe forms of online harassment.
Our observations show that individual tweets do not provide sufficient evidence for toxic behavior, and meaningful use of context in interactions can enable highlighting or exonerating tweets with purported toxicity.
- Score: 5.723363140737726
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The convenience of social media has also enabled its misuse, potentially
resulting in toxic behavior. Nearly 66% of internet users have observed online
harassment, and 41% claim personal experience, with 18% facing severe forms of
online harassment. This toxic communication has a significant impact on the
well-being of young individuals, affecting mental health and, in some cases,
resulting in suicide. These communications exhibit complex linguistic and
contextual characteristics, making recognition of such narratives challenging.
In this paper, we provide a multimodal dataset of toxic social media
interactions between confirmed high school students, called ALONE (AdoLescents
ON twittEr), along with descriptive explanations. Each instance of interaction
includes tweets, images, emoji, and related metadata. Our observations show that
individual tweets do not provide sufficient evidence for toxic behavior, and
meaningful use of context in interactions can enable highlighting or
exonerating tweets with purported toxicity.
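The abstract frames ALONE at the level of interactions rather than individual tweets, with each instance bundling tweets, images, emoji, and metadata. The sketch below is a minimal, hypothetical illustration of such a record in Python; the class and field names (Tweet, Interaction, toxicity_label, judge_with_context) are assumptions made for illustration and do not reflect the published ALONE schema.

```python
# A minimal, hypothetical sketch of an ALONE-style interaction record.
# Field names and structure are assumptions for illustration only; they are
# not the published ALONE schema.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class Tweet:
    tweet_id: str
    author_id: str
    text: str
    emoji: List[str] = field(default_factory=list)        # emoji used in the tweet
    image_urls: List[str] = field(default_factory=list)   # attached media, if any
    created_at: Optional[str] = None                       # timestamp metadata


@dataclass
class Interaction:
    """One instance: an exchange of tweets between a pair of confirmed students."""
    interaction_id: str
    participants: List[str]
    tweets: List[Tweet]
    toxicity_label: Optional[str] = None   # judged over the whole exchange, not per tweet


def judge_with_context(interaction: Interaction, flagged_tweet_id: str) -> Optional[str]:
    """Mirrors the paper's observation: a single flagged tweet is weighed against
    the surrounding exchange, which can highlight or exonerate purported toxicity."""
    if not any(t.tweet_id == flagged_tweet_id for t in interaction.tweets):
        return None
    # Placeholder decision: defer to the interaction-level label rather than
    # the flag on the individual tweet.
    return interaction.toxicity_label
```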
Related papers
- Characterizing Online Toxicity During the 2022 Mpox Outbreak: A Computational Analysis of Topical and Network Dynamics [0.9831489366502301]
The 2022 Mpox outbreak, initially termed "Monkeypox" but subsequently renamed to mitigate associated stigmas and societal concerns, serves as a poignant backdrop to this issue.
We collected more than 1.6 million unique tweets and analyzed them from five dimensions, including context, extent, content, speaker, and intent.
We identified five high-level topic categories in the toxic online discourse on Twitter, including disease (46.6%), health policy and healthcare (19.3%), homophobia (23.9%), and politics.
We found that retweets of toxic content were widespread, while influential users rarely engaged with or countered this toxicity through retweets.
arXiv Detail & Related papers (2024-08-21T19:31:01Z) - CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a
Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z) - Understanding the Bystander Effect on Toxic Twitter Conversations [1.1339580074756188]
We examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies.
We analyze a random sample of more than 156k tweets belonging to 9k conversations.
arXiv Detail & Related papers (2022-11-19T18:31:39Z) - User Engagement and the Toxicity of Tweets [1.1339580074756188]
We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations.
We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to the non-toxic conversations.
We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation.
arXiv Detail & Related papers (2022-11-07T20:55:22Z) - Twitter Users' Behavioral Response to Toxic Replies [1.2387676601792899]
We studied the impact of toxicity on users' online behavior on Twitter.
We found that toxicity victims show a combination of the following behavioral reactions: avoidance, revenge, countermeasures, and negotiation.
Our results can assist further studies in developing more effective detection and intervention methods for reducing the negative consequences of toxicity on social media.
arXiv Detail & Related papers (2022-10-24T17:36:58Z) - A deep dive into the consistently toxic 1% of Twitter [9.669275987983447]
This study spans 14 years of Twitter activity, covering 122K profiles and more than 293M tweets.
We selected the most extreme profiles in terms of consistency of toxic content and examined their tweet texts, and the domains, hashtags, and URLs they shared.
We found that these selected profiles keep to a narrow theme with lower diversity in hashtags, URLs, and domains, they are thematically similar to each other, and have a high likelihood of bot-like behavior.
arXiv Detail & Related papers (2022-02-16T04:21:48Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias
Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in a tendency for users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z) - #MeToo on Campus: Studying College Sexual Assault at Scale Using Data
Reported on Social Media [71.74529365205053]
We analyze the influence of the #MeToo trend on a pool of college followers.
The results show that the majority of topics embedded in those #MeToo tweets detail sexual harassment stories.
There exists a significant correlation between the prevalence of this trend and official reports in several major geographical regions.
arXiv Detail & Related papers (2020-01-16T18:05:46Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.