Twitter Users' Behavioral Response to Toxic Replies
- URL: http://arxiv.org/abs/2210.13420v1
- Date: Mon, 24 Oct 2022 17:36:58 GMT
- Title: Twitter Users' Behavioral Response to Toxic Replies
- Authors: Ana Aleksandric, Sayak Saha Roy, Shirin Nilizadeh
- Abstract summary: We studied the impact of toxicity on users' online behavior on Twitter.
We found that toxicity victims show a combination of the following behavioral reactions: avoidance, revenge, countermeasures, and negotiation.
Our results can assist further studies in developing more effective detection and intervention methods for reducing the negative consequences of toxicity on social media.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Online toxic attacks, such as harassment, trolling, and hate speech, have been
linked to an increase in offline violence and negative psychological effects on
victims. In this paper, we studied the impact of toxicity on users' online
behavior. We collected a sample of 79.8k Twitter conversations and, in a
nine-week longitudinal study, tracked and compared the behavioral reactions of
authors who were toxicity victims with those of authors who were not. We
found that toxicity victims show a combination of the following behavioral
reactions: avoidance, revenge, countermeasures, and negotiation. We performed
statistical tests to understand the significance of the contribution of toxic
replies toward user behaviors while considering confounding factors, such as
the structure of conversations and the user accounts' visibility,
identifiability, and activity level. Interestingly, we found that compared to
other random authors, victims are more likely to engage in conversations, reply
in a toxic way, and unfollow toxicity instigators. Even if the toxicity is
directed at other participants, the root authors are more likely to engage in
the conversations and reply in a toxic way. However, victims who have verified
accounts are less likely to participate in conversations or respond by posting
toxic comments. In addition, replies are more likely to be removed in
conversations with a larger percentage of toxic nested replies and toxic
replies directed at other users. Our results can assist further studies in
developing more effective detection and intervention methods for reducing the
negative consequences of toxicity on social media.
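The comparison of victims against random authors described above can be illustrated with a simplified sketch: a two-proportion z-test on re-engagement rates. The counts, function name, and choice of test here are illustrative assumptions only; the paper's actual analysis uses richer statistical models that control for confounders such as conversation structure and account visibility.

```python
# Hypothetical sketch (not the paper's code): a two-proportion z-test
# comparing how often toxicity victims vs. random (control) authors
# re-engage in a conversation. All counts below are invented.
from math import sqrt, erf

def two_proportion_ztest(success_a, n_a, success_b, n_b):
    """Return (z, two-sided p-value) for H0: p_a == p_b."""
    p_a, p_b = success_a / n_a, success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)  # pooled proportion under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    # Two-sided p-value via the standard normal CDF (expressed with erf)
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Invented counts: 420 of 1,000 victims replied again vs. 300 of 1,000 controls.
z, p = two_proportion_ztest(420, 1000, 300, 1000)
print(f"z = {z:.2f}, p = {p:.2e}")
```

A z-test of this kind only compares raw rates; to account for confounders as the paper does, one would instead fit a regression with those factors as covariates.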
Related papers
- Analyzing Toxicity in Deep Conversations: A Reddit Case Study [0.0]
This work employs a tree-based approach to understand how users behave concerning toxicity in public conversation settings.
We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses.
We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations.
arXiv Detail & Related papers (2024-04-11T16:10:44Z) - Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z) - SQuARe: A Large-Scale Dataset of Sensitive Questions and Acceptable
Responses Created Through Human-Machine Collaboration [75.62448812759968]
SQuARe is a large-scale Korean dataset of 49k sensitive questions with 42k acceptable and 46k non-acceptable responses.
The dataset was constructed leveraging HyperCLOVA in a human-in-the-loop manner based on real news headlines.
arXiv Detail & Related papers (2023-05-28T11:51:20Z) - Understanding the Bystander Effect on Toxic Twitter Conversations [1.1339580074756188]
We examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies.
We analyze a random sample of more than 156k tweets belonging to 9k conversations.
arXiv Detail & Related papers (2022-11-19T18:31:39Z) - User Engagement and the Toxicity of Tweets [1.1339580074756188]
We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations.
We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to the non-toxic conversations.
We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation.
arXiv Detail & Related papers (2022-11-07T20:55:22Z) - Understanding Longitudinal Behaviors of Toxic Accounts on Reddit [7.090204155621651]
We present a study of 929K accounts that posted toxic comments on Reddit over an 18-month period.
These accounts posted over 14 million toxic comments that encompass insults, identity attacks, threats of violence, and sexual harassment.
Our analysis forms the foundation for new time-based and graph-based features that can improve automated detection of toxic behavior online.
arXiv Detail & Related papers (2022-09-06T14:35:44Z) - Toxicity Detection can be Sensitive to the Conversational Context [64.28043776806213]
We construct and publicly release a dataset of 10,000 posts with two kinds of toxicity labels.
We introduce a new task, context sensitivity estimation, which aims to identify posts whose perceived toxicity changes if the context is also considered.
arXiv Detail & Related papers (2021-11-19T13:57:26Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias
Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - Causal Understanding of Fake News Dissemination on Social Media [50.4854427067898]
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language
Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
arXiv Detail & Related papers (2020-09-24T03:17:19Z) - ALONE: A Dataset for Toxic Behavior among Adolescents on Twitter [5.723363140737726]
This paper provides a dataset of toxic social media interactions between confirmed high school students, called ALONE (AdoLescents ON twittEr).
Nearly 66% of internet users have observed online harassment, 41% report personal experience, and 18% have faced severe forms of it.
Our observations show that individual tweets do not provide sufficient evidence for toxic behavior, and meaningful use of context in interactions can enable highlighting or exonerating tweets with purported toxicity.
arXiv Detail & Related papers (2020-08-14T17:02:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.