User Engagement and the Toxicity of Tweets
- URL: http://arxiv.org/abs/2211.03856v1
- Date: Mon, 7 Nov 2022 20:55:22 GMT
- Title: User Engagement and the Toxicity of Tweets
- Authors: Nazanin Salehabadi and Anne Groggel and Mohit Singhal and Sayak Saha
Roy and Shirin Nilizadeh
- Abstract summary: We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations.
We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to the non-toxic conversations.
We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation.
- Score: 1.1339580074756188
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Twitter is one of the most popular online micro-blogging and social
networking platforms. This platform allows individuals to freely express
opinions and interact with others regardless of geographic barriers. However,
with the good that online platforms offer, also comes the bad. Twitter and
other social networking platforms have created new spaces for incivility. With
the growing interest in the consequences of uncivil behavior online,
understanding how a toxic comment impacts online interactions is imperative. We
analyze a random sample of more than 85,300 Twitter conversations to examine
differences between toxic and non-toxic conversations and the relationship
between toxicity and user engagement. We find that toxic conversations, those
with at least one toxic tweet, are longer but have fewer individual users
contributing to the dialogue compared to the non-toxic conversations. However,
within toxic conversations, toxicity is positively associated with more
individual Twitter users participating in conversations. This suggests that
overall, more visible conversations are more likely to include toxic replies.
Additionally, we examine the sequencing of toxic tweets and its impact on
conversations. Toxic tweets often occur as the main tweet or as the first
reply, and lead to greater overall conversation toxicity. We also find a
relationship between the toxicity of the first reply to a toxic tweet and the
toxicity of the conversation, such that whether the first reply is toxic or
non-toxic sets the stage for the overall toxicity of the conversation,
following the idea that hate can beget hate.
Related papers
- Tracking Patterns in Toxicity and Antisocial Behavior Over User Lifetimes on Large Social Media Platforms [0.2630859234884723]
We analyze toxicity over a 14-year time span on nearly 500 million comments from Reddit and Wikipedia.
We find that the most toxic behavior on Reddit is exhibited in aggregate by the most active users, while the most toxic behavior on Wikipedia is exhibited in aggregate by the least active users.
arXiv Detail & Related papers (2024-07-12T15:45:02Z) - Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z) - Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z) - Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - Understanding the Bystander Effect on Toxic Twitter Conversations [1.1339580074756188]
We examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies.
We analyze a random sample of more than 156k tweets belonging to 9k conversations.
arXiv Detail & Related papers (2022-11-19T18:31:39Z) - Twitter Users' Behavioral Response to Toxic Replies [1.2387676601792899]
We studied the impact of toxicity on users' online behavior on Twitter.
We found that toxicity victims show a combination of the following behavioral reactions: avoidance, revenge, countermeasures, and negotiation.
Our results can assist further studies in developing more effective detection and intervention methods for reducing the negative consequences of toxicity on social media.
arXiv Detail & Related papers (2022-10-24T17:36:58Z) - Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable
Topics for the Russian Language [76.58220021791955]
We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topics.
To objectivise the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
arXiv Detail & Related papers (2022-03-04T15:59:06Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias
Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - The Structure of Toxic Conversations on Twitter [10.983958397797847]
We study the relationship between structure and toxicity in conversations on Twitter.
At the individual level, we find that toxicity is spread across many low to moderately toxic users.
At the group level, we find that toxic conversations tend to have larger, wider, and deeper reply trees.
arXiv Detail & Related papers (2021-05-25T01:18:02Z) - Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media
during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.