The Structure of Toxic Conversations on Twitter
- URL: http://arxiv.org/abs/2105.11596v1
- Date: Tue, 25 May 2021 01:18:02 GMT
- Title: The Structure of Toxic Conversations on Twitter
- Authors: Martin Saveski, Brandon Roy, Deb Roy
- Abstract summary: We study the relationship between structure and toxicity in conversations on Twitter.
At the individual level, we find that toxicity is spread across many low to moderately toxic users.
At the group level, we find that toxic conversations tend to have larger, wider, and deeper reply trees.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Social media platforms promise to enable rich and vibrant conversations
online; however, their potential is often hindered by antisocial behaviors. In
this paper, we study the relationship between structure and toxicity in
conversations on Twitter. We collect 1.18M conversations (58.5M tweets, 4.4M
users) prompted by tweets that are posted by or mention major news outlets over
one year and candidates who ran in the 2018 US midterm elections over four
months. We analyze the conversations at the individual, dyad, and group level.
At the individual level, we find that toxicity is spread across many low to
moderately toxic users. At the dyad level, we observe that toxic replies are
more likely to come from users who do not have any social connection nor share
many common friends with the poster. At the group level, we find that toxic
conversations tend to have larger, wider, and deeper reply trees, but sparser
follow graphs. To test the predictive power of the conversational structure, we
consider two prediction tasks. In the first prediction task, we demonstrate
that the structural features can be used to predict whether the conversation
will become toxic as early as the first ten replies. In the second prediction
task, we show that the structural characteristics of the conversation are also
predictive of whether the next reply posted by a specific user will be toxic or
not. We observe that the structural and linguistic characteristics of the
conversations are complementary in both prediction tasks. Our findings inform
the design of healthier social media platforms and demonstrate that models
based on the structural characteristics of conversations can be used to detect
early signs of toxicity and potentially steer conversations in a less toxic
direction.
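As a minimal illustration of the structural features discussed above, the size, width, and depth of a reply tree can be computed directly from each tweet's reply-to pointer. This is a hypothetical sketch, not the authors' code; the function and input format are assumptions for illustration only.

```python
from collections import defaultdict

def reply_tree_features(parents):
    """Compute size, width, and depth of a conversation's reply tree.

    `parents` maps each tweet id to the id of the tweet it replies to;
    the root tweet (the conversation prompt) maps to None.
    """
    # Depth of each node: the root is at depth 0.
    depth = {}

    def node_depth(node):
        if node not in depth:
            parent = parents[node]
            depth[node] = 0 if parent is None else node_depth(parent) + 1
        return depth[node]

    for node in parents:
        node_depth(node)

    # Width: the largest number of tweets at any single depth level.
    per_level = defaultdict(int)
    for d in depth.values():
        per_level[d] += 1

    return {
        "size": len(parents),          # total tweets in the conversation
        "depth": max(depth.values()),  # length of the longest reply chain
        "width": max(per_level.values()),
    }

# A toy conversation: two direct replies to the root, one nested reply.
tree = {"root": None, "a": "root", "b": "root", "c": "a"}
print(reply_tree_features(tree))  # {'size': 4, 'depth': 2, 'width': 2}
```

Features like these (alongside follow-graph density) could then feed a standard classifier for the early-toxicity prediction task the paper describes.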
Related papers
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be twice as likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Decoding the Silent Majority: Inducing Belief Augmented Social Graph with Large Language Model for Response Forecasting [74.68371461260946]
SocialSense is a framework that induces a belief-centered graph on top of an existing social network, along with graph-based propagation to capture social dynamics.
Our method surpasses the existing state of the art in experimental evaluations for both zero-shot and supervised settings.
arXiv Detail & Related papers (2023-10-20T06:17:02Z)
- NewsDialogues: Towards Proactive News Grounded Conversation [72.10055780635625]
We propose a novel task, Proactive News Grounded Conversation, in which a dialogue system can proactively lead the conversation based on some key topics of the news.
To further develop this novel task, we collect a human-to-human Chinese dialogue dataset, NewsDialogues, which includes 1K conversations with a total of 14.6K utterances.
arXiv Detail & Related papers (2023-08-12T08:33:42Z)
- Understanding Multi-Turn Toxic Behaviors in Open-Domain Chatbots [8.763670548363443]
A new attack, toxicbot, is developed to generate toxic responses in a multi-turn conversation.
toxicbot can be used by both industry and researchers to develop methods for detecting and mitigating toxic responses in conversational dialogue.
arXiv Detail & Related papers (2023-07-14T03:58:42Z)
- Understanding the Bystander Effect on Toxic Twitter Conversations [1.1339580074756188]
We examine whether the toxicity of the first direct reply to a toxic tweet in conversations establishes the group norms for subsequent replies.
We analyze a random sample of more than 156k tweets belonging to 9k conversations.
arXiv Detail & Related papers (2022-11-19T18:31:39Z)
- User Engagement and the Toxicity of Tweets [1.1339580074756188]
We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations.
We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue compared to the non-toxic conversations.
We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation.
arXiv Detail & Related papers (2022-11-07T20:55:22Z)
- Revisiting Contextual Toxicity Detection in Conversations [28.465019968374413]
We show that toxicity labelling by humans is in general influenced by the conversational structure, polarity and topic of the context.
We propose to bring these findings into computational detection models by introducing neural architectures for contextual toxicity detection.
We have also demonstrated that such models can benefit from synthetic data, especially in the social media domain.
arXiv Detail & Related papers (2021-11-24T11:50:37Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Can You be More Social? Injecting Politeness and Positivity into Task-Oriented Conversational Agents [60.27066549589362]
Social language used by human agents is associated with greater user responsiveness and task completion.
The model uses a sequence-to-sequence deep learning architecture, extended with a social language understanding element.
Evaluation in terms of content preservation and social language level using both human judgment and automatic linguistic measures shows that the model can generate responses that enable agents to address users' issues in a more socially appropriate way.
arXiv Detail & Related papers (2020-12-29T08:22:48Z)
- Using Sentiment Information for Preemptive Detection of Toxic Comments in Online Conversations [0.0]
Some authors have tried to predict if a conversation will derail into toxicity using the features of the first few messages.
We show how the sentiments expressed in the first messages of a conversation can help predict upcoming toxicity.
arXiv Detail & Related papers (2020-06-17T20:41:57Z)