Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically
Polarized Posts on Twitter
- URL: http://arxiv.org/abs/2307.10349v1
- Date: Wed, 19 Jul 2023 17:24:47 GMT
- Authors: Hans W. A. Hanley, Zakir Durumeric
- Abstract summary: We explore the role that political ideology plays in contributing to toxicity both on an individual user level and a topic level on Twitter.
After collecting 187 million tweets from 55,415 Twitter users, we determine how several account-level characteristics, including political ideology and account age, predict how often each user posts toxic content.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Social media platforms are often blamed for exacerbating political
polarization and worsening public dialogue. Many claim hyperpartisan users post
pernicious content, slanted to their political views, inciting contentious and
toxic conversations. However, what factors actually contribute to increased
online toxicity and negative interactions? In this work, we explore the role
that political ideology plays in contributing to toxicity both on an individual
user level and a topic level on Twitter. To do this, we train and open-source a
DeBERTa-based toxicity detector with a contrastive objective that outperforms
the Google Jigsaw Perspective Toxicity detector on the Civil Comments test
dataset. Then, after collecting 187 million tweets from 55,415 Twitter users,
we determine how several account-level characteristics, including political
ideology and account age, predict how often each user posts toxic content.
Running a linear regression, we find that the diversity of views and the
toxicity of the other accounts with which a user engages have a more marked
effect on that user's own toxicity. Namely, toxic comments are correlated with users
who engage with a wider array of political views. Performing topic analysis on
the toxic content posted by these accounts using the large language model MPNet
and a version of the DP-Means clustering algorithm, we find similar behavior
across 6,592 individual topics, with conversations on each topic becoming more
toxic as a wider diversity of users become involved.
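The topic analysis described above relies on a version of the DP-Means clustering algorithm, which, unlike k-means, does not fix the number of clusters in advance: a point too far from every existing centroid spawns a new cluster. The sketch below is a minimal plain-Python implementation of the standard DP-Means procedure, not the authors' actual code; the function name `dp_means` and the penalty parameter `lam` are illustrative assumptions, and the paper applies its variant to MPNet sentence embeddings rather than the toy 2-D points shown here.

```python
def dp_means(points, lam, max_iter=100):
    """Minimal DP-Means sketch: behaves like k-means, except that any
    point whose squared distance to every current centroid exceeds the
    penalty `lam` starts a new cluster, so k grows with the data.
    (Illustrative only: omits empty-cluster handling and other
    refinements a production variant would need.)"""

    def sq_dist(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))

    def mean(vectors):
        n = len(vectors)
        return tuple(sum(v[i] for v in vectors) / n
                     for i in range(len(vectors[0])))

    centroids = [tuple(points[0])]
    assignments = [0] * len(points)
    for _ in range(max_iter):
        # Assignment step: nearest centroid, or a brand-new one if too far.
        for i, p in enumerate(points):
            d2 = [sq_dist(p, c) for c in centroids]
            j = min(range(len(centroids)), key=lambda k: d2[k])
            if d2[j] > lam:
                centroids.append(tuple(p))
                j = len(centroids) - 1
            assignments[i] = j
        # Update step: recompute each centroid as the mean of its members.
        new_centroids = [mean([p for p, a in zip(points, assignments) if a == j])
                         for j in range(len(centroids))]
        if new_centroids == centroids:  # converged
            break
        centroids = new_centroids
    return assignments, centroids
```

On two well-separated groups of points with `lam` set between the within-group and between-group squared distances, the algorithm discovers exactly two clusters without k ever being specified.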
Related papers
- Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter (2024-07-17)
  This study provides a first exploration of the potential language-toxicity flow among Left, Right, and Center users. More than 500M Twitter posts were examined. It was found that Left users received far more toxic replies than Right and Center users.
- Tracking Patterns in Toxicity and Antisocial Behavior Over User Lifetimes on Large Social Media Platforms (2024-07-12)
  We analyze toxicity over a 14-year span in nearly 500 million comments from Reddit and Wikipedia. We find that the most toxic behavior on Reddit is exhibited, in aggregate, by the most active users, while on Wikipedia it is exhibited, in aggregate, by the least active users.
- Analyzing Toxicity in Deep Conversations: A Reddit Case Study (2024-04-11)
  This work employs a tree-based approach to understand how users behave with respect to toxicity in public conversation settings. We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses. We find that toxic comments increase the likelihood of subsequent toxic comments in online conversations.
- Analyzing Norm Violations in Live-Stream Chat (2023-05-18)
  We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms. We define norm-violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch. Our results show that appropriate contextual information can boost moderation performance by 35%.
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit (2022-11-25)
  We study the activity of 5.97M Reddit users and 421M comments posted over 13 years. We create a typology of relationships between political communities based on whether their users are toxic to each other.
- User Engagement and the Toxicity of Tweets (2022-11-07)
  We analyze a random sample of more than 85,300 Twitter conversations to examine differences between toxic and non-toxic conversations. We find that toxic conversations, those with at least one toxic tweet, are longer but have fewer individual users contributing to the dialogue than non-toxic conversations. We also find a relationship between the toxicity of the first reply to a toxic tweet and the toxicity of the conversation.
- Beyond Plain Toxic: Detection of Inappropriate Statements on Flammable Topics for the Russian Language (2022-03-04)
  We present two text collections labelled according to a binary notion of inappropriateness and a multinomial notion of sensitive topic. To objectivize the notion of inappropriateness, we define it in a data-driven way through crowdsourcing.
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection (2021-11-15)
  We investigate the effect of annotator identities (who) and beliefs (why) on toxic-language annotations. We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity. Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
- News consumption and social media regulations policy (2021-06-07)
  We analyze two social media platforms that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation. Our results show that the moderation pursued by Twitter produces a significant reduction of questionable content. The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for the questionable ones, which may reflect a dissing/endorsement behavior.
- Designing Toxic Content Classification for a Diversity of Perspectives (2021-06-04)
  We survey 17,280 participants to understand how user expectations for what constitutes toxic content differ across demographics, beliefs, and personal experiences. We find that groups historically at risk of harassment are more likely to flag a random comment drawn from Reddit, Twitter, or 4chan as toxic. We show how current one-size-fits-all toxicity classifiers, like the Perspective API from Jigsaw, can improve in accuracy by 86% on average through personalized model tuning.
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis (2020-05-25)
  COVID-19 has sparked racism and hate on social media targeted towards Asian communities. We study the evolution and spread of anti-Asian hate speech through the lens of Twitter. We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech, spanning 14 months.
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.