Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter
- URL: http://arxiv.org/abs/2407.12471v1
- Date: Wed, 17 Jul 2024 10:49:47 GMT
- Title: Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter
- Authors: Wentao Xu
- Abstract summary: This study aims to provide a first exploration of the potential language toxicity flow among Left, Right and Center users.
More than 500M Twitter posts were examined.
It was discovered that Left users received much more toxic replies than Right and Center users.
- Score: 3.0367864044156088
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Understanding the dynamics of language toxicity on social media is important for investigating the propagation of misinformation and the development of echo chambers in political scenarios such as U.S. presidential elections. Recent research has used large-scale data to investigate these dynamics across social media platforms, but research on toxicity dynamics specifically remains insufficient. This study provides a first exploration of the potential language toxicity flow among Left, Right and Center users. Specifically, we examine whether Left users were more likely to be attacked with language toxicity. More than 500M Twitter posts were examined, and it was discovered that Left users received far more toxic replies than Right and Center users.
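The group-level comparison described in the abstract can be sketched as follows. The data, group labels, and function name here are purely illustrative; in the actual study, toxicity scores would come from a toxicity classifier applied to the reply texts of 500M+ posts.

```python
from statistics import mean

# Hypothetical toy data: toxicity scores (0-1) of replies received by users
# in each ideological group. Real scores would come from a toxicity
# classifier run over each reply's text.
replies_by_group = {
    "Left":   [0.82, 0.74, 0.65, 0.71],
    "Right":  [0.41, 0.38, 0.52, 0.33],
    "Center": [0.29, 0.35, 0.31, 0.40],
}

def mean_reply_toxicity(scores_by_group):
    """Average reply toxicity per group -- the quantity compared across
    Left, Right, and Center users in the study."""
    return {group: mean(scores) for group, scores in scores_by_group.items()}

means = mean_reply_toxicity(replies_by_group)
most_attacked = max(means, key=means.get)
print(most_attacked)  # with this toy data: Left
```

With real data, the per-group means would be accompanied by a significance test before concluding that one group receives more toxic replies than the others.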
Related papers
- Comprehensive Assessment of Toxicity in ChatGPT [49.71090497696024]
We evaluate the toxicity in ChatGPT by utilizing instruction-tuning datasets.
Prompts in creative writing tasks can be 2x more likely to elicit toxic responses.
Certain deliberately toxic prompts, designed in earlier studies, no longer yield harmful responses.
arXiv Detail & Related papers (2023-11-03T14:37:53Z)
- Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [4.357949911556638]
We explore the role that political ideology plays in contributing to toxicity both on an individual user level and a topic level on Twitter.
After collecting 187 million tweets from 55,415 Twitter users, we determine how several account-level characteristics, including political ideology and account age, predict how often each user posts toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z)
- Sub-Standards and Mal-Practices: Misinformation's Role in Insular, Polarized, and Toxic Interactions [4.357949911556638]
We examine the role of misinformation in sparking political incivility and toxicity on Reddit.
We find that Reddit comments in response to articles on websites known to spread misinformation are 71.4% more likely to be toxic than comments responding to authentic news articles.
arXiv Detail & Related papers (2023-01-27T01:32:22Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users tending to engage with both types of content, with a slight preference for questionable content that may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis [4.251937086394346]
State-of-the-art BERT models were able to achieve 76% macro-F1 score using monolingual data in the binary case.
We show that large-scale monolingual data is still needed to create more accurate models.
arXiv Detail & Related papers (2020-10-09T13:05:19Z)
- RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models [93.151822563361]
Pretrained neural language models (LMs) are prone to generating racist, sexist, or otherwise toxic language which hinders their safe deployment.
We investigate the extent to which pretrained LMs can be prompted to generate toxic language, and the effectiveness of controllable text generation algorithms at preventing such toxic degeneration.
arXiv Detail & Related papers (2020-09-24T03:17:19Z)
- Breaking the Communities: Characterizing community changing users using text mining and graph machine learning on Twitter [0.0]
We study users who break their community on Twitter using natural language processing techniques and graph machine learning algorithms.
We collected 9 million Twitter messages from 1.5 million users and constructed the retweet networks.
We present a machine learning framework for social media user classification that detects "community breakers".
arXiv Detail & Related papers (2020-08-24T23:44:51Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of contents produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.