Polarized Patterns of Language Toxicity and Sentiment of Debunking Posts on Social Media
- URL: http://arxiv.org/abs/2501.06274v2
- Date: Fri, 31 Jan 2025 16:41:17 GMT
- Title: Polarized Patterns of Language Toxicity and Sentiment of Debunking Posts on Social Media
- Authors: Wentao Xu, Wenlu Fan, Shiqian Lu, Tenghao Li, Bin Wang
- Abstract summary: The rise of misinformation and fake news in online political discourse poses significant challenges to democratic processes and public engagement.
We examined over 86 million debunking tweets and more than 4 million Reddit debunking comments to investigate the relationship between language toxicity, pessimism, and social polarization in debunking efforts.
We show that platform architecture affects informational complexity of user interactions, with Twitter promoting concentrated, uniform discourse and Reddit encouraging diverse, complex communication.
- Score: 5.301808480190602
- License:
- Abstract: The rise of misinformation and fake news in online political discourse poses significant challenges to democratic processes and public engagement. While debunking efforts aim to counteract misinformation and foster fact-based dialogue, these discussions often involve language toxicity and emotional polarization. We examined over 86 million debunking tweets and more than 4 million Reddit debunking comments to investigate the relationship between language toxicity, pessimism, and social polarization in debunking efforts. Focusing on discussions of the 2016 and 2020 U.S. presidential elections and the QAnon conspiracy theory, our analysis reveals three key findings: (1) peripheral participants (1-degree users) play a disproportionate role in shaping toxic discourse, driven by lower community accountability and emotional expression; (2) platform mechanisms significantly influence polarization, with Twitter amplifying partisan differences and Reddit fostering higher overall toxicity due to its structured, community-driven interactions; and (3) a negative correlation exists between language toxicity and pessimism, with increased interaction reducing toxicity, especially on Reddit. We show that platform architecture affects informational complexity of user interactions, with Twitter promoting concentrated, uniform discourse and Reddit encouraging diverse, complex communication. Our findings highlight the importance of user engagement patterns, platform dynamics, and emotional expressions in shaping polarization in debunking discourse. This study offers insights for policymakers and platform designers to mitigate harmful effects and promote healthier online discussions, with implications for understanding misinformation, hate speech, and political polarization in digital environments.
Related papers
- Toxic behavior silences online political conversations [0.0]
We investigate the hypothesis that individuals may refrain from expressing minority opinions publicly due to being exposed to toxic behavior.
Using hidden Markov models, we identify a latent state consistent with toxicity-driven silence.
Our findings offer insights into the intricacies of online political deliberation and emphasize the importance of considering self-censorship dynamics.
arXiv Detail & Related papers (2024-12-07T20:39:20Z)
- Characterization of Political Polarized Users Attacked by Language Toxicity on Twitter [3.0367864044156088]
This study aims to provide a first exploration of the potential language toxicity flow among Left, Right and Center users.
More than 500M Twitter posts were examined.
Left users were found to receive far more toxic replies than Right and Center users.
arXiv Detail & Related papers (2024-07-17T10:49:47Z)
- Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech, with absolute improvements ranging from 1.24% to 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- AI Chat Assistants can Improve Conversations about Divisive Topics [3.8583005413310625]
We present results of a large-scale experiment that demonstrates how online conversations can be improved with artificial intelligence tools.
We employ a large language model to make real-time, evidence-based recommendations intended to improve participants' perception of feeling understood in conversations.
We find that these interventions improve the reported quality of the conversation, reduce political divisiveness, and improve the tone, without systematically changing the content of the conversation or moving people's policy attitudes.
arXiv Detail & Related papers (2023-02-14T06:42:09Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Adherence to Misinformation on Social Media Through Socio-Cognitive and Group-Based Processes [79.79659145328856]
We argue that when misinformation proliferates, this happens because the social media environment enables adherence to misinformation.
We make the case that polarization and misinformation adherence are closely tied.
arXiv Detail & Related papers (2022-06-30T12:34:24Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content that may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Analysing Social Media Network Data with R: Semi-Automated Screening of Users, Comments and Communication Patterns [0.0]
Communication on social media platforms is increasingly widespread across societies.
Fake news, hate speech and radicalizing elements are part of this modern form of communication.
A basic understanding of these mechanisms and communication patterns could help to counteract negative forms of communication.
arXiv Detail & Related papers (2020-11-26T14:52:01Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo-chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.