Sub-Standards and Mal-Practices: Misinformation's Role in Insular, Polarized, and Toxic Interactions on Reddit
- URL: http://arxiv.org/abs/2301.11486v3
- Date: Wed, 30 Oct 2024 18:20:59 GMT
- Title: Sub-Standards and Mal-Practices: Misinformation's Role in Insular, Polarized, and Toxic Interactions on Reddit
- Authors: Hans W. A. Hanley, Zakir Durumeric
- Abstract summary: We show that comments on articles from unreliable news websites are posted more often in right-leaning subreddits.
As the toxicity of subreddits increases, users are more likely to comment on posts from known unreliable websites.
- Abstract: In this work, we examine the influence of unreliable information on political incivility and toxicity on the social media platform Reddit. We show that comments on articles from unreliable news websites are posted more often in right-leaning subreddits and that, within individual subreddits, comments on these articles are, on average, 32% more likely to be toxic than comments on reliable news articles. Using a regression model, we show that these results hold after accounting for partisanship and baseline toxicity rates within individual subreddits. Utilizing a zero-inflated negative binomial regression, we further show that as the toxicity of subreddits increases, users are more likely to comment on posts from known unreliable websites. Finally, modeling user interactions with an exponential random graph model, we show that when reacting to a Reddit submission that links to a website known for spreading unreliable information, users are more likely to be toxic toward users of different political beliefs. Our results collectively illustrate that low-quality/unreliable information not only predicts increased toxicity but also predicts polarizing interactions between users of different political orientations.
Related papers
- Taming Toxicity or Fueling It? The Great Ban's Role in Shifting Toxic User Behavior and Engagement
We evaluate the effectiveness of The Great Ban, one of the largest deplatforming interventions carried out by Reddit.
We analyzed 53M comments shared by nearly 34K users.
We found that 15.6% of the moderated users abandoned the platform while the remaining ones decreased their overall toxicity by 4.1%.
arXiv Detail & Related papers (2024-11-06T16:34:59Z) - Tracking Patterns in Toxicity and Antisocial Behavior Over User Lifetimes on Large Social Media Platforms
We analyze toxicity over a 14-year time span on nearly 500 million comments from Reddit and Wikipedia.
We find that the most toxic behavior on Reddit is exhibited in aggregate by the most active users, while the most toxic behavior on Wikipedia is exhibited in aggregate by the least active users.
arXiv Detail & Related papers (2024-07-12T15:45:02Z) - Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter
We explore the role that partisanship and affective polarization play in contributing to toxicity on an individual level and a topic level on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z) - Unveiling the Hidden Agenda: Biases in News Reporting and Consumption
We build a six-year dataset on the Italian vaccine debate and adopt a Bayesian latent space model to identify narrative and selection biases.
We found a nonlinear relationship between biases and engagement, with higher engagement for extreme positions.
Analysis of news consumption on Twitter reveals common audiences among news outlets with similar ideological positions.
arXiv Detail & Related papers (2023-01-14T18:58:42Z) - Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z) - News consumption and social media regulations policy
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable content, which may reflect dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Causal Understanding of Fake News Dissemination on Social Media
We argue that it is critical to understand what user attributes potentially cause users to share fake news.
In fake news dissemination, confounders can be characterized by fake news sharing behavior that inherently relates to user attributes and online activities.
We propose a principled approach to alleviating selection bias in fake news dissemination.
arXiv Detail & Related papers (2020-10-20T19:37:04Z) - Do Platform Migrations Compromise Content Moderation? Evidence from r/The_Donald and r/Incels
We report the results of a large-scale observational study of how problematic online communities progress following community-level moderation measures.
Our results suggest that, in both cases, moderation measures significantly decreased posting activity on the new platform.
In spite of that, users in one of the studied communities showed increases in signals associated with toxicity and radicalization.
arXiv Detail & Related papers (2020-10-20T16:03:06Z) - Right and left, partisanship predicts (asymmetric) vulnerability to misinformation
We analyze the relationship between partisanship, echo chambers, and vulnerability to online misinformation by studying news sharing behavior on Twitter.
We find that vulnerability to misinformation is most strongly influenced by partisanship for both left- and right-leaning users.
arXiv Detail & Related papers (2020-10-04T01:36:14Z) - Information Consumption and Social Response in a Segregated Environment: the Case of Gab
This work provides a characterization of the interaction patterns within Gab around the COVID-19 topic.
We find that there are no strong statistical differences in the social response to questionable and reliable content.
Our results provide insights toward understanding coordinated inauthentic behavior and the early warning of information operations.
arXiv Detail & Related papers (2020-06-03T11:34:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.