Online Hate: Behavioural Dynamics and Relationship with Misinformation
- URL: http://arxiv.org/abs/2105.14005v1
- Date: Fri, 28 May 2021 17:30:51 GMT
- Title: Online Hate: Behavioural Dynamics and Relationship with Misinformation
- Authors: Matteo Cinelli, Andraž Pelicon, Igor Mozetič, Walter
Quattrociocchi, Petra Kralj Novak, Fabiana Zollo
- Abstract summary: We perform hate speech detection on a corpus of more than one million comments on YouTube videos.
Our results show that, in line with Godwin's law, online debates tend to degenerate towards increasingly toxic exchanges of views.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Online debates are often characterised by extreme polarisation and heated
discussions among users. The presence of hate speech online is becoming
increasingly problematic, making necessary the development of appropriate
countermeasures. In this work, we perform hate speech detection on a corpus of
more than one million comments on YouTube videos through a machine learning
model fine-tuned on a large set of hand-annotated data. Our analysis shows
no evidence of "serial haters", i.e. active users who post exclusively
hateful comments. Moreover, consistent with the echo chamber hypothesis, we
find that users skewed towards one of the two categories of video channels
(questionable, reliable) are more prone to use inappropriate, violent, or
hateful language within their opponents' community. Interestingly, users
loyal to reliable sources on average use more toxic language than their
counterparts. Finally, we find that the overall toxicity of the discussion
increases with its length, measured both in terms of number of comments and
time. Our results show that, in line with Godwin's law, online debates tend
to degenerate towards increasingly toxic exchanges of views.
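The final finding, that overall toxicity grows with discussion length, lends itself to a simple check. Below is a minimal, hypothetical sketch (not the authors' code): it assumes per-comment toxicity scores in [0, 1] have already been produced by some classifier, and computes the Spearman rank correlation between thread length and mean toxicity. All thread data here are illustrative.

```python
# Illustrative sketch: given per-comment toxicity scores from some classifier,
# check whether mean toxicity rises with discussion length, as the paper
# reports for YouTube threads. Pure stdlib; the data are made up.
from statistics import mean

def rank(values):
    """Return 1-based average ranks of values, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rank correlation, computed as Pearson correlation on ranks."""
    rx, ry = rank(x), rank(y)
    mx, my = mean(rx), mean(ry)
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx) ** 0.5
    vy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (vx * vy)

# Hypothetical threads: each is a list of per-comment toxicity scores in [0, 1]
# (in the paper these would come from a fine-tuned hate speech classifier).
threads = [
    [0.1, 0.2],
    [0.1, 0.3, 0.2],
    [0.2, 0.4, 0.3, 0.5],
    [0.3, 0.5, 0.6, 0.4, 0.7],
    [0.4, 0.6, 0.7, 0.8, 0.6, 0.9],
]

lengths = [len(t) for t in threads]
mean_tox = [mean(t) for t in threads]
rho = spearman(lengths, mean_tox)
print(f"Spearman rho(length, mean toxicity) = {rho:.2f}")
```

On real data one would also bin threads by length and inspect the trend directly, since a single correlation coefficient can hide non-monotonic effects.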
Related papers
- Analyzing Toxicity in Deep Conversations: A Reddit Case Study [0.0]
This work employs a tree-based approach to understand how users behave with respect to toxicity in public conversations.
We collect both the posts and the comment sections of the top 100 posts from 8 Reddit communities that allow profanity, totaling over 1 million responses.
We find that toxic comments increase the likelihood of subsequent toxic comments being produced in online conversations.
arXiv Detail & Related papers (2024-04-11T16:10:44Z)
- Twits, Toxic Tweets, and Tribal Tendencies: Trends in Politically Polarized Posts on Twitter [5.161088104035108]
We explore the role that partisanship and affective polarization play in contributing to toxicity at the individual and topic levels on Twitter/X.
After collecting 89.6 million tweets from 43,151 Twitter/X users, we determine how several account-level characteristics, including partisanship, predict how often users post toxic content.
arXiv Detail & Related papers (2023-07-19T17:24:47Z)
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines at detecting implicit hate speech, with absolute improvements ranging from 1.24% to 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Collective moderation of hate, toxicity, and extremity in online discussions [1.114199733551736]
We analyze a large corpus of more than 130,000 discussions on Twitter over four years.
We identify different dimensions of discourse that might be related to the probability of hate speech in subsequent tweets.
We find that expressing simple opinions, not necessarily supported by facts, relates to the least hate in subsequent discussions.
arXiv Detail & Related papers (2023-03-01T09:35:26Z)
- Non-Polar Opposites: Analyzing the Relationship Between Echo Chambers and Hostile Intergroup Interactions on Reddit [66.09950457847242]
We study the activity of 5.97M Reddit users and 421M comments posted over 13 years.
We create a typology of relationships between political communities based on whether their users are toxic to each other.
arXiv Detail & Related papers (2022-11-25T22:17:07Z)
- Quantifying How Hateful Communities Radicalize Online Users [2.378428291297535]
We measure the impact of joining fringe hateful communities in terms of hate speech propagated to the rest of the social network.
We use data from Reddit to assess the effect of joining one type of echo chamber: a digital community of like-minded users exhibiting hateful behavior.
We show that the harmful speech does not remain contained within the community.
arXiv Detail & Related papers (2022-09-19T01:13:29Z)
- Annotators with Attitudes: How Annotator Beliefs And Identities Bias Toxic Language Detection [75.54119209776894]
We investigate the effect of annotator identities (who) and beliefs (why) on toxic language annotations.
We consider posts with three characteristics: anti-Black language, African American English dialect, and vulgarity.
Our results show strong associations between annotator identity and beliefs and their ratings of toxicity.
arXiv Detail & Related papers (2021-11-15T18:58:20Z)
- News consumption and social media regulations policy [70.31753171707005]
We analyze two social media platforms that enforce opposite moderation policies, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the moderation pursued by Twitter produces a significant reduction in questionable content.
The lack of clear regulation on Gab leads users to engage with both types of content, with a slight preference for questionable sources, which may reflect a dissing/endorsement behavior.
arXiv Detail & Related papers (2021-06-07T19:26:32Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.