A Comparison of Online Hate on Reddit and 4chan: A Case Study of the
2020 US Election
- URL: http://arxiv.org/abs/2202.01302v1
- Date: Wed, 2 Feb 2022 21:48:56 GMT
- Title: A Comparison of Online Hate on Reddit and 4chan: A Case Study of the
2020 US Election
- Authors: Fatima Zahrah and Jason R. C. Nurse and Michael Goldsmith
- Abstract summary: We make use of various Natural Language Processing (NLP) techniques to analyse hateful content from Reddit and 4chan relating to the 2020 US Presidential Elections.
Our findings show how content and posting activity can differ depending on the platform being used.
We provide an initial comparison of the platform-specific behaviours of online hate and of how different platforms can serve specific purposes.
- Score: 2.685668802278155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid integration of the Internet into our daily lives has led to many
benefits but also to a number of new, widespread threats such as online hate,
trolling, bullying, and generally aggressive behaviours. While research has
traditionally explored online hate on a single platform, the reality
is that such hate is a phenomenon that often makes use of multiple online
networks. In this article, we seek to advance the discussion into online hate
by harnessing a comparative approach, where we make use of various Natural
Language Processing (NLP) techniques to computationally analyse hateful content
from Reddit and 4chan relating to the 2020 US Presidential Elections. Our
findings show how content and posting activity can differ depending on the
platform being used. Through this, we provide an initial comparison of the
platform-specific behaviours of online hate and of how different platforms can
serve specific purposes. We further provide several avenues for future research
utilising a cross-platform approach so as to gain a more comprehensive
understanding of the global hate ecosystem.
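The comparative approach rests on measuring how the prevalence of hateful language differs between platforms. As a rough illustration (not the authors' actual pipeline, which uses more sophisticated NLP techniques), relative term frequencies can be compared across two invented toy corpora:

```python
import re
from collections import Counter

def term_frequencies(posts):
    """Tokenise posts naively and count lower-cased terms."""
    tokens = []
    for post in posts:
        tokens.extend(re.findall(r"[a-z']+", post.lower()))
    return Counter(tokens)

def relative_rate(freqs, term):
    """Occurrences of `term` per token in the corpus."""
    total = sum(freqs.values())
    return freqs[term] / total if total else 0.0

# Toy corpora standing in for scraped Reddit and 4chan posts.
reddit_posts = ["the election results are final", "count every vote"]
chan_posts = ["the election was rigged", "rigged rigged rigged"]

reddit_rate = relative_rate(term_frequencies(reddit_posts), "rigged")
chan_rate = relative_rate(term_frequencies(chan_posts), "rigged")
```

Per-token rates rather than raw counts are used so corpora of different sizes remain comparable.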
Related papers
- Analyzing Norm Violations in Live-Stream Chat [49.120561596550395]
We present the first NLP study dedicated to detecting norm violations in conversations on live-streaming platforms.
We define norm violation categories in live-stream chats and annotate 4,583 moderated comments from Twitch.
Our results show that appropriate contextual information can boost moderation performance by 35%.
arXiv Detail & Related papers (2023-05-18T05:58:27Z) - Hatemongers ride on echo chambers to escalate hate speech diffusion [23.714548893849393]
We analyze more than 32 million posts from over 6.8 million users across three popular online social networks.
We find that hatemongers play a more crucial role in governing the spread of information than singled-out pieces of hateful content.
arXiv Detail & Related papers (2023-02-05T20:30:48Z) - Countering Malicious Content Moderation Evasion in Online Social
Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content-moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z) - CRUSH: Contextually Regularized and User anchored Self-supervised Hate
speech Detection [6.759148939470331]
We introduce CRUSH, a framework for hate speech detection using user-anchored self-supervision and contextual regularization.
Our proposed approach achieves a 1-12% improvement in test-set metrics over the best-performing previous approaches on two types of tasks and multiple popular English social media datasets.
arXiv Detail & Related papers (2022-04-13T13:51:51Z) - Hate Speech Classification Using SVM and Naive BAYES [0.0]
Many countries have developed laws to curb online hate speech.
But as online content continues to grow, so does the spread of hate speech.
It is important to automatically process online user content to detect and remove hate speech.
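As a rough illustration of the Naive Bayes side of the approach named in the title, a minimal multinomial Naive Bayes classifier with Laplace smoothing can be sketched in pure Python. The training snippets below are invented toy data, not the paper's dataset:

```python
import math
from collections import Counter

class TinyNB:
    """Minimal multinomial Naive Bayes with Laplace (add-one) smoothing."""

    def fit(self, docs, labels):
        self.classes = sorted(set(labels))
        self.log_prior = {c: math.log(labels.count(c) / len(labels))
                          for c in self.classes}
        self.word_counts = {c: Counter() for c in self.classes}
        for doc, label in zip(docs, labels):
            self.word_counts[label].update(doc.lower().split())
        self.vocab_size = len({w for c in self.classes
                               for w in self.word_counts[c]})
        return self

    def predict(self, doc):
        # Pick the class maximising log P(c) + sum over words of log P(w|c).
        def log_score(c):
            denom = sum(self.word_counts[c].values()) + self.vocab_size
            return self.log_prior[c] + sum(
                math.log((self.word_counts[c][w] + 1) / denom)
                for w in doc.lower().split())
        return max(self.classes, key=log_score)

# Invented toy corpus; a real system trains on a large labelled dataset.
clf = TinyNB().fit(
    ["i hate those people", "they are vermin and scum",
     "lovely weather today", "great game last night"],
    ["hate", "hate", "ok", "ok"])
```

A production pipeline would add proper tokenisation, feature weighting (e.g. TF-IDF), and held-out evaluation on top of this skeleton.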
arXiv Detail & Related papers (2022-03-21T17:15:38Z) - Identification of Twitter Bots based on an Explainable ML Framework: the
US 2020 Elections Case Study [72.61531092316092]
This paper focuses on the design of a novel system for identifying Twitter bots based on labeled Twitter data.
A supervised machine learning (ML) framework is adopted, using an Extreme Gradient Boosting (XGBoost) algorithm.
Our study also deploys Shapley Additive Explanations (SHAP) for explaining the ML model predictions.
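SHAP is grounded in Shapley values from cooperative game theory. For intuition, the exact Shapley attribution can be computed by brute force for a tiny, hypothetical two-feature bot-scoring model; the real system applies SHAP to a trained XGBoost model over Twitter account features:

```python
import math
from itertools import combinations

def shapley_values(features, value_fn):
    """Exact Shapley values by enumerating all feature coalitions."""
    n = len(features)
    phi = {f: 0.0 for f in features}
    for f in features:
        others = [g for g in features if g != f]
        for r in range(n):
            for subset in combinations(others, r):
                # Standard Shapley coalition weight |S|!(n-|S|-1)!/n!
                weight = (math.factorial(r) * math.factorial(n - r - 1)
                          / math.factorial(n))
                phi[f] += weight * (value_fn(set(subset) | {f})
                                    - value_fn(set(subset)))
    return phi

# Hypothetical additive "bot score": machine-like posting frequency
# contributes 0.6, a default avatar contributes 0.3.
def bot_score(active):
    return 0.6 * ("high_freq" in active) + 0.3 * ("default_avatar" in active)

phi = shapley_values(["high_freq", "default_avatar"], bot_score)
```

Because the toy model is additive, each feature's Shapley value equals its standalone contribution; SHAP's appeal is that the same attribution principle extends to non-additive models such as gradient-boosted trees.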
arXiv Detail & Related papers (2021-12-08T14:12:24Z) - News consumption and social media regulations policy [70.31753171707005]
We analyze two social media that enforced opposite moderation methods, Twitter and Gab, to assess the interplay between news consumption and content regulation.
Our results show that the presence of moderation pursued by Twitter produces a significant reduction of questionable content.
The lack of clear regulation on Gab results in users engaging with both types of content, with a slight preference for questionable content, which may reflect a dissing/endorsement behaviour.
arXiv Detail & Related papers (2021-06-07T19:26:32Z) - Detecting Harmful Content On Online Platforms: What Platforms Need Vs.
Where Research Efforts Go [44.774035806004214]
Harmful content on online platforms comes in many different forms, including hate speech, offensive language, bullying and harassment, misinformation, spam, violence, graphic content, sexual abuse, self-harm, and many others.
Online platforms seek to moderate such content to limit societal harm, to comply with legislation, and to create a more inclusive environment for their users.
There is currently a dichotomy between what types of harmful content online platforms seek to curb, and what research efforts there are to automatically detect such content.
arXiv Detail & Related papers (2021-02-27T08:01:10Z) - Leveraging cross-platform data to improve automated hate speech
detection [0.0]
Most existing approaches for hate speech detection focus on a single social media platform in isolation.
Here we propose a new cross-platform approach to detect hate speech which leverages multiple datasets and classification models from different platforms.
We demonstrate how this approach outperforms existing models, and achieves good performance when tested on messages from novel social media platforms.
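One simple way to realise such a cross-platform combination is to average the scores of per-platform classifiers. The scorers below are hypothetical stand-ins for trained models, used only to illustrate the ensembling idea:

```python
def ensemble_hate_score(text, platform_models):
    """Average the hate scores from models trained on different platforms."""
    scores = [model(text) for model in platform_models]
    return sum(scores) / len(scores)

# Hypothetical per-platform scorers; real ones would be trained classifiers
# returning a probability that the text is hateful.
def reddit_model(text):
    return 0.9 if "vermin" in text else 0.1

def gab_model(text):
    return 0.8 if "vermin" in text or "scum" in text else 0.2

score = ensemble_hate_score("they are vermin", [reddit_model, gab_model])
```

The paper's point is that pooling signals from several platforms generalises better to unseen platforms than any single-platform model; score averaging is just the simplest combination rule one might try.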
arXiv Detail & Related papers (2021-02-09T15:52:34Z) - Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms implementing news feed algorithms like Facebook may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.