CRUSH: Contextually Regularized and User anchored Self-supervised Hate
speech Detection
- URL: http://arxiv.org/abs/2204.06389v2
- Date: Wed, 4 May 2022 12:53:15 GMT
- Title: CRUSH: Contextually Regularized and User anchored Self-supervised Hate
speech Detection
- Authors: Souvic Chakraborty, Parag Dutta, Sumegh Roychowdhury, Animesh
Mukherjee
- Abstract summary: We introduce CRUSH, a framework for hate speech detection using user-anchored self-supervision and contextual regularization.
Our proposed approach secures a 1-12% improvement in test set metrics over the best-performing previous approaches on two types of tasks and multiple popular English social media datasets.
- Score: 6.759148939470331
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The last decade has witnessed a surge in the interaction of people through
social networking platforms. While these social platforms have several positive
aspects, their proliferation has also made them a breeding ground for
cyber-bullying and hate speech. Recent advances in NLP have often been used
to mitigate the spread of such hateful content. Since the task of hate speech
detection is usually applicable in the context of social networks, we introduce
CRUSH, a framework for hate speech detection using user-anchored
self-supervision and contextual regularization. Our proposed approach secures
~1-12% improvement in test set metrics over the best-performing previous
approaches on two types of tasks and multiple popular English social media
datasets.
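The abstract does not spell out the training objective. A common way to realize user-anchored self-supervision is a contrastive objective that pulls each post's embedding toward an anchor built from its author's other posts; the sketch below is an illustrative assumption in that spirit (the anchor construction and the InfoNCE-style loss are not taken from the paper).

```python
# Illustrative sketch only: CRUSH's actual objective is not given in the
# abstract; the user-anchor construction and loss below are assumptions.
import torch
import torch.nn.functional as F

def user_anchored_loss(post_emb, user_ids, temperature=0.1):
    """Pull each post embedding toward its author's anchor (the mean of
    that author's post embeddings) and away from other users' anchors."""
    post_emb = F.normalize(post_emb, dim=-1)               # (N, d)
    users, inverse = torch.unique(user_ids, return_inverse=True)
    anchors = torch.zeros(len(users), post_emb.size(1),
                          device=post_emb.device)
    anchors.index_add_(0, inverse, post_emb)               # sum per user
    counts = torch.bincount(inverse, minlength=len(users))
    anchors = F.normalize(anchors / counts.unsqueeze(1), dim=-1)
    # InfoNCE-style: the author's anchor is the "positive class" in a
    # softmax over all user anchors.
    logits = post_emb @ anchors.T / temperature
    return F.cross_entropy(logits, inverse)

# Usage: loss = user_anchored_loss(encoder(batch_tokens), batch_user_ids)
```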
Related papers
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a
Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
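As background on the hyperbolic component: models of this kind typically operate in the Poincaré ball, whose geodesic distance is sketched below. This is a generic primitive, not CoSyn's actual formulation.

```python
# Generic Poincare-ball distance, a common primitive in hyperbolic
# networks; CoSyn's exact geometry and layers are not reproduced here.
import torch

def poincare_distance(u, v, eps=1e-5):
    """Geodesic distance between points inside the unit Poincare ball."""
    sq_dist = torch.sum((u - v) ** 2, dim=-1)
    norm_u = torch.clamp(torch.sum(u ** 2, dim=-1), max=1 - eps)
    norm_v = torch.clamp(torch.sum(v ** 2, dim=-1), max=1 - eps)
    return torch.acosh(1 + 2 * sq_dist / ((1 - norm_u) * (1 - norm_v)))
```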
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- BiasTestGPT: Using ChatGPT for Social Bias Testing of Language Models [73.29106813131818]
Bias testing is currently cumbersome, since test sentences are generated from a limited set of manual templates or require expensive crowd-sourcing.
We propose using ChatGPT for the controllable generation of test sentences, given any arbitrary user-specified combination of social groups and attributes.
We present an open-source comprehensive bias testing framework (BiasTestGPT), hosted on HuggingFace, that can be plugged into any open-source PLM for bias testing.
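The summary does not show the framework's interface; below is a hedged sketch of controllable test-sentence generation with a chat model. The prompt wording, helper names, and model choice are assumptions, not BiasTestGPT's actual API.

```python
# Hedged sketch: prompt wording, function names, and model choice are
# illustrative, not BiasTestGPT's actual interface.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def generate_test_sentences(groups, attributes, n=10):
    """Ask a chat model for sentences pairing social groups with attributes."""
    prompt = (
        f"Write {n} short, natural sentences. Each sentence must mention "
        f"one of the groups {groups} together with one of the attributes "
        f"{attributes}. Return one sentence per line."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return [s for s in resp.choices[0].message.content.splitlines() if s.strip()]

# The generated sentences can then be scored by any open-source PLM to
# probe for systematic differences across groups.
sentences = generate_test_sentences(["doctors", "nurses"], ["decisive", "caring"])
```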
arXiv Detail & Related papers (2023-02-14T22:07:57Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most commonly used techniques for evading platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
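As a concrete illustration of word camouflage, the snippet below simulates simple leetspeak keyword twisting; the substitution table is illustrative and not the paper's multilingual tooling.

```python
# Toy simulation of keyword twisting (leetspeak); the real tooling is
# multilingual and richer, this table is only illustrative.
import random

LEET = {"a": "4", "e": "3", "i": "1", "o": "0", "s": "$", "t": "7"}

def camouflage(word, p=0.5, seed=None):
    """Randomly substitute characters so exact-match keyword filters miss
    the word while humans can still read it."""
    rng = random.Random(seed)
    return "".join(
        LEET[c] if c in LEET and rng.random() < p else c for c in word
    )

print(camouflage("hate", seed=0))  # e.g. "h4t3" -- a detector must map
                                   # such variants back to "hate"
```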
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Improved two-stage hate speech classification for twitter based on Deep Neural Networks [0.0]
Hate speech is a form of online harassment that involves the use of abusive language.
The model we propose in this work is an extension of an existing approach based on LSTM neural network architectures.
Our study includes a performance comparison of several proposed alternative methods for the second stage evaluated on a public corpus of 16k tweets.
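The paper's exact architecture is not detailed in this summary; the sketch below shows the generic shape of a two-stage LSTM pipeline, where a first classifier flags abusive posts and a second refines the flagged ones. Class splits and dimensions are assumptions.

```python
# Generic two-stage shape (an assumption, not the paper's exact model):
# stage 1 separates abusive from normal posts, stage 2 refines flagged
# posts into finer labels.
import torch.nn as nn

class LSTMClassifier(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden=64, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.lstm = nn.LSTM(emb_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, token_ids):             # token_ids: (batch, seq_len)
        _, (h, _) = self.lstm(self.embed(token_ids))
        return self.head(h[-1])               # logits from last hidden state

stage1 = LSTMClassifier(vocab_size=30000)     # abusive vs. normal
stage2 = LSTMClassifier(vocab_size=30000)     # e.g. hateful vs. offensive
# At inference, stage2 runs only on posts that stage1 marks abusive.
```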
arXiv Detail & Related papers (2022-06-08T20:57:41Z)
- Deep Learning for Hate Speech Detection: A Comparative Study [54.42226495344908]
We present here a large-scale empirical comparison of deep and shallow hate-speech detection methods.
Our goal is to illuminate progress in the area and identify strengths and weaknesses in the current state of the art.
In doing so, we aim to provide guidance on the use of hate-speech detection in practice, quantify the state of the art, and identify future research directions.
arXiv Detail & Related papers (2022-02-19T03:48:20Z)
- Going Extreme: Comparative Analysis of Hate Speech in Parler and Gab [2.487445341407889]
We provide the first large-scale analysis of hate speech on Parler.
To improve classification accuracy, we annotated 10K Parler posts.
We find that hate mongers make up 16.1% of Parler's active users.
arXiv Detail & Related papers (2022-01-27T19:29:17Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply them to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
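One standard remedy for such imbalance (not necessarily the authors' exact recipe) is to weight the loss inversely to class frequency:

```python
# Class-frequency-based loss weighting, a standard imbalance remedy;
# the paper's own mitigation may differ.
import torch
import torch.nn as nn

labels = torch.tensor([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])  # mostly non-hate
counts = torch.bincount(labels).float()
weights = counts.sum() / (len(counts) * counts)         # rare class -> big weight
criterion = nn.CrossEntropyLoss(weight=weights)         # use during training
```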
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Leveraging Transformers for Hate Speech Detection in Conversational Code-Mixed Tweets [36.29939722039909]
This paper describes the system proposed by team MIDAS-IIITD for HASOC 2021 subtask 2.
It is one of the first shared tasks focusing on detecting hate speech from Hindi-English code-mixed conversations on Twitter.
Our best-performing system, a hard-voting ensemble of Indic-BERT, XLM-RoBERTa, and Multilingual BERT, achieved a macro F1 score of 0.7253.
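Hard voting itself is simple to reproduce; a minimal sketch follows (model loading is elided and per-model logits are assumed precomputed):

```python
# Minimal hard-voting sketch; obtaining the three models' logits
# (Indic-BERT, XLM-RoBERTa, Multilingual BERT) is elided.
import numpy as np

def hard_vote(per_model_logits):
    """per_model_logits: list of (N, C) arrays, one per model.
    Returns the majority argmax label per example."""
    votes = np.stack([lg.argmax(axis=1) for lg in per_model_logits])  # (M, N)
    n_classes = per_model_logits[0].shape[1]
    tallies = np.stack([(votes == c).sum(axis=0) for c in range(n_classes)])
    return tallies.argmax(axis=0)                                     # (N,)

# labels = hard_vote([indic_bert_logits, xlmr_logits, mbert_logits])
```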
arXiv Detail & Related papers (2021-12-18T19:27:33Z)
- DeepHate: Hate Speech Detection via Multi-Faceted Text Representations [8.192671048046687]
DeepHate is a novel deep learning model that combines multi-faceted text representations such as word embeddings, sentiments, and topical information.
We conduct extensive experiments and evaluate DeepHate on three large publicly available real-world datasets.
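The summary names the facets but not the fusion; a plausible minimal reading is late fusion by concatenation, sketched below with stand-in feature inputs (DeepHate's actual components are not reproduced).

```python
# Late fusion by concatenation -- one plausible reading of "multi-faceted",
# not DeepHate's published architecture.
import torch
import torch.nn as nn

class MultiFacetedClassifier(nn.Module):
    def __init__(self, text_dim, sentiment_dim, topic_dim, n_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(text_dim + sentiment_dim + topic_dim, 128),
            nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, text_vec, sentiment_vec, topic_vec):
        # Concatenate word-embedding, sentiment, and topic features.
        fused = torch.cat([text_vec, sentiment_vec, topic_vec], dim=-1)
        return self.head(fused)
```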
arXiv Detail & Related papers (2021-03-14T16:11:30Z)
- Detecting Online Hate Speech: Approaches Using Weak Supervision and Network Embedding Models [2.3322477552758234]
We (i) propose a weakly supervised deep learning model that quantitatively uncovers hateful users and (ii) present a novel qualitative analysis to uncover indirect hateful conversations.
We evaluate our model on 19.2M posts and show that our weak supervision model outperforms the baseline models in identifying indirect hateful interactions.
We also analyze a multilayer network, constructed from two types of user interactions in Gab (quote and reply) with interaction scores from the weak supervision model as edge weights, to predict hateful users.
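The graph construction can be sketched directly; field names and scores below are illustrative, not the paper's data.

```python
# Illustrative multilayer interaction graph (quote + reply layers) with
# weak-supervision scores as edge weights; data values are made up.
import networkx as nx

G = nx.MultiDiGraph()
interactions = [
    # (source, target, layer, weak-supervision hate score)
    ("user_a", "user_b", "reply", 0.8),
    ("user_a", "user_b", "quote", 0.3),
    ("user_c", "user_a", "reply", 0.1),
]
for src, dst, layer, score in interactions:
    G.add_edge(src, dst, key=layer, weight=score)

# Simple node features (e.g. score-weighted out-degree) can then feed a
# hateful-user predictor.
weighted_out_degree = dict(G.out_degree(weight="weight"))
```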
arXiv Detail & Related papers (2020-07-24T18:13:52Z)
- Echo Chambers on Social Media: A comparative analysis [64.2256216637683]
We introduce an operational definition of echo chambers and perform a massive comparative analysis on 1B pieces of content produced by 1M users on four social media platforms.
We infer the leaning of users about controversial topics and reconstruct their interaction networks by analyzing different features.
We find support for the hypothesis that platforms that implement news feed algorithms, such as Facebook, may elicit the emergence of echo chambers.
arXiv Detail & Related papers (2020-04-20T20:00:27Z)