Hate Speech and Offensive Language Detection using an Emotion-aware
Shared Encoder
- URL: http://arxiv.org/abs/2302.08777v1
- Date: Fri, 17 Feb 2023 09:31:06 GMT
- Title: Hate Speech and Offensive Language Detection using an Emotion-aware
Shared Encoder
- Authors: Khouloud Mnassri, Praboda Rajapaksha, Reza Farahbakhsh, Noel Crespi
- Abstract summary: Existing works on hate speech and offensive language detection produce promising results based on pre-trained transformer models.
This paper proposes a multi-task joint learning approach that combines external emotional features extracted from other corpora.
Our findings demonstrate that emotional knowledge helps to more reliably identify hate speech and offensive language across datasets.
- Score: 1.8734449181723825
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The emergence of social media platforms has fundamentally altered how
people communicate, and one consequence of these developments is an increase
in abusive content online. Automatically detecting this content is therefore
essential for removing inappropriate information and reducing toxicity and
violence on social media platforms. Existing works on hate speech and
offensive language detection produce promising results based on pre-trained
transformer models; however, they consider only the abusive content features
generated through annotated datasets. This paper proposes a multi-task joint
learning approach that combines external emotional features extracted from
other corpora to deal with the imbalance and scarcity of labeled datasets. Our
analysis uses two well-known Transformer-based models, BERT and mBERT, where
the latter is used to address abusive content detection in multilingual
scenarios. Our model jointly learns abusive content detection with emotional
features by sharing representations through the transformers' shared encoder.
This approach increases data efficiency, reduces overfitting via shared
representations, and ensures fast learning by leveraging auxiliary
information. Our findings demonstrate that emotional knowledge helps to more
reliably identify hate speech and offensive language across datasets. Our
multi-task hate speech detection model exhibited a 3% performance improvement
over baseline models, but the improvement of multi-task models was not
significant for the offensive language detection task. More interestingly, in
both tasks, multi-task models exhibit fewer false-positive errors than the
single-task scenario.
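The shared-encoder multi-task setup described in the abstract can be sketched as follows. This is a minimal NumPy illustration, not the authors' implementation: the single dense layer stands in for the BERT/mBERT encoder, the two task heads, the class counts, and the loss weight `alpha` are all hypothetical, and real inputs would be token sequences rather than fixed-size vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions, standing in for a transformer encoder's output.
d_in, d_hidden = 16, 8
n_hate_classes, n_emotion_classes = 2, 6  # main task vs. auxiliary emotion task

# Shared encoder parameters (in the paper this role is played by BERT/mBERT).
W_shared = rng.normal(scale=0.1, size=(d_in, d_hidden))

# One lightweight head per task; only the encoder is shared between tasks.
W_hate = rng.normal(scale=0.1, size=(d_hidden, n_hate_classes))
W_emo = rng.normal(scale=0.1, size=(d_hidden, n_emotion_classes))

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def forward(x):
    """A single shared representation feeds both task heads."""
    h = np.tanh(x @ W_shared)                    # shared encoder representation
    return softmax(h @ W_hate), softmax(h @ W_emo)

def joint_loss(x, y_hate, y_emo, alpha=0.5):
    """Weighted sum of the two cross-entropy losses (alpha is a hypothetical weight)."""
    p_hate, p_emo = forward(x)
    ce = lambda p, y: -np.log(p[np.arange(len(y)), y] + 1e-12).mean()
    return alpha * ce(p_hate, y_hate) + (1 - alpha) * ce(p_emo, y_emo)

# Toy batch: 4 "sentence embeddings", each labeled for both tasks.
x = rng.normal(size=(4, d_in))
loss = joint_loss(x, y_hate=np.array([0, 1, 0, 1]), y_emo=np.array([2, 0, 5, 3]))
```

Because gradients of the joint loss flow through `W_shared` from both heads, the auxiliary emotion labels regularize the shared representation, which is the mechanism the abstract credits for the reduced overfitting and fewer false positives.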
Related papers
- Improving the Robustness of Summarization Systems with Dual Augmentation [68.53139002203118]
A robust summarization system should be able to capture the gist of the document, regardless of the specific word choices or noise in the input.
We first explore the summarization models' robustness against perturbations including word-level synonym substitution and noise.
We propose a SummAttacker, which is an efficient approach to generating adversarial samples based on language models.
arXiv Detail & Related papers (2023-06-01T19:04:17Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Beyond Contrastive Learning: A Variational Generative Model for Multilingual Retrieval [109.62363167257664]
We propose a generative model for learning multilingual text embeddings.
Our model operates on parallel data in $N$ languages.
We evaluate this method on a suite of tasks including semantic similarity, bitext mining, and cross-lingual question retrieval.
arXiv Detail & Related papers (2022-12-21T02:41:40Z)
- A New Generation of Perspective API: Efficient Multilingual Character-level Transformers [66.9176610388952]
We present the fundamentals behind the next version of the Perspective API from Google Jigsaw.
At the heart of the approach is a single multilingual token-free Charformer model.
We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings.
arXiv Detail & Related papers (2022-02-22T20:55:31Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply it to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Offensive Language and Hate Speech Detection with Deep Learning and Transfer Learning [1.77356577919977]
We propose an approach to automatically classify tweets into three classes: Hate, Offensive, and Neither.
We create a class module which contains main functionality including text classification, sentiment checking and text data augmentation.
arXiv Detail & Related papers (2021-08-06T20:59:47Z)
- VidLanKD: Improving Language Understanding via Video-Distilled Knowledge Transfer [76.3906723777229]
We present VidLanKD, a video-language knowledge distillation method for improving language understanding.
We train a multi-modal teacher model on a video-text dataset, and then transfer its knowledge to a student language model with a text dataset.
In our experiments, VidLanKD achieves consistent improvements over text-only language models and vokenization models.
arXiv Detail & Related papers (2021-07-06T15:41:32Z)
- AngryBERT: Joint Learning Target and Emotion for Hate Speech Detection [5.649040805759824]
This paper proposes a novel multitask learning-based model, AngryBERT, which jointly learns hate speech detection with sentiment classification and target identification as secondary relevant tasks.
Experiment results show that AngryBERT outperforms state-of-the-art single-task-learning and multitask learning baselines.
arXiv Detail & Related papers (2021-03-14T16:17:26Z)
- An Online Multilingual Hate speech Recognition System [13.87667165678441]
We analyse six datasets by combining them into a single homogeneous dataset and classify them into three classes, abusive, hateful or neither.
We create a tool which identifies and scores a page with effective metric in near-real time and uses the same as feedback to re-train our model.
We prove the competitive performance of our multilingual model on two languages, English and Hindi, leading to comparable or superior performance to most monolingual models.
arXiv Detail & Related papers (2020-11-23T16:33:48Z)
- Transfer Learning for Hate Speech Detection in Social Media [14.759208309842178]
This paper uses a transfer learning technique to leverage two independent datasets jointly.
We build an interpretable two-dimensional visualization tool of the constructed hate speech representation -- dubbed the Map of Hate.
We show that the joint representation boosts prediction performances when only a limited amount of supervision is available.
arXiv Detail & Related papers (2019-06-10T08:00:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.