Joint Modelling of Emotion and Abusive Language Detection
- URL: http://arxiv.org/abs/2005.14028v1
- Date: Thu, 28 May 2020 14:08:40 GMT
- Title: Joint Modelling of Emotion and Abusive Language Detection
- Authors: Santhosh Rajamanickam, Pushkar Mishra, Helen Yannakoudakis, Ekaterina
Shutova
- Abstract summary: We present the first joint model of emotion and abusive language detection, experimenting in a multi-task learning framework.
Our results demonstrate that incorporating affective features leads to significant improvements in abuse detection performance across datasets.
- Score: 26.18171134454037
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rise of online communication platforms has been accompanied by some
undesirable effects, such as the proliferation of aggressive and abusive
behaviour online. Aiming to tackle this problem, the natural language
processing (NLP) community has experimented with a range of techniques for
abuse detection. While achieving substantial success, these methods have so far
only focused on modelling the linguistic properties of the comments and the
online communities of users, disregarding the emotional state of the users and
how this might affect their language. The latter is, however, inextricably
linked to abusive behaviour. In this paper, we present the first joint model of
emotion and abusive language detection, experimenting in a multi-task learning
framework that allows one task to inform the other. Our results demonstrate
that incorporating affective features leads to significant improvements in
abuse detection performance across datasets.
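The multi-task setup described in the abstract can be sketched minimally: a shared encoder produces one representation, two task-specific heads (emotion and abuse) classify from it, and training optimizes a weighted sum of the two losses so each task can inform the other. The toy sketch below, in plain Python, is a hypothetical illustration only; the dimensions, random weights, labels, and the task-weighting hyperparameter `alpha` are all invented and do not reflect the paper's actual architecture.

```python
import math
import random

random.seed(0)

def softmax(zs):
    m = max(zs)
    exps = [math.exp(z - m) for z in zs]
    s = sum(exps)
    return [e / s for e in exps]

def cross_entropy(probs, label):
    return -math.log(probs[label])

def matvec(W, x):
    # W is a list of per-output weight rows; x is the input vector
    return [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

def rand_matrix(rows, cols):
    return [[random.uniform(-0.1, 0.1) for _ in range(cols)] for _ in range(rows)]

# Hypothetical toy dimensions: input features, shared layer, class counts
D_IN, D_SHARED, N_EMOTION, N_ABUSE = 6, 4, 3, 2

W_shared = rand_matrix(D_SHARED, D_IN)        # shared encoder
W_emotion = rand_matrix(N_EMOTION, D_SHARED)  # emotion classification head
W_abuse = rand_matrix(N_ABUSE, D_SHARED)      # abuse classification head

def forward(x):
    # One shared representation feeds both task heads
    h = [math.tanh(z) for z in matvec(W_shared, x)]
    return softmax(matvec(W_emotion, h)), softmax(matvec(W_abuse, h))

# One toy example with gold labels for both tasks (invented)
x = [0.5, -1.0, 0.3, 0.8, -0.2, 0.1]
y_emotion, y_abuse = 1, 0

p_emotion, p_abuse = forward(x)
alpha = 0.5  # hypothetical task-weighting hyperparameter
joint_loss = (alpha * cross_entropy(p_emotion, y_emotion)
              + (1 - alpha) * cross_entropy(p_abuse, y_abuse))
print(f"joint loss: {joint_loss:.4f}")
```

In a real multi-task learner, gradients from both losses would flow back through the shared encoder, which is how affective supervision can shape the representation used for abuse detection.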
Related papers
- Multimodal Fusion with LLMs for Engagement Prediction in Natural Conversation [70.52558242336988]
We focus on predicting engagement in dyadic interactions by scrutinizing verbal and non-verbal cues, aiming to detect signs of disinterest or confusion.
In this work, we collect a dataset featuring 34 participants engaged in casual dyadic conversations, each providing self-reported engagement ratings at the end of each conversation.
We introduce a novel fusion strategy using Large Language Models (LLMs) to integrate multiple behavior modalities into a multimodal transcript.
arXiv Detail & Related papers (2024-09-13T18:28:12Z)
- Modulating Language Model Experiences through Frictions [56.17593192325438]
Over-consumption of language model outputs risks propagating unchecked errors in the short-term and damaging human capabilities in the long-term.
We propose selective frictions for language model experiences, inspired by behavioral science interventions, to dampen misuse.
arXiv Detail & Related papers (2024-06-24T16:31:11Z)
- Examining Temporal Bias in Abusive Language Detection [3.465144840147315]
Machine learning models have been developed to automatically detect abusive language.
These models can suffer from temporal bias, the phenomenon in which topics, language use or social norms change over time.
This study investigates the nature and impact of temporal bias in abusive language detection across various languages.
arXiv Detail & Related papers (2023-09-25T13:59:39Z)
- Fine-Tuning Llama 2 Large Language Models for Detecting Online Sexual Predatory Chats and Abusive Texts [2.406214748890827]
This paper proposes an approach to detecting online sexual predatory chats and abusive language using the open-source pretrained Llama 2 7B-parameter model.
We fine-tune the LLM using datasets with different sizes, imbalance degrees, and languages (i.e., English, Roman Urdu and Urdu).
Experimental results show a strong performance of the proposed approach, which performs proficiently and consistently across three distinct datasets.
arXiv Detail & Related papers (2023-08-28T16:18:50Z)
- Hate Speech and Offensive Language Detection using an Emotion-aware Shared Encoder [1.8734449181723825]
Existing works on hate speech and offensive language detection produce promising results based on pre-trained transformer models.
This paper presents a multi-task joint learning approach that combines external emotional features extracted from other corpora.
Our findings demonstrate that emotional knowledge helps to more reliably identify hate speech and offensive language across datasets.
arXiv Detail & Related papers (2023-02-17T09:31:06Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes significantly to countering malicious information by developing multilingual tools to simulate and detect new methods of content moderation evasion.
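The keyword-twisting evasion described above can be illustrated with a minimal, hypothetical normalizer: undo common character substitutions and separator padding before matching against a blocklist. The substitution table, helper names, and blocklist below are invented for illustration; the article's actual multilingual tools are far more sophisticated than this sketch.

```python
# Hypothetical leetspeak substitution table of the kind evaders use
LEET = str.maketrans({"4": "a", "3": "e", "1": "i",
                      "0": "o", "5": "s", "@": "a", "$": "s"})

def decamouflage(token: str) -> str:
    # Undo character substitutions, then strip separator padding
    return (token.translate(LEET)
                 .replace(".", "").replace("-", "").replace("_", ""))

# Toy blocklist, invented for illustration
BLOCKLIST = {"hate", "abuse"}

def is_camouflaged_hit(token: str) -> bool:
    # A token matches if its normalized form is on the blocklist
    return decamouflage(token.lower()) in BLOCKLIST

print(is_camouflaged_hit("h4t3"))    # "h4t3" normalizes to "hate"
print(is_camouflaged_hit("h.a.t.e")) # separator padding is stripped
```

A detector along these lines only catches known substitution patterns, which is why the article pairs detection with simulation of new evasion methods.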
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Enriching Abusive Language Detection with Community Context [0.3708656266586145]
Use of pejorative expressions can be benign or actively empowering.
Models for abuse detection misclassify these expressions as derogatory, inadvertently censoring productive conversations held by marginalized groups.
Our paper highlights how community context can improve classification outcomes in abusive language detection.
arXiv Detail & Related papers (2022-06-16T20:54:02Z)
- A New Generation of Perspective API: Efficient Multilingual Character-level Transformers [66.9176610388952]
We present the fundamentals behind the next version of the Perspective API from Google Jigsaw.
At the heart of the approach is a single multilingual token-free Charformer model.
We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings.
arXiv Detail & Related papers (2022-02-22T20:55:31Z)
- Towards Unbiased Visual Emotion Recognition via Causal Intervention [63.74095927462]
We propose a novel Interventional Emotion Recognition Network (IERN) to alleviate the negative effects brought by dataset bias.
A series of designed tests validate the effectiveness of IERN, and experiments on three emotion benchmarks demonstrate that IERN outperforms other state-of-the-art approaches.
arXiv Detail & Related papers (2021-07-26T10:40:59Z)
- The User behind the Abuse: A Position on Ethics and Explainability [25.791014642037585]
We discuss the role that modeling of users and online communities plays in abuse detection.
We then explore the ethical challenges of incorporating user and community information.
We propose properties that an explainable method should aim to exhibit.
arXiv Detail & Related papers (2021-03-31T16:20:37Z)
- On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment [59.995385574274785]
We show that, contrary to previous belief, negative interference also impacts low-resource languages.
We present a meta-learning algorithm that obtains better cross-lingual transferability and alleviates negative interference.
arXiv Detail & Related papers (2020-10-06T20:48:58Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.