Hate Speech Classification Using SVM and Naive Bayes
- URL: http://arxiv.org/abs/2204.07057v1
- Date: Mon, 21 Mar 2022 17:15:38 GMT
- Title: Hate Speech Classification Using SVM and Naive Bayes
- Authors: D.C. Asogwa, C.I. Chukwuneke, C.C. Ngene, G.N. Anigbogu
- Abstract summary: Many countries have developed laws to avoid online hate speech.
But as online content continues to grow, so does the spread of hate speech.
It is important to automatically process the online user contents to detect and remove hate speech.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The spread of hatred that was formerly limited to verbal communications has
rapidly moved over the Internet. Social media and community forums that allow
people to discuss and express their opinions are becoming platforms for the
spreading of hate messages. Many countries have enacted laws against online
hate speech and hold the companies that run social media platforms responsible
for failing to remove it. But as online content continues to grow, so does the
spread of hate speech. Manual analysis of hate speech on online platforms is
infeasible given the sheer volume of data; it is expensive and time consuming.
It is therefore important to process online user content automatically to
detect and remove hate speech from online media. Many recent approaches suffer
from an interpretability problem: it can be difficult to understand why a
system makes the decisions it does.
Through this work, some solutions for the problem of automatic detection of
hate messages were proposed using the Support Vector Machine (SVM) and Naïve
Bayes algorithms. This achieved near state-of-the-art performance while being
simpler and producing more easily interpretable decisions than other methods.
Empirical evaluation of this technique has resulted in a classification
accuracy of approximately 99% and 50% for SVM and NB respectively over the test
set.
Keywords: classification; hate speech; feature extraction; algorithm;
supervised learning
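To make the approach described in the abstract concrete, here is a minimal sketch that wires TF-IDF features into an SVM and a Naive Bayes classifier with scikit-learn. The toy corpus, preprocessing, and hyperparameters are illustrative assumptions, not the authors' dataset or configuration.

```python
# Minimal sketch in the spirit of the abstract: TF-IDF features fed to an SVM
# and a Naive Bayes model. The corpus below is a placeholder; the paper's data,
# preprocessing, and hyperparameters are not specified here and are assumed.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from sklearn.svm import LinearSVC

texts = ["you people are vermin", "great match yesterday",
         "I hate this weather", "go back to where you came from"]
labels = [1, 0, 0, 1]  # 1 = hate speech, 0 = not hate speech (toy labels)

X_train, X_test, y_train, y_test = train_test_split(
    texts, labels, test_size=0.5, random_state=0, stratify=labels)

for name, clf in [("SVM", LinearSVC()), ("Naive Bayes", MultinomialNB())]:
    # Bag-of-words/TF-IDF feature extraction followed by the classifier.
    model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
    model.fit(X_train, y_train)
    print(name, "accuracy:", accuracy_score(y_test, model.predict(X_test)))
```

With a real labeled corpus in place of the toy lists, this is the standard supervised pipeline the keywords point to: feature extraction, then training and evaluating each classifier on a held-out test set.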
Related papers
- NLP Systems That Can't Tell Use from Mention Censor Counterspeech, but Teaching the Distinction Helps [43.40965978436158]
Counterspeech that refutes problematic content often mentions harmful language but is not harmful itself.
We show that even recent language models fail at distinguishing use from mention.
This failure propagates to two key downstream tasks: misinformation and hate speech detection.
arXiv Detail & Related papers (2024-04-02T05:36:41Z)
- Towards Interpretable Hate Speech Detection using Large Language Model-extracted Rationales [15.458557611029518]
Social media platforms are a prominent arena for users to engage in interpersonal discussions and express opinions.
There arises a need to automatically identify and flag instances of hate speech.
We propose to use state-of-the-art Large Language Models (LLMs) to extract features in the form of rationales from the input text.
arXiv Detail & Related papers (2024-03-19T03:22:35Z)
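As a rough illustration of the rationale-extraction idea summarized in the entry above, the sketch below asks an LLM for a short explanation and appends it to the post before classification. The prompt wording and the query_llm helper are hypothetical placeholders; the cited paper's actual prompts, models, and training setup are not reproduced here.

```python
# Illustrative sketch of "LLM-extracted rationales as features".
# query_llm is a stub: in practice it would call a chat-style LLM API
# or a local model; the canned answer below is for demonstration only.

def build_rationale_prompt(post: str) -> str:
    # Ask the model which phrases, if any, make the post hateful.
    return (
        "Read the social media post below and explain in one or two sentences "
        "which phrases, if any, target a protected group with hateful intent.\n\n"
        f"Post: {post}\nRationale:"
    )

def query_llm(prompt: str) -> str:
    # Placeholder for an actual LLM call.
    return "The phrase 'you people are vermin' dehumanizes a group of people."

def rationale_augmented_input(post: str) -> str:
    # The rationale is appended to the post so a downstream classifier
    # (e.g., a fine-tuned transformer or the TF-IDF pipeline above)
    # can use it as an additional, human-readable feature.
    rationale = query_llm(build_rationale_prompt(post))
    return f"{post} [RATIONALE] {rationale}"

print(rationale_augmented_input("you people are vermin"))
```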
- An Investigation of Large Language Models for Real-World Hate Speech Detection [46.15140831710683]
A major limitation of existing methods is that hate speech detection is a highly contextual problem.
Recently, large language models (LLMs) have demonstrated state-of-the-art performance in several natural language tasks.
Our study reveals that a meticulously crafted reasoning prompt can effectively capture the context of hate speech.
arXiv Detail & Related papers (2024-01-07T00:39:33Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Assessing the impact of contextual information in hate speech detection [0.48369513656026514]
We provide a novel corpus for contextualized hate speech detection based on user responses to news posts from media outlets on Twitter.
This corpus was collected in the Rioplatense dialectal variety of Spanish and focuses on hate speech associated with the COVID-19 pandemic.
arXiv Detail & Related papers (2022-10-02T09:04:47Z)
- A Review of Challenges in Machine Learning based Automated Hate Speech Detection [0.966840768820136]
We focus on challenges faced by machine learning or deep learning based solutions to hate speech identification.
At the top level, we distinguish between data level, model level, and human level challenges.
This survey will help researchers to design their solutions more efficiently in the domain of hate speech detection.
arXiv Detail & Related papers (2022-09-12T14:56:14Z)
- Deep Learning for Hate Speech Detection: A Comparative Study [54.42226495344908]
We present here a large-scale empirical comparison of deep and shallow hate-speech detection methods.
Our goal is to illuminate progress in the area, and identify strengths and weaknesses in the current state-of-the-art.
In doing so we aim to provide guidance as to the use of hate-speech detection in practice, quantify the state-of-the-art, and identify future research directions.
arXiv Detail & Related papers (2022-02-19T03:48:20Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply it to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
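The label imbalance noted in the entry above (many more non-hate than hate examples) is often mitigated by weighting the rare class more heavily during training. The sketch below shows one generic way to do this with scikit-learn class weights; it is an illustrative remedy, not necessarily the rebalancing strategy used in the cited paper.

```python
# Generic class-weighting sketch for imbalanced hate-speech labels.
import numpy as np
from sklearn.utils.class_weight import compute_class_weight

labels = np.array([0] * 90 + [1] * 10)   # 90% non-hate, 10% hate (toy ratio)
weights = compute_class_weight(class_weight="balanced",
                               classes=np.array([0, 1]), y=labels)
print(dict(zip([0, 1], weights)))        # e.g., {0: ~0.56, 1: 5.0}

# Many classifiers accept the same re-weighting directly, e.g.
# LinearSVC(class_weight="balanced") or LogisticRegression(class_weight="balanced").
```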
- Unsupervised Domain Adaptation for Hate Speech Detection Using a Data Augmentation Approach [6.497816402045099]
We propose an unsupervised domain adaptation approach to augment labeled data for hate speech detection.
We show our approach improves Area under the Precision/Recall curve by as much as 42% and recall by as much as 278%.
arXiv Detail & Related papers (2021-07-27T15:01:22Z)
- Hate speech detection using static BERT embeddings [0.9176056742068814]
Hate speech, i.e. abusive speech that targets specific group characteristics, is emerging as a major concern.
In this paper, we analyze the performance of hate speech detection by replacing or integrating the word embeddings.
In comparison to fine-tuned BERT, one metric that significantly improved is specificity.
arXiv Detail & Related papers (2021-06-29T16:17:10Z)
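One plausible reading of the "static BERT embeddings" in the entry above is to take vectors from BERT's non-contextual input embedding layer and pool them into a sentence feature for a conventional classifier. The sketch below shows that reading with the transformers library; the cited paper's exact extraction and integration procedure may differ.

```python
# Sketch: derive "static" (non-contextual) token vectors from BERT's input
# embedding layer and mean-pool them into a sentence feature.
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")
embedding_layer = model.get_input_embeddings()   # static token-embedding matrix

def static_sentence_vector(text: str) -> torch.Tensor:
    ids = tokenizer(text, return_tensors="pt")["input_ids"]
    with torch.no_grad():
        token_vectors = embedding_layer(ids)      # (1, seq_len, 768)
    return token_vectors.mean(dim=1).squeeze(0)   # mean-pooled sentence vector

vec = static_sentence_vector("you people are vermin")
print(vec.shape)   # torch.Size([768]); feed this to any downstream classifier
```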
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.