ToxVis: Enabling Interpretability of Implicit vs. Explicit Toxicity
Detection Models with Interactive Visualization
- URL: http://arxiv.org/abs/2303.09402v1
- Date: Wed, 1 Mar 2023 17:24:15 GMT
- Title: ToxVis: Enabling Interpretability of Implicit vs. Explicit Toxicity
Detection Models with Interactive Visualization
- Authors: Uma Gunturi, Xiaohan Ding, Eugenia H. Rho
- Abstract summary: ToxVis is an interactive tool for classifying hate speech into three categories: implicit, explicit, and non-hateful.
ToxVis can serve as a resource for content moderators, social media platforms, and researchers working to combat the spread of hate speech online.
- Score: 7.0525662747824365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise of hate speech on online platforms has led to an urgent need for
effective content moderation. However, the subjective and multi-faceted nature
of hateful online content, including implicit hate speech, poses significant
challenges to human moderators and content moderation systems. To address this
issue, we developed ToxVis, a visually interactive and explainable tool for
classifying hate speech into three categories: implicit, explicit, and
non-hateful. We fine-tuned transformer-based models using RoBERTa, XLNet,
and GPT-3 and used deep learning interpretation techniques to provide
explanations for the classification results. ToxVis enables users to input
potentially hateful text and receive a classification result along with a
visual explanation of which words contributed most to the decision. By making
the classification process explainable, ToxVis provides a valuable tool for
understanding the nuances of hateful content and supporting more effective
content moderation. Our research contributes to the growing body of work aimed
at mitigating the harms caused by online hate speech and demonstrates the
potential for combining state-of-the-art natural language processing models
with interpretable deep learning techniques to address this critical issue.
Finally, ToxVis can serve as a resource for content moderators, social media
platforms, and researchers working to combat the spread of hate speech online.
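The abstract describes two steps: a classifier labels a text, and an attribution method highlights which words drove the decision. The core idea of word-level attribution can be sketched with a leave-one-out (occlusion) scheme: re-score the text with each word removed and treat the drop in score as that word's contribution. The toy lexicon scorer below is a hypothetical stand-in for the paper's fine-tuned transformers, shown only to illustrate the attribution mechanics.

```python
# Illustrative sketch only: ToxVis fine-tunes transformer models, but the
# word-attribution idea works with any scoring function. The lexicon and
# scores below are hypothetical stand-ins, not the paper's model.
TOXIC_LEXICON = {"hate": 0.9, "stupid": 0.6, "idiot": 0.8}

def toxicity_score(words):
    """Toy stand-in for a classifier's toxicity score over a word list."""
    return sum(TOXIC_LEXICON.get(w.lower(), 0.0) for w in words)

def word_attributions(text):
    """Score the full text, then re-score with each word occluded; the
    drop in score is that word's contribution to the decision."""
    words = text.split()
    base = toxicity_score(words)
    return {
        w: round(base - toxicity_score(words[:i] + words[i + 1:]), 3)
        for i, w in enumerate(words)
    }

attrib = word_attributions("you are a stupid idiot")
# The word with the largest score drop contributed most to the decision.
top = max(attrib, key=attrib.get)
```

In a real pipeline this occlusion loop would be replaced by a gradient-based attribution method over the transformer's inputs, but the visualization step (coloring words by contribution) consumes the same per-word scores either way.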
Related papers
- Towards Interpretable Hate Speech Detection using Large Language Model-extracted Rationales [15.458557611029518]
Social media platforms are a prominent arena for users to engage in interpersonal discussions and express opinions.
There arises a need to automatically identify and flag instances of hate speech.
We propose to use state-of-the-art Large Language Models (LLMs) to extract features in the form of rationales from the input text.
arXiv Detail & Related papers (2024-03-19T03:22:35Z)
- CoSyn: Detecting Implicit Hate Speech in Online Conversations Using a Context Synergized Hyperbolic Network [52.85130555886915]
CoSyn is a context-synergized neural network that explicitly incorporates user- and conversational context for detecting implicit hate speech in online conversations.
We show that CoSyn outperforms all our baselines in detecting implicit hate speech with absolute improvements in the range of 1.24% - 57.8%.
arXiv Detail & Related papers (2023-03-02T17:30:43Z)
- Countering Malicious Content Moderation Evasion in Online Social Networks: Simulation and Detection of Word Camouflage [64.78260098263489]
Twisting and camouflaging keywords are among the most used techniques to evade platform content moderation systems.
This article contributes to countering malicious information by developing multilingual tools to simulate and detect new methods of evading content moderation.
arXiv Detail & Related papers (2022-12-27T16:08:49Z)
- Assessing the impact of contextual information in hate speech detection [0.48369513656026514]
We provide a novel corpus for contextualized hate speech detection based on user responses to news posts from media outlets on Twitter.
This corpus was collected in the Rioplatense dialectal variety of Spanish and focuses on hate speech associated with the COVID-19 pandemic.
arXiv Detail & Related papers (2022-10-02T09:04:47Z)
- A New Generation of Perspective API: Efficient Multilingual Character-level Transformers [66.9176610388952]
We present the fundamentals behind the next version of the Perspective API from Google Jigsaw.
At the heart of the approach is a single multilingual token-free Charformer model.
We demonstrate that by forgoing static vocabularies, we gain flexibility across a variety of settings.
arXiv Detail & Related papers (2022-02-22T20:55:31Z)
- Addressing the Challenges of Cross-Lingual Hate Speech Detection [115.1352779982269]
In this paper we focus on cross-lingual transfer learning to support hate speech detection in low-resource languages.
We leverage cross-lingual word embeddings to train our neural network systems on the source language and apply it to the target language.
We investigate the issue of label imbalance of hate speech datasets, since the high ratio of non-hate examples compared to hate examples often leads to low model performance.
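The label-imbalance issue noted above, where non-hate examples heavily outnumber hate examples, is commonly addressed by weighting each class inversely to its frequency in the loss. A minimal sketch with hypothetical class counts:

```python
import math

# Illustrative sketch only: the class counts are hypothetical, and real
# training would apply these weights inside a framework's loss function.
def class_weights(counts):
    """Inverse-frequency weights, normalized so the mean weight is 1."""
    total = sum(counts.values())
    raw = {c: total / n for c, n in counts.items()}
    mean = sum(raw.values()) / len(raw)
    return {c: w / mean for c, w in raw.items()}

def weighted_nll(prob_of_true_class, label, weights):
    """Negative log-likelihood for one example, scaled by its class weight."""
    return -weights[label] * math.log(prob_of_true_class)

weights = class_weights({"hate": 1_000, "non_hate": 9_000})
# At the same predicted probability, misclassifying a minority-class
# (hate) example now costs more than a majority-class one.
loss_hate = weighted_nll(0.5, "hate", weights)
loss_non = weighted_nll(0.5, "non_hate", weights)
```

This keeps gradient pressure on the rare hate class, which would otherwise be dominated by the abundant non-hate examples.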
arXiv Detail & Related papers (2022-01-15T20:48:14Z)
- Textless Speech Emotion Conversion using Decomposed and Discrete Representations [49.55101900501656]
We decompose speech into discrete and disentangled learned representations, consisting of content units, F0, speaker, and emotion.
First, we modify the speech content by translating the content units to a target emotion, and then predict the prosodic features based on these units.
Finally, the speech waveform is generated by feeding the predicted representations into a neural vocoder.
arXiv Detail & Related papers (2021-11-14T18:16:42Z)
- Interpretable Multi-Modal Hate Speech Detection [32.36781061930129]
We propose a deep neural multi-modal model that can effectively capture the semantics of the text along with socio-cultural context in which a particular hate expression is made.
Our model is able to outperform the existing state-of-the-art hate speech classification approaches.
arXiv Detail & Related papers (2021-03-02T10:12:26Z)
- "Notic My Speech" -- Blending Speech Patterns With Multimedia [65.91370924641862]
We propose a view-temporal attention mechanism to model both the view dependence and the visemic importance in speech recognition and understanding.
Our proposed method outperformed the existing work by 4.99% in terms of the viseme error rate.
We show that there is a strong correlation between our model's understanding of multi-view speech and the human perception.
arXiv Detail & Related papers (2020-06-12T06:51:55Z)
- Investigating Deep Learning Approaches for Hate Speech Detection in Social Media [20.974715256618754]
The misuse of freedom of expression has led to an increase in various cyber crimes and anti-social activities.
Hate speech is one such issue that needs to be taken seriously; left unaddressed, it could threaten the integrity of the social fabric.
In this paper, we propose deep learning approaches utilizing various embeddings for detecting different types of hate speech in social media.
arXiv Detail & Related papers (2020-05-29T17:28:46Z)
- Transfer Learning for Hate Speech Detection in Social Media [14.759208309842178]
This paper uses a transfer learning technique to leverage two independent datasets jointly.
We build an interpretable two-dimensional visualization tool of the constructed hate speech representation -- dubbed the Map of Hate.
We show that the joint representation boosts prediction performances when only a limited amount of supervision is available.
arXiv Detail & Related papers (2019-06-10T08:00:58Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.