Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for
Multimodal Hate
- URL: http://arxiv.org/abs/2106.05903v1
- Date: Thu, 10 Jun 2021 16:29:42 GMT
- Title: Deciphering Implicit Hate: Evaluating Automated Detection Algorithms for
Multimodal Hate
- Authors: Austin Botelho and Bertie Vidgen and Scott A. Hale
- Abstract summary: This paper evaluates the role of semantic and multimodal context for detecting implicit and explicit hate.
We show that both text- and visual-enrichment improve model performance.
We find that all models perform better on content with full annotator agreement and that multimodal models are best at classifying the content where annotators disagree.
- Score: 2.68137173219451
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Accurate detection and classification of online hate is a difficult task.
Implicit hate is particularly challenging as such content tends to have unusual
syntax, polysemic words, and fewer markers of prejudice (e.g., slurs). This
problem is heightened with multimodal content, such as memes (combinations of
text and images), as they are often harder to decipher than unimodal content
(e.g., text alone). This paper evaluates the role of semantic and multimodal
context for detecting implicit and explicit hate. We show that both text- and
visual-enrichment improve model performance, with the multimodal model achieving
a higher F1 score (0.771) than the other models (0.544, 0.737, and 0.754). While
the unimodal-text context-aware (transformer) model was the most accurate on
the subtask of implicit hate detection, the multimodal model outperformed it
overall because of a lower propensity towards false positives. We find that all
models perform better on content with full annotator agreement and that
multimodal models are best at classifying the content where annotators
disagree. To conduct these investigations, we undertook high-quality annotation
of a sample of 5,000 multimodal entries. Tweets were annotated for primary
category, modality, and strategy. We make this corpus, along with the codebook,
code, and final model, freely available.
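The released model itself is not reproduced here; as a rough illustration of what a late-fusion multimodal (text + image) classifier can look like, the sketch below concatenates a text embedding and an image embedding before classifying. The class name, feature dimensions, and two-class output are illustrative assumptions, not the authors' architecture.
```python
# Minimal late-fusion sketch (assumption: this is NOT the paper's released
# model; dimensions, class count, and names are illustrative placeholders).
import torch
import torch.nn as nn

class LateFusionHateClassifier(nn.Module):
    """Concatenates a text embedding and an image embedding, then classifies."""

    def __init__(self, text_dim=768, image_dim=2048, hidden_dim=256, num_classes=2):
        super().__init__()
        self.fusion = nn.Sequential(
            nn.Linear(text_dim + image_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_emb, image_emb):
        # text_emb: (batch, text_dim), e.g. a [CLS] vector from a text transformer
        # image_emb: (batch, image_dim), e.g. pooled CNN/ViT features
        return self.fusion(torch.cat([text_emb, image_emb], dim=-1))

# Usage with random placeholder features:
model = LateFusionHateClassifier()
logits = model(torch.randn(4, 768), torch.randn(4, 2048))
print(logits.shape)  # torch.Size([4, 2])
```
In this framing, the unimodal baselines reported in the abstract correspond to using only one of the two branches.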
Related papers
- Leveraging Annotator Disagreement for Text Classification [3.6625157427847963]
It is common practice in text classification to use only a single majority label for model training, even when a dataset has been annotated by multiple annotators.
This paper proposes three strategies to leverage annotator disagreement for text classification: a probability-based multi-label method, an ensemble system, and instruction tuning.
arXiv Detail & Related papers (2024-09-26T06:46:53Z)
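As a rough illustration of the probability-based (soft-label) strategy mentioned in the entry above, the sketch below trains against the distribution of annotator votes rather than the majority label. The function name, tensor shapes, and example counts are hypothetical and not taken from that paper.
```python
# Hedged sketch: soft-target cross-entropy over annotator vote distributions.
import torch
import torch.nn.functional as F

def soft_label_loss(logits, annotator_counts):
    """logits: (batch, num_classes); annotator_counts: (batch, num_classes)
    raw counts of annotators choosing each class."""
    soft_targets = annotator_counts / annotator_counts.sum(dim=-1, keepdim=True)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(soft_targets * log_probs).sum(dim=-1).mean()

# Example: 3 annotators label two texts for {not hate, hate}.
counts = torch.tensor([[2., 1.], [0., 3.]])
logits = torch.tensor([[0.4, 0.1], [-1.0, 1.5]])
print(soft_label_loss(logits, counts))
```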
- Adapting Dual-encoder Vision-language Models for Paraphrased Retrieval [55.90407811819347]
We consider the task of paraphrased text-to-image retrieval, where a model aims to return similar results given a pair of paraphrased queries.
We train a dual-encoder model starting from a language model pretrained on a large text corpus.
Compared to public dual-encoder models such as CLIP and OpenCLIP, the model trained with our best adaptation strategy achieves a significantly higher ranking similarity for paraphrased queries.
arXiv Detail & Related papers (2024-05-06T06:30:17Z)
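As a rough illustration of the dual-encoder retrieval setup described in the entry above (not the authors' adapted model), the sketch below ranks images by cosine similarity to a query embedding; random vectors stand in for the outputs of a text tower and an image tower.
```python
# Hedged sketch: generic dual-encoder (CLIP-style) retrieval by cosine similarity.
import torch
import torch.nn.functional as F

def rank_images(query_emb, image_embs):
    """query_emb: (dim,); image_embs: (num_images, dim).
    Returns image indices sorted by descending cosine similarity."""
    sims = F.cosine_similarity(query_emb.unsqueeze(0), image_embs, dim=-1)
    return sims.argsort(descending=True)

# Two paraphrased queries should ideally rank the image set similarly.
torch.manual_seed(0)
images = torch.randn(5, 512)
query_a = torch.randn(512)
query_b = query_a + 0.05 * torch.randn(512)  # a nearby "paraphrase" embedding
print(rank_images(query_a, images))
print(rank_images(query_b, images))
```
Paraphrase robustness can then be checked by comparing the rankings produced by the two queries.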
- Text or Image? What is More Important in Cross-Domain Generalization Capabilities of Hate Meme Detection Models? [2.4899077941924967]
This paper delves into the formidable challenge of cross-domain generalization in multimodal hate meme detection.
We provide evidence supporting the hypothesis that the textual component alone enables existing multimodal classifiers of hateful memes to generalize across different domains.
Our evaluation on a newly created confounder dataset reveals higher performance on text confounders than on image confounders, with an average $\Delta$F1 of 0.18.
arXiv Detail & Related papers (2024-02-07T15:44:55Z)
- Enhancing Multimodal Compositional Reasoning of Visual Language Models with Generative Negative Mining [58.379339799777064]
Large-scale visual language models (VLMs) exhibit strong representation capacities, making them ubiquitous for enhancing image and text understanding tasks.
We propose a framework that not only mines in both directions but also generates challenging negative samples in both modalities.
Our code and dataset are released at https://ugorsahin.github.io/enhancing-multimodal-compositional-reasoning-of-vlm.html.
arXiv Detail & Related papers (2023-11-07T13:05:47Z)
- Towards Better Multi-modal Keyphrase Generation via Visual Entity Enhancement and Multi-granularity Image Noise Filtering [79.44443231700201]
Multi-modal keyphrase generation aims to produce a set of keyphrases that represent the core points of the input text-image pair.
The input text and image are often not perfectly matched, and thus the image may introduce noise into the model.
We propose a novel multi-modal keyphrase generation model, which not only enriches the model input with external knowledge, but also effectively filters image noise.
arXiv Detail & Related papers (2023-09-09T09:41:36Z)
- ARC-NLP at Multimodal Hate Speech Event Detection 2023: Multimodal Methods Boosted by Ensemble Learning, Syntactical and Entity Features [1.3190581566723918]
In the Russia-Ukraine war, both opposing factions heavily relied on text-embedded images as a vehicle for spreading propaganda and hate speech.
In this paper, we outline our methodologies for two subtasks of Multimodal Hate Speech Event Detection 2023.
For the first subtask, hate speech detection, we utilize multimodal deep learning models boosted by ensemble learning and syntactical text attributes.
For the second subtask, target detection, we employ multimodal deep learning models boosted by named entity features.
arXiv Detail & Related papers (2023-07-25T21:56:14Z)
- Caption Enriched Samples for Improving Hateful Memes Detection [78.5136090997431]
The Hateful Memes Challenge demonstrates the difficulty of determining whether a meme is hateful or not.
Neither unimodal language models nor multimodal vision-language models reach human-level performance.
arXiv Detail & Related papers (2021-09-22T10:57:51Z)
- Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation [75.82110684355979]
We introduce a two-stream model that translates images in input space using an object-aware transformer.
We then leverage the translation to construct an auxiliary sentence that provides multimodal information to a language model.
We achieve state-of-the-art performance on two multimodal Twitter datasets.
arXiv Detail & Related papers (2021-08-03T18:02:38Z)
- Detecting Hate Speech in Multi-modal Memes [14.036769355498546]
We focus on hate speech detection in multi-modal memes, which poses an interesting multi-modal fusion problem.
We address the Facebook Hateful Memes Challenge (Kiela et al., 2020), a binary classification task of predicting whether a meme is hateful or not.
arXiv Detail & Related papers (2020-12-29T18:30:00Z)
- Video Understanding as Machine Translation [53.59298393079866]
We tackle a wide variety of downstream video understanding tasks by means of a single unified framework.
We report performance gains over the state-of-the-art on several downstream tasks, including video classification (EPIC-Kitchens), question answering (TVQA), and captioning (TVC, YouCook2, and MSR-VTT).
arXiv Detail & Related papers (2020-06-12T14:07:04Z)
- The Hateful Memes Challenge: Detecting Hate Speech in Multimodal Memes [43.778346545763654]
This work proposes a new challenge set for multimodal classification, focusing on detecting hate speech in multimodal memes.
It is constructed such that unimodal models struggle and only multimodal models can succeed.
We find that state-of-the-art methods perform poorly compared to humans.
arXiv Detail & Related papers (2020-05-10T21:31:00Z)