Improving Multimodal Hateful Meme Detection Exploiting LMM-Generated Knowledge
- URL: http://arxiv.org/abs/2504.09914v1
- Date: Mon, 14 Apr 2025 06:23:44 GMT
- Title: Improving Multimodal Hateful Meme Detection Exploiting LMM-Generated Knowledge
- Authors: Maria Tzelepi, Vasileios Mezaris
- Abstract summary: Detecting hateful content in memes has emerged as a task of critical importance. We propose to address the task leveraging knowledge encoded in powerful Large Multimodal Models (LMMs). Specifically, we propose to exploit LMMs in a two-fold manner. First, by extracting knowledge oriented to the hateful meme detection task in order to build strong meme representations.
- Score: 11.801596051153725
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memes have become a dominant form of communication on social media in recent years. Memes are typically humorous and harmless; however, some memes promote hate speech and are thereby harmful to individuals and groups based on their identity. Detecting hateful content in memes has therefore emerged as a task of critical importance. The need to understand the complex interactions between images and their embedded text makes hateful meme detection a challenging multimodal task. In this paper we propose to address this task by leveraging knowledge encoded in powerful Large Multimodal Models (LMMs). Specifically, we propose to exploit LMMs in a two-fold manner. First, we extract knowledge oriented to the hateful meme detection task in order to build strong meme representations: generic semantic descriptions, along with the emotions that the images and their embedded texts elicit, are extracted and then used to train a simple classification head for hateful meme detection. Second, we develop a novel hard-mining approach that introduces LMM-encoded knowledge directly into the training process, providing further improvements. Extensive experiments on two datasets validate the effectiveness of the proposed method, which achieves state-of-the-art performance. Our code and trained models are publicly available at: https://github.com/IDT-ITI/LMM-CLIP-meme.
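The abstract's first idea (concatenating LMM-generated descriptions and elicited emotions with the meme's own features, then training a simple classification head) can be sketched roughly as follows. This is a minimal illustration only: the embedding dimensions, variable names, and the single-linear-layer head are assumptions, not the paper's exact architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical pre-computed embeddings for a batch of 4 memes (e.g. from a
# CLIP-style encoder). In the paper's pipeline, desc_emb and emotion_emb
# would encode LMM-generated semantic descriptions and elicited emotions.
image_emb = rng.normal(size=(4, 512))    # meme images
text_emb = rng.normal(size=(4, 512))     # embedded (overlaid) text
desc_emb = rng.normal(size=(4, 512))     # LMM-generated descriptions
emotion_emb = rng.normal(size=(4, 512))  # LMM-elicited emotions

# Build strong meme representations by concatenating all sources.
features = np.concatenate([image_emb, text_emb, desc_emb, emotion_emb], axis=1)

# A "simple classification head": one linear layer with a sigmoid output.
W = rng.normal(scale=0.01, size=(features.shape[1],))
b = 0.0

def predict_hateful(x, W, b):
    """Return the probability that each meme in the batch is hateful."""
    logits = x @ W + b
    return 1.0 / (1.0 + np.exp(-logits))

probs = predict_hateful(features, W, b)
print(probs.shape)  # (4,)
```

In practice the head would be trained with a binary cross-entropy loss on labeled memes; the sketch only shows the representation-building and forward pass.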
Related papers
- Detecting and Mitigating Hateful Content in Multimodal Memes with Vision-Language Models [12.929357709840975]
Multimodal memes are sometimes misused to disseminate hate speech against individuals or groups.
We propose a definition-guided prompting technique for detecting hateful memes, and a unified framework for mitigating hateful content in memes, named UnHateMeme.
Our framework, integrated with Vision-Language Models, demonstrates a strong capability to convert hateful memes into non-hateful forms.
arXiv Detail & Related papers (2025-04-30T19:48:12Z) - Demystifying Hateful Content: Leveraging Large Multimodal Models for Hateful Meme Detection with Explainable Decisions [4.649093665157263]
In this paper, we introduce IntMeme, a novel framework that leverages Large Multimodal Models (LMMs) for hateful meme classification with explainable decisions. IntMeme addresses the dual challenges of improving both accuracy and explainability in meme moderation. Our approach addresses the opacity and misclassification issues associated with PT-VLMs, optimizing the use of LMMs for hateful meme detection.
arXiv Detail & Related papers (2025-02-16T10:45:40Z) - Evolver: Chain-of-Evolution Prompting to Boost Large Multimodal Models for Hateful Meme Detection [49.122777764853055]
We explore the potential of Large Multimodal Models (LMMs) for hateful meme detection. We propose Evolver, which incorporates LMMs via Chain-of-Evolution (CoE) Prompting. Evolver simulates the evolving and expressing process of memes and reasons through LMMs in a step-by-step manner.
arXiv Detail & Related papers (2024-07-30T17:51:44Z) - MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing [53.30190591805432]
We introduce MemeMQA, a multimodal question-answering framework to solicit accurate responses to structured questions.
We also propose ARSENAL, a novel two-stage multimodal framework to address MemeMQA.
arXiv Detail & Related papers (2024-05-18T07:44:41Z) - Exploring the Limits of Zero Shot Vision Language Models for Hate Meme Detection: The Vulnerabilities and their Interpretations [9.970031080934003]
We study the effectiveness of modern-day vision language models (VLMs) in handling intricate tasks such as hate meme detection. We perform thorough prompt engineering and query state-of-the-art VLMs using various prompt types to detect hateful/harmful memes.
arXiv Detail & Related papers (2024-02-19T15:03:04Z) - Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Beneath the Surface: Unveiling Harmful Memes with Multimodal Reasoning Distilled from Large Language Models [17.617187709968242]
Existing harmful meme detection approaches only recognize superficial harm-indicative signals in an end-to-end classification manner.
We propose a novel generative framework to learn reasonable thoughts from Large Language Models for better multimodal fusion.
Our proposed approach outperforms state-of-the-art methods on the harmful meme detection task.
arXiv Detail & Related papers (2023-12-09T01:59:11Z) - A Template Is All You Meme [76.03172165923058]
We create a knowledge base composed of more than 5,200 meme templates, information about them, and 54,000 examples of template instances. To investigate the semantic signal of meme templates, we show that we can match memes in datasets to base templates contained in our knowledge base with a distance-based lookup. Our examination of meme templates results in state-of-the-art performance for every dataset we consider, paving the way for analysis grounded in templateness.
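The distance-based lookup this summary mentions can be sketched as nearest-neighbor retrieval in an embedding space. This is a minimal illustration under assumptions: the embedding dimension, the use of cosine similarity, and all variable names are hypothetical, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical embeddings: rows of template_embs stand for known meme
# templates in the knowledge base; query_embs are memes to be matched
# (e.g. visual features from some image encoder).
template_embs = rng.normal(size=(5, 64))
query_embs = rng.normal(size=(3, 64))

def match_templates(queries, templates):
    """Match each query meme to its nearest template by cosine similarity."""
    q = queries / np.linalg.norm(queries, axis=1, keepdims=True)
    t = templates / np.linalg.norm(templates, axis=1, keepdims=True)
    sims = q @ t.T              # cosine similarity matrix (queries x templates)
    return sims.argmax(axis=1)  # index of the closest template per query

matches = match_templates(query_embs, template_embs)
print(matches.shape)  # (3,)
```

A real system at this scale (5,200+ templates) would likely use an approximate nearest-neighbor index rather than a dense similarity matrix, but the lookup principle is the same.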
arXiv Detail & Related papers (2023-11-11T19:38:14Z) - Mapping Memes to Words for Multimodal Hateful Meme Classification [26.101116761577796]
Some memes take a malicious turn, promoting hateful content and perpetuating discrimination.
We propose a novel approach named ISSUES for multimodal hateful meme classification.
Our method achieves state-of-the-art results on the Hateful Memes Challenge and HarMeme datasets.
arXiv Detail & Related papers (2023-10-12T14:38:52Z) - DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z) - DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It reduces the error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z) - Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z) - Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.