On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive
Learning
- URL: http://arxiv.org/abs/2212.06573v2
- Date: Fri, 7 Jul 2023 14:24:04 GMT
- Title: On the Evolution of (Hateful) Memes by Means of Multimodal Contrastive
Learning
- Authors: Yiting Qu, Xinlei He, Shannon Pierson, Michael Backes, Yang Zhang,
Savvas Zannettou
- Abstract summary: We study how hateful memes are created by combining visual elements from multiple images or fusing textual information with a hateful image.
Using our framework on a dataset extracted from 4chan, we find 3.3K variants of the Happy Merchant meme.
We envision that our framework can be used to aid human moderators by flagging new variants of hateful memes.
- Score: 18.794226796466962
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The dissemination of hateful memes online has adverse effects on social media
platforms and the real world. Detecting hateful memes is challenging, in part
because of the evolutionary nature of memes: new hateful memes can
emerge by fusing hateful connotations with other cultural ideas or symbols. In
this paper, we propose a framework that leverages multimodal contrastive
learning models, in particular OpenAI's CLIP, to identify targets of hateful
content and systematically investigate the evolution of hateful memes. We find
that semantic regularities exist in CLIP-generated embeddings that describe
semantic relationships within the same modality (images) or across modalities
(images and text). Leveraging this property, we study how hateful memes are
created by combining visual elements from multiple images or fusing textual
information with a hateful image. We demonstrate the capabilities of our
framework for analyzing the evolution of hateful memes by focusing on
antisemitic memes, particularly the Happy Merchant meme. Using our framework on
a dataset extracted from 4chan, we find 3.3K variants of the Happy Merchant
meme, with some linked to specific countries, persons, or organizations. We
envision that our framework can be used to aid human moderators by flagging new
variants of hateful memes so that moderators can manually verify them and
mitigate the problem of hateful content online.
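As an illustration of the embedding regularities the abstract describes, here is a minimal sketch of image-text embedding arithmetic with an off-the-shelf CLIP model. It uses Hugging Face's CLIP implementation rather than the paper's exact pipeline, and every file path and text prompt below is a hypothetical placeholder:

```python
# Minimal sketch: probing CLIP embedding regularities, not the authors' exact
# pipeline. File paths and the text prompt are hypothetical placeholders.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def embed_image(path: str) -> torch.Tensor:
    inputs = processor(images=Image.open(path), return_tensors="pt")
    with torch.no_grad():
        feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize

def embed_text(text: str) -> torch.Tensor:
    inputs = processor(text=[text], return_tensors="pt", padding=True)
    with torch.no_grad():
        feats = model.get_text_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)  # unit-normalize

# Cross-modal regularity: fuse a base meme image with a textual concept and
# rank candidate memes by cosine similarity to the combined embedding.
base = embed_image("base_meme.png")               # placeholder path
concept = embed_text("a photo of a politician")   # placeholder prompt
query = base + concept
query = query / query.norm(dim=-1, keepdim=True)

candidates = ["candidate1.png", "candidate2.png"]  # placeholder paths
scores = {p: float(query @ embed_image(p).T) for p in candidates}
print(scores)  # higher cosine similarity => more likely a fused variant
```

In this sketch, a candidate meme whose embedding lies close to "base image + concept text" would be flagged as a potential variant; as the abstract envisions, a human moderator would still verify any flagged image.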
Related papers
- Evolver: Chain-of-Evolution Prompting to Boost Large Multimodal Models for Hateful Meme Detection [49.122777764853055]
We explore the potential of Large Multimodal Models (LMMs) for hateful meme detection.
We propose Evolver, which incorporates LMMs via Chain-of-Evolution (CoE) Prompting.
Evolver simulates how memes evolve and are expressed, reasoning through LMMs in a step-by-step manner.
arXiv Detail & Related papers (2024-07-30T17:51:44Z)
- What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation [0.9217021281095907]
Multimodal Internet Memes are now a ubiquitous fixture in online discourse.
Memetics is the process by which memes are imitated and transformed into symbols.
We develop a meme identification protocol which distinguishes meme from non-memetic content by recognising the memetics within it.
arXiv Detail & Related papers (2024-07-16T15:48:36Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) based approach is proposed for visual and textual explanation of memes.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- Mapping Memes to Words for Multimodal Hateful Meme Classification [26.101116761577796]
Some memes take a malicious turn, promoting hateful content and perpetuating discrimination.
We propose a novel approach named ISSUES for multimodal hateful meme classification.
Our method achieves state-of-the-art results on the Hateful Memes Challenge and HarMeme datasets.
arXiv Detail & Related papers (2023-10-12T14:38:52Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes remain understudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Feels Bad Man: Dissecting Automated Hateful Meme Detection Through the Lens of Facebook's Challenge [10.775419935941008]
We assess the efficacy of current state-of-the-art multimodal machine learning models for hateful meme detection.
We use two benchmark datasets comprising 12,140 and 10,567 images from 4chan's "Politically Incorrect" board (/pol/) and Facebook's Hateful Memes Challenge dataset, respectively.
We conduct three experiments to determine the importance of multimodality for classification performance and the influence of fringe Web communities on mainstream social platforms, and vice versa.
arXiv Detail & Related papers (2022-02-17T07:52:22Z)
- Detecting Harmful Memes and Their Targets [27.25262711136056]
We present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19.
In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
arXiv Detail & Related papers (2021-09-24T17:11:42Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR (a minimal OCR sketch follows this entry), and 2) memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
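Since captions in wild memes are not provided as metadata, a first preprocessing step is OCR. Below is a minimal, hypothetical sketch using the pytesseract wrapper, not the paper's actual extraction code; it assumes the Tesseract binary is installed and the image path is a placeholder:

```python
# Minimal OCR sketch, assuming the Tesseract binary and the pytesseract
# wrapper are installed; the image path is a hypothetical placeholder.
from PIL import Image
import pytesseract

def extract_caption(path: str) -> str:
    # Plain image_to_string handles typical bold meme captions; noisy
    # backgrounds generally need preprocessing (grayscale, thresholding).
    return pytesseract.image_to_string(Image.open(path)).strip()

print(extract_caption("wild_meme.png"))
```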
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection pipeline.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)