MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their
Targets
- URL: http://arxiv.org/abs/2109.05184v1
- Date: Sat, 11 Sep 2021 04:29:32 GMT
- Title: MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their
Targets
- Authors: Shraman Pramanick, Shivam Sharma, Dimitar Dimitrov, Md Shad Akhtar,
Preslav Nakov and Tanmoy Chakraborty
- Abstract summary: We aim to solve two novel tasks: detecting harmful memes and identifying the social entities they target.
We propose MOMENTA, a novel multimodal (text + image) deep neural model, which uses global and local perspectives to detect harmful memes.
- Score: 28.877314859737197
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Internet memes have become powerful means to transmit political,
psychological, and socio-cultural ideas. Although memes are typically humorous,
recent years have witnessed an escalation of harmful memes used for trolling,
cyberbullying, and abusing social entities. Detecting such harmful memes is
challenging as they can be highly satirical and cryptic. Moreover, while
previous work has focused on specific aspects of memes such as hate speech and
propaganda, there has been little work on harm in general, and only one
specialized dataset for it. Here, we focus on bridging this gap. In particular,
we aim to solve two novel tasks: detecting harmful memes and identifying the
social entities they target. We further extend the recently released HarMeme
dataset to cover two prevalent topics, COVID-19 and US politics, and name the
resulting datasets Harm-C and Harm-P, respectively. We then propose
MOMENTA (MultimOdal framework for detecting harmful MemEs aNd Their tArgets), a
novel multimodal (text + image) deep neural model, which uses global and local
perspectives to detect harmful memes. MOMENTA identifies object proposals and
their attributes, and uses a multimodal model to perceive the comprehensive
context in which the objects and entities are portrayed in a given meme.
MOMENTA is interpretable and generalizable, and it outperforms numerous
baselines.
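As a rough illustration of the global-plus-local fusion idea described in the abstract, here is a minimal PyTorch sketch that combines a global image-text embedding with pooled features of detected objects. This is not the authors' published architecture; the encoders, dimensions, and fusion layers are placeholder assumptions.

```python
import torch
import torch.nn as nn

class GlobalLocalMemeClassifier(nn.Module):
    """Toy sketch: fuse a global multimodal embedding with pooled local
    object features, then classify harmfulness. Dimensions and encoders
    are illustrative assumptions, not the published MOMENTA configuration."""

    def __init__(self, global_dim=512, local_dim=256, num_classes=3):
        super().__init__()
        # Project both views into a shared space before fusion.
        self.global_proj = nn.Linear(global_dim, 256)
        self.local_proj = nn.Linear(local_dim, 256)
        self.classifier = nn.Sequential(
            nn.Linear(512, 128),
            nn.ReLU(),
            # e.g., harmless / partially harmful / very harmful
            nn.Linear(128, num_classes),
        )

    def forward(self, global_feat, object_feats):
        # global_feat: (B, global_dim) whole-meme image+text embedding,
        #   e.g., from a CLIP-style encoder (the global perspective).
        # object_feats: (B, N, local_dim) features of N object proposals
        #   (the local perspective).
        g = self.global_proj(global_feat)
        l = self.local_proj(object_feats).mean(dim=1)  # average-pool proposals
        return self.classifier(torch.cat([g, l], dim=-1))

model = GlobalLocalMemeClassifier()
logits = model(torch.randn(2, 512), torch.randn(2, 7, 256))
print(logits.shape)  # torch.Size([2, 3])
```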
Related papers
- Deciphering Hate: Identifying Hateful Memes and Their Targets [4.574830585715128]
We introduce BHM (Bengali Hateful Memes), a novel dataset for detecting hateful memes in Bengali.
The dataset consists of 7,148 memes with Bengali as well as code-mixed captions, tailored for two tasks: (i) detecting hateful memes, and (ii) detecting the social entities they target.
To solve these tasks, we propose DORA, a multimodal deep neural network that systematically extracts the significant modality features from the memes.
arXiv Detail & Related papers (2024-03-16T06:39:41Z) - Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme (a minimal CLIP scoring sketch follows this entry).
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Characterizing the Entities in Harmful Memes: Who is the Hero, the
- Characterizing the Entities in Harmful Memes: Who is the Hero, the Villain, the Victim? [39.55435707149863]
We aim to understand whether the meme glorifies, vilifies, or victimizes each entity it refers to.
Our proposed model achieves an improvement of 4% over the best baseline and 1% over the best competing stand-alone submission.
arXiv Detail & Related papers (2023-01-26T16:55:15Z) - DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z) - DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes (a generic NER sketch follows this entry).
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It reduces the error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z) - Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have not really been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z) - Detecting Harmful Memes and Their Targets [27.25262711136056]
We present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19.
In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
arXiv Detail & Related papers (2021-09-24T17:11:42Z) - Memes in the Wild: Assessing the Generalizability of the Hateful Memes
Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes in the wild are more diverse than traditional memes, including screenshots of conversations or text on a plain background (an OCR sketch follows this entry).
arXiv Detail & Related papers (2021-07-09T09:04:05Z) - Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.