Characterizing the Entities in Harmful Memes: Who is the Hero, the
Villain, the Victim?
- URL: http://arxiv.org/abs/2301.11219v2
- Date: Mon, 10 Apr 2023 20:07:38 GMT
- Title: Characterizing the Entities in Harmful Memes: Who is the Hero, the
Villain, the Victim?
- Authors: Shivam Sharma, Atharva Kulkarni, Tharun Suresh, Himanshi Mathur,
Preslav Nakov, Md. Shad Akhtar, Tanmoy Chakraborty
- Abstract summary: We aim to understand whether the meme glorifies, vilifies, or victimizes each entity it refers to.
Our proposed model achieves an improvement of 4% over the best baseline and 1% over the best competing stand-alone submission.
- Score: 39.55435707149863
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Memes can sway people's opinions over social media as they combine visual and
textual information in an easy-to-consume manner. Since memes instantly turn
viral, it becomes crucial to infer their intent and potentially associated
harmfulness to take timely measures as needed. A common problem associated with
meme comprehension lies in detecting the entities referenced and characterizing
the role of each of these entities. Here, we aim to understand whether the meme
glorifies, vilifies, or victimizes each entity it refers to. To this end, we
address the task of role identification of entities in harmful memes, i.e.,
detecting who is the 'hero', the 'villain', and the 'victim' in the meme, if
any. We use HVVMemes, a dataset of memes on US politics and COVID-19,
released recently as part of the CONSTRAINT@ACL-2022 shared task. It contains
memes, the entities referenced, and their associated roles: hero, villain,
victim, and other. We further design VECTOR (Visual-semantic role dEteCToR), a
robust multi-modal framework for the task, which integrates entity-based
contextual information into the multi-modal representation, and we compare it
to several standard unimodal (text-only or image-only) and multi-modal
(image+text) models.
Our experimental results show that our proposed model achieves an improvement
of 4% over the best baseline and 1% over the best competing stand-alone
submission from the shared task. Besides presenting an extensive experimental
setup with comparative analyses, we also highlight the challenges encountered
in addressing the complex task of semantic role labeling within memes.
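To make the modeling idea concrete, here is a minimal sketch of an entity-aware multimodal role classifier. This is not the published VECTOR architecture; it only illustrates the abstract's core idea of fusing image, meme-text, and entity representations before a four-way role prediction. All dimensions, layer choices, and the assumption of precomputed CLIP/BERT-style embeddings are illustrative.

```python
import torch
import torch.nn as nn

ROLES = ["hero", "villain", "victim", "other"]

class RoleClassifier(nn.Module):
    """Hypothetical sketch, not the published VECTOR model: fuse image,
    meme-text, and entity embeddings, then predict one of four roles."""

    def __init__(self, img_dim=512, txt_dim=768, ent_dim=768, hidden=512):
        super().__init__()
        # Project each modality into a shared space.
        self.img_proj = nn.Linear(img_dim, hidden)
        self.txt_proj = nn.Linear(txt_dim, hidden)
        self.ent_proj = nn.Linear(ent_dim, hidden)
        # Fused representation -> role logits.
        self.classifier = nn.Sequential(
            nn.Linear(3 * hidden, hidden),
            nn.ReLU(),
            nn.Dropout(0.1),
            nn.Linear(hidden, len(ROLES)),
        )

    def forward(self, img_emb, txt_emb, ent_emb):
        fused = torch.cat(
            [self.img_proj(img_emb), self.txt_proj(txt_emb), self.ent_proj(ent_emb)],
            dim=-1,
        )
        return self.classifier(fused)

# Toy usage with random stand-ins for precomputed embeddings
# (e.g., CLIP image features and BERT text/entity features).
model = RoleClassifier()
logits = model(torch.randn(2, 512), torch.randn(2, 768), torch.randn(2, 768))
print([ROLES[i] for i in logits.argmax(dim=-1).tolist()])
```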
Related papers
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for the visual and textual explanation of a meme (a minimal CLIP sketch follows the list below).
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- Mapping Memes to Words for Multimodal Hateful Meme Classification [26.101116761577796]
Some memes take a malicious turn, promoting hateful content and perpetuating discrimination.
We propose a novel approach named ISSUES for multimodal hateful meme classification.
Our method achieves state-of-the-art results on the Hateful Memes Challenge and HarMeme datasets.
arXiv Detail & Related papers (2023-10-12T14:38:52Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It can reduce the error rate for harmful target identification by up to 9 points absolute over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes remain understudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Detecting Harmful Memes and Their Targets [27.25262711136056]
We present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19.
In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
arXiv Detail & Related papers (2021-09-24T17:11:42Z)
- MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets [28.877314859737197]
We aim to solve two novel tasks: detecting harmful memes and identifying the social entities they target.
We propose MOMENTA, a novel multimodal (text + image) deep neural model, which uses global and local perspectives to detect harmful memes.
arXiv Detail & Related papers (2021-09-11T04:29:32Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
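Several of the papers above (e.g., ISSUES and MultiBully-Ex) build on Contrastive Language-Image Pretraining (CLIP). The snippet below is a minimal zero-shot sketch of that shared starting point, scoring a meme image against textual label prompts with a pretrained CLIP model; it is not either paper's actual method (both add task-specific components), and the prompts and file path are placeholder assumptions.

```python
# Minimal zero-shot CLIP scoring sketch, not the ISSUES or MultiBully-Ex
# method: it only shows the common CLIP starting point these papers share.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Hypothetical prompts; real systems tune these and train classifiers on top.
prompts = ["a hateful meme", "a harmless meme"]
image = Image.open("meme.png")  # placeholder path to a meme image

# Score the image against each prompt and normalize to probabilities.
inputs = processor(text=prompts, images=image, return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(prompts, probs[0].tolist())))
```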