Detecting and Understanding Harmful Memes: A Survey
- URL: http://arxiv.org/abs/2205.04274v1
- Date: Mon, 9 May 2022 13:43:27 GMT
- Title: Detecting and Understanding Harmful Memes: A Survey
- Authors: Shivam Sharma, Firoj Alam, Md. Shad Akhtar, Dimitar Dimitrov, Giovanni
Da San Martino, Hamed Firooz, Alon Halevy, Fabrizio Silvestri, Preslav Nakov,
Tanmoy Chakraborty
- Abstract summary: We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are not really studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
- Score: 48.135415967633676
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The automatic identification of harmful content online is of major concern
for social media platforms, policymakers, and society. Researchers have studied
textual, visual, and audio content, but typically in isolation. Yet, harmful
content often combines multiple modalities, as in the case of memes, which are
of particular interest due to their viral nature. With this in mind, here we
offer a comprehensive survey with a focus on harmful memes. Based on a
systematic analysis of recent literature, we first propose a new typology of
harmful memes, and then we highlight and summarize the relevant state of the
art. One interesting finding is that many types of harmful memes are not really
studied, e.g., those featuring self-harm and extremism, partly due to the lack
of suitable datasets. We further find that existing datasets mostly capture
multi-class scenarios, which are not inclusive of the affective spectrum that
memes can represent. Another observation is that memes can propagate globally
through repackaging in different languages and that they can also be
multilingual, blending different cultures. We conclude by highlighting several
challenges related to multimodal semiotics, technological constraints and
non-trivial social engagement, and we present several open-ended aspects such
as delineating online harm and empirically examining related frameworks and
assistive interventions, which we believe will motivate and drive future
research.
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing [53.30190591805432]
We introduce MemeMQA, a multimodal question-answering framework to solicit accurate responses to structured questions.
We also propose ARSENAL, a novel two-stage multimodal framework to address MemeMQA.
arXiv Detail & Related papers (2024-05-18T07:44:41Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
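As a rough, generic illustration of CLIP-style image-text scoring (zero-shot matching of a meme image against candidate descriptions), the sketch below uses the Hugging Face CLIP API; it is not the MultiBully-Ex explanation pipeline, and the checkpoint name, image path, and candidate texts are illustrative assumptions.
```python
# Zero-shot image-text matching with CLIP (Hugging Face transformers).
# This only scores a meme image against candidate descriptions; the paper's
# actual explanation method is more involved.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("meme.png")  # hypothetical meme image
candidates = [
    "a meme that bullies or mocks a person",  # illustrative labels
    "a harmless humorous meme",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
with torch.no_grad():
    logits = model(**inputs).logits_per_image  # shape: (1, num_candidates)
probs = logits.softmax(dim=-1)
print(dict(zip(candidates, probs[0].tolist())))
```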
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- PromptMTopic: Unsupervised Multimodal Topic Modeling of Memes using
Large Language Models [7.388466146105024]
We propose PromptMTopic, a novel multimodal prompt-based model to learn topics from both text and visual modalities.
Our model effectively extracts and clusters topics learned from memes, considering the semantic interaction between the text and visual modalities.
Our work contributes to the understanding of the topics and themes of memes, a crucial form of communication in today's society.
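To give a flavor of the clustering stage only, here is a simplified sketch that groups meme texts into rough topics with TF-IDF and k-means; PromptMTopic itself relies on LLM prompting over both text and visual content, so this is a stand-in, and the example texts and parameters are assumptions.
```python
# Simplified topic clustering over meme texts (stand-in for the LLM-prompted
# topic extraction described in the paper).
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

meme_texts = [  # hypothetical OCR'd meme captions
    "when the vaccine queue is longer than the meme queue",
    "lockdown day 300 and my sourdough is sentient",
    "me explaining crypto to my cat",
    "my portfolio after one billionaire tweet",
]

vectorizer = TfidfVectorizer(stop_words="english")
X = vectorizer.fit_transform(meme_texts)

kmeans = KMeans(n_clusters=2, n_init=10, random_state=0).fit(X)

# Show the top terms per cluster as rough "topics".
terms = vectorizer.get_feature_names_out()
for c, center in enumerate(kmeans.cluster_centers_):
    top = [terms[i] for i in center.argsort()[::-1][:3]]
    print(f"cluster {c}: {top}")
```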
arXiv Detail & Related papers (2023-12-11T03:36:50Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems.
It can reduce the relative error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
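Below is a small sketch of the kind of named entity recognition step such a target-identification pipeline might start from, using spaCy on a meme's OCR text; it is not DISARM's actual model, and the example text and entity filtering are assumptions.
```python
# Extract candidate target entities (people, groups, organizations) from
# a meme's OCR text with spaCy NER.
# Requires: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")

ocr_text = "Tony Stark and Stark Industries strike again"  # hypothetical OCR output
doc = nlp(ocr_text)

candidate_targets = [
    (ent.text, ent.label_)
    for ent in doc.ents
    if ent.label_ in {"PERSON", "ORG", "NORP", "GPE"}  # plausible target types
]
print(candidate_targets)
```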
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
- TeamX@DravidianLangTech-ACL2022: A Comparative Analysis for Troll-Based
Meme Classification [21.32190107220764]
The spread of harmful content online has raised concerns among social media platforms, government agencies, policymakers, and society as a whole.
Among the different types of harmful content, trolling-based online content is one, where the idea is to post a message that is provocative, offensive, or menacing, with the intent to mislead the audience.
This study provides a comparative analysis of troll-based meme classification using textual, visual, and multimodal content.
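For reference, a unimodal textual baseline in such a comparison can be as simple as the sketch below (TF-IDF features with logistic regression); the labels and example texts are made up, and the study's actual models may differ.
```python
# Minimal text-only troll-meme classifier: TF-IDF + logistic regression.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = [  # hypothetical meme texts with troll / not-troll labels
    "you actually believed that? pathetic",
    "nobody wants you here, just leave",
    "happy friday everyone, stay safe",
    "look at this adorable cat in a tiny hat",
]
labels = [1, 1, 0, 0]  # 1 = troll, 0 = not troll

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)
print(clf.predict(["no one asked for your opinion"]))
```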
arXiv Detail & Related papers (2022-05-09T16:19:28Z)
- Detecting Harmful Memes and Their Targets [27.25262711136056]
We present HarMeme, the first benchmark dataset, containing 3,544 memes related to COVID-19.
In the first stage, we labeled a meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
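A minimal sketch of what a record under this two-stage annotation scheme could look like is given below; the field names and the example target category are hypothetical, and only the three harmfulness labels come from the paper.
```python
# Illustrative record structure for two-stage harmful-meme annotation:
# stage 1 assigns a harmfulness level, stage 2 annotates the target(s).
from dataclasses import dataclass, field
from enum import Enum
from typing import List

class HarmLevel(Enum):
    VERY_HARMFUL = "very harmful"
    PARTIALLY_HARMFUL = "partially harmful"
    HARMLESS = "harmless"

@dataclass
class MemeAnnotation:
    meme_id: str
    ocr_text: str
    harm_level: HarmLevel  # stage 1 label
    targets: List[str] = field(default_factory=list)  # stage 2, empty if harmless

example = MemeAnnotation(
    meme_id="covid_0001",                    # hypothetical identifier
    ocr_text="...",                          # meme text would go here
    harm_level=HarmLevel.PARTIALLY_HARMFUL,
    targets=["individual"],                  # illustrative target category
)
print(example)
```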
arXiv Detail & Related papers (2021-09-24T17:11:42Z)
- MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their
Targets [28.877314859737197]
We aim to solve two novel tasks: detecting harmful memes and identifying the social entities they target.
We propose MOMENTA, a novel multimodal (text + image) deep neural model, which uses global and local perspectives to detect harmful memes.
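For intuition about the general text-plus-image fusion idea (not MOMENTA's actual architecture, which combines global and local cues), here is a toy late-fusion classifier over precomputed feature vectors; all dimensions and the binary label set are assumptions.
```python
# Toy late-fusion classifier: concatenate precomputed global image and text
# features, then classify with a small MLP. Not the MOMENTA architecture.
import torch
import torch.nn as nn

class LateFusionMemeClassifier(nn.Module):
    def __init__(self, img_dim=512, txt_dim=512, hidden=256, num_classes=2):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden, num_classes),  # e.g. harmful vs. not harmful
        )

    def forward(self, img_feat, txt_feat):
        fused = torch.cat([img_feat, txt_feat], dim=-1)
        return self.head(fused)

model = LateFusionMemeClassifier()
logits = model(torch.randn(4, 512), torch.randn(4, 512))  # batch of 4 memes
print(logits.shape)  # torch.Size([4, 2])
```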
arXiv Detail & Related papers (2021-09-11T04:29:32Z)
- A Multimodal Memes Classification: A Survey and Open Research Issues [4.504833177846264]
Many memes uploaded each day to social media platforms need automatic censoring to curb misinformation and hate.
This study conducts a comprehensive review of meme classification, focusing on Visual-Linguistic (VL) multimodal problems and cutting-edge solutions.
arXiv Detail & Related papers (2020-09-17T16:13:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.