DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation
- URL: http://arxiv.org/abs/2205.12617v1
- Date: Wed, 25 May 2022 09:54:59 GMT
- Title: DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation
- Authors: Jingnong Qu, Liunian Harold Li, Jieyu Zhao, Sunipa Dev, Kai-Wei Chang
- Abstract summary: We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Disinformation has become a serious problem on social media. In particular,
given their short format, visual attraction, and humorous nature, memes have a
significant advantage in dissemination among online communities, making them an
effective vehicle for the spread of disinformation. We present DisinfoMeme to
help detect disinformation memes. The dataset contains memes mined from Reddit
covering three current topics: the COVID-19 pandemic, the Black Lives Matter
movement, and veganism/vegetarianism. The dataset poses multiple unique
challenges: limited data and label imbalance, reliance on external knowledge,
multimodal reasoning, layout dependency, and noise from OCR. We test multiple
widely-used unimodal and multimodal models on this dataset. The experiments
show that current models still leave substantial room for improvement.
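The abstract notes that limited data and label imbalance are among the dataset's core challenges. One standard mitigation, not specific to this paper, is to weight the loss by inverse class frequency; a minimal sketch (the label names and counts below are illustrative, not drawn from DisinfoMeme):

```python
from collections import Counter

def class_weights(labels):
    """Inverse-frequency weights: rare classes get proportionally
    larger weights so the loss is not dominated by the majority class."""
    counts = Counter(labels)
    n, k = len(labels), len(counts)
    return {c: n / (k * counts[c]) for c in counts}

# Hypothetical imbalanced split: 90 benign memes, 10 disinformation memes.
weights = class_weights(["benign"] * 90 + ["disinfo"] * 10)
# The minority "disinfo" class receives a 9x larger weight than "benign".
```

Such weights would typically be passed to a classifier's loss function (e.g., as per-class weights in a cross-entropy loss).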
Related papers
- Evolver: Chain-of-Evolution Prompting to Boost Large Multimodal Models for Hateful Meme Detection [49.122777764853055]
We explore the potential of Large Multimodal Models (LMMs) for hateful meme detection.
We propose Evolver, which incorporates LMMs via Chain-of-Evolution (CoE) Prompting.
Evolver simulates the evolving and expressing process of memes and reasons through LMMs in a step-by-step manner.
arXiv Detail & Related papers (2024-07-30T17:51:44Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- Characterizing the Entities in Harmful Memes: Who is the Hero, the Villain, the Victim? [39.55435707149863]
We aim to understand whether the meme glorifies, vilifies, or victimizes each entity it refers to.
Our proposed model achieves an improvement of 4% over the best baseline and 1% over the best competing stand-alone submission.
arXiv Detail & Related papers (2023-01-26T16:55:15Z)
- The Hateful Memes Challenge Next Move [0.0]
Images embedded with text, such as hateful memes, are hard to classify using unimodal reasoning when difficult examples, such as benign confounders, are incorporated into the data set.
We attempt to generate more labeled memes in addition to the Hateful Memes data set from Facebook AI, based on the framework of a winning team from the Hateful Meme Challenge.
We find that the semi-supervised learning task on unlabeled data required human intervention and filtering and that adding a limited amount of new data yields no extra classification performance.
arXiv Detail & Related papers (2022-12-13T15:37:53Z)
- DISARM: Detecting the Victims Targeted by Harmful Memes [49.12165815990115]
DISARM is a framework that uses named entity recognition and person identification to detect harmful memes.
We show that DISARM significantly outperforms ten unimodal and multimodal systems, reducing the error rate for harmful target identification by up to 9 absolute points over several strong multimodal rivals.
arXiv Detail & Related papers (2022-05-11T19:14:26Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes remain understudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Feels Bad Man: Dissecting Automated Hateful Meme Detection Through the Lens of Facebook's Challenge [10.775419935941008]
We assess the efficacy of current state-of-the-art multimodal machine learning models toward hateful meme detection.
We use two benchmark datasets comprising 12,140 and 10,567 images from 4chan's "Politically Incorrect" board (/pol/) and Facebook's Hateful Memes Challenge dataset.
We conduct three experiments to determine the importance of multimodality for classification performance and the influence of fringe Web communities on mainstream social platforms, and vice versa.
arXiv Detail & Related papers (2022-02-17T07:52:22Z)
- MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets [28.877314859737197]
We aim to solve two novel tasks: detecting harmful memes and identifying the social entities they target.
We propose MOMENTA, a novel multimodal (text + image) deep neural model, which uses global and local perspectives to detect harmful memes.
arXiv Detail & Related papers (2021-09-11T04:29:32Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
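Both this entry and the DisinfoMeme abstract flag OCR noise as a practical obstacle: text extracted from meme images arrives with stray symbols and irregular spacing. A minimal, purely heuristic cleanup sketch (the specific artifact characters targeted are an assumption, not taken from either paper):

```python
import re

def clean_ocr_text(text):
    """Heuristic cleanup of common OCR artifacts in extracted meme captions:
    replaces frequently misread symbols with spaces, then collapses
    runs of whitespace."""
    text = re.sub(r"[|_~`]", " ", text)       # symbols OCR often hallucinates
    text = re.sub(r"\s+", " ", text).strip()  # collapse repeated whitespace
    return text

# Illustrative noisy OCR output and its cleaned form.
cleaned = clean_ocr_text("WHEN  YOU | SEE_IT")  # → "WHEN YOU SEE IT"
```

In practice such rule-based cleanup only catches surface noise; deeper OCR errors (misrecognized words, merged lines) usually require a stronger OCR engine or a model robust to noisy input.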
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the memes detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)