The Hateful Memes Challenge Next Move
- URL: http://arxiv.org/abs/2212.06655v1
- Date: Tue, 13 Dec 2022 15:37:53 GMT
- Title: The Hateful Memes Challenge Next Move
- Authors: Weijun Jin and Lance Wilhelm
- Abstract summary: Images embedded with text, such as hateful memes, are hard to classify using unimodal reasoning when difficult examples, such as benign confounders, are incorporated into the data set.
We attempt to generate more labeled memes in addition to the Hateful Memes data set from Facebook AI, based on the framework of a winning team from the Hateful Meme Challenge.
We find that the semi-supervised learning task on unlabeled data required human intervention and filtering and that adding a limited amount of new data yields no extra classification performance.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: State-of-the-art image and text classification models, such as Convolutional
Neural Networks and Transformers, have long been able to perform their
respective unimodal classification tasks satisfactorily, with accuracy close to or
exceeding human accuracy. However, images embedded with text, such as hateful
memes, are hard to classify using unimodal reasoning when difficult examples,
such as benign confounders, are incorporated into the data set. We attempt to
generate more labeled memes in addition to the Hateful Memes data set from
Facebook AI, based on the framework of a winning team from the Hateful Meme
Challenge. To increase the number of labeled memes, we explore semi-supervised
learning using pseudo-labels for newly introduced, unlabeled memes gathered
from the Memotion Dataset 7K. We find that the semi-supervised learning task on
unlabeled data required human intervention and filtering and that adding a
limited amount of new data yields no extra classification performance.
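The pseudo-labeling loop the abstract describes can be sketched as follows. This is a minimal, hypothetical illustration of semi-supervised self-training, not the paper's actual pipeline: the toy 1-D centroid classifier and the 0.8 confidence threshold are assumptions for demonstration, whereas the authors build on a multimodal framework from a Hateful Memes Challenge winning team and found the filtering step still required human intervention.

```python
def train_centroid(labeled):
    """Fit a toy 1-D classifier: one centroid (mean feature) per class."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict(centroids, x):
    """Return (label, confidence) for one example.
    Confidence is relative closeness to the nearest centroid."""
    dists = {y: abs(x - c) for y, c in centroids.items()}
    label = min(dists, key=dists.get)
    total = sum(dists.values()) or 1.0
    return label, 1.0 - dists[label] / total

def pseudo_label(labeled, unlabeled, threshold=0.8):
    """One round of self-training: predict on unlabeled data,
    keep only high-confidence pseudo-labels, then retrain."""
    model = train_centroid(labeled)
    kept = []
    for x in unlabeled:
        y, conf = predict(model, x)
        if conf >= threshold:   # automatic filtering step; the paper
            kept.append((x, y)) # reports manual filtering was also needed
    return train_centroid(labeled + kept), kept
```

With `labeled = [(0.0, "benign"), (1.0, "hateful")]` and `unlabeled = [0.1, 0.9, 0.5]`, the ambiguous example `0.5` is dropped by the threshold while the two confident examples are pseudo-labeled and folded into a retrained model.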
Related papers
- Decoding Memes: A Comparative Study of Machine Learning Models for Template Identification [0.0]
"meme template" is a layout or format that is used to create memes.
Despite extensive research on meme virality, the task of automatically identifying meme templates remains a challenge.
This paper presents a comprehensive comparison and evaluation of existing meme template identification methods.
arXiv Detail & Related papers (2024-08-15T12:52:06Z)
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- Unimodal Intermediate Training for Multimodal Meme Sentiment Classification [0.0]
We present a novel variant of supervised intermediate training that uses relatively abundant sentiment-labelled unimodal data.
Our results show a statistically significant performance improvement from the incorporation of unimodal text data.
We show that the training set of labelled memes can be reduced by 40% without reducing the performance of the downstream model.
arXiv Detail & Related papers (2023-08-01T13:14:10Z)
- MemeFier: Dual-stage Modality Fusion for Image Meme Classification [8.794414326545697]
New forms of digital content, such as image memes, have given rise to the spread of hate through multimodal means.
We propose MemeFier, a deep learning-based architecture for fine-grained classification of Internet image memes.
arXiv Detail & Related papers (2023-04-06T07:36:52Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- Caption Enriched Samples for Improving Hateful Memes Detection [78.5136090997431]
The Hateful Memes Challenge demonstrates the difficulty of determining whether a meme is hateful or not.
Both unimodal language models and multimodal vision-language models fall short of human-level performance.
arXiv Detail & Related papers (2021-09-22T10:57:51Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes in the wild are more diverse than "traditional" memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.