Hate-CLIPper: Multimodal Hateful Meme Classification based on
Cross-modal Interaction of CLIP Features
- URL: http://arxiv.org/abs/2210.05916v2
- Date: Thu, 13 Oct 2022 07:20:23 GMT
- Title: Hate-CLIPper: Multimodal Hateful Meme Classification based on
Cross-modal Interaction of CLIP Features
- Authors: Gokul Karthik Kumar, Karthik Nandakumar
- Abstract summary: Hateful memes are a growing menace on social media.
Detecting hateful memes requires careful consideration of both visual and textual information.
We propose the Hate-CLIPper architecture, which explicitly models the cross-modal interactions between the image and text representations.
- Score: 5.443781798915199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Hateful memes are a growing menace on social media. While the image and its
corresponding text in a meme are related, they do not necessarily convey the
same meaning when viewed individually. Hence, detecting hateful memes requires
careful consideration of both visual and textual information. Multimodal
pre-training can be beneficial for this task because it effectively captures
the relationship between the image and the text by representing them in a
similar feature space. Furthermore, it is essential to model the interactions
between the image and text features through intermediate fusion. Most existing
methods either employ multimodal pre-training or intermediate fusion, but not
both. In this work, we propose the Hate-CLIPper architecture, which explicitly
models the cross-modal interactions between the image and text representations
obtained using Contrastive Language-Image Pre-training (CLIP) encoders via a
feature interaction matrix (FIM). A simple classifier based on the FIM
representation is able to achieve state-of-the-art performance on the Hateful
Memes Challenge (HMC) dataset with an AUROC of 85.8, which even surpasses the
human performance of 82.65. Experiments on other meme datasets such as
Propaganda Memes and TamilMemes also demonstrate the generalizability of the
proposed approach. Finally, we analyze the interpretability of the FIM
representation and show that cross-modal interactions can indeed facilitate the
learning of meaningful concepts. The code for this work is available at
https://github.com/gokulkarthik/hateclipper.
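To make the described architecture concrete, the following is a minimal PyTorch sketch of the core idea: project CLIP image and text embeddings, take their outer product to form the feature interaction matrix (FIM), and apply a simple classifier to the flattened FIM. This is an illustration only, not the authors' released code (linked above); the projection layers, dimensions, and classifier depth are assumptions for demonstration.

```python
import torch
import torch.nn as nn

class HateClipperSketch(nn.Module):
    """Illustrative Hate-CLIPper-style classifier (assumed hyperparameters):
    CLIP image/text embeddings -> projections -> outer-product FIM -> MLP."""

    def __init__(self, clip_dim=512, proj_dim=64, hidden_dim=512):
        super().__init__()
        # Separate projections for the image and text CLIP features (assumed sizes).
        self.image_proj = nn.Linear(clip_dim, proj_dim)
        self.text_proj = nn.Linear(clip_dim, proj_dim)
        # Simple classifier over the flattened proj_dim x proj_dim interaction matrix.
        self.classifier = nn.Sequential(
            nn.Linear(proj_dim * proj_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # binary hateful / not-hateful logit
        )

    def forward(self, image_feats, text_feats):
        # image_feats, text_feats: (batch, clip_dim) embeddings from CLIP encoders.
        p_i = self.image_proj(image_feats)   # (batch, proj_dim)
        p_t = self.text_proj(text_feats)     # (batch, proj_dim)
        # Feature interaction matrix: outer product of projected image and text features.
        fim = torch.einsum("bi,bj->bij", p_i, p_t)  # (batch, proj_dim, proj_dim)
        return self.classifier(fim.flatten(start_dim=1))  # (batch, 1) logit
```

In this sketch the cross-modal interaction is captured explicitly: every element of the FIM is a product of one image-feature dimension and one text-feature dimension, which is what allows a simple downstream classifier to exploit image-text relationships.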