Hate Me Not: Detecting Hate Inducing Memes in Code Switched Languages
- URL: http://arxiv.org/abs/2204.11356v1
- Date: Sun, 24 Apr 2022 21:03:57 GMT
- Title: Hate Me Not: Detecting Hate Inducing Memes in Code Switched Languages
- Authors: Kshitij Rajput, Raghav Kapoor, Kaushal Rai, Preeti Kaur
- Abstract summary: In countries like India, where multiple languages are spoken, these abhorrent posts are written in an unusual blend of code-switched languages.
This hate speech is conveyed through images in the form of "Memes", which leave a long-lasting impact on the human mind.
We take up the task of hate and offense detection from multimodal data, i.e. images (Memes) that contain text in code-switched languages.
- Score: 1.376408511310322
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rise in the number of social media users has led to an increase in the
hateful content posted online. In countries like India, where multiple
languages are spoken, these abhorrent posts are written in an unusual blend of
code-switched languages. This hate speech is conveyed through images in the
form of "Memes", which leave a long-lasting impact on the human mind. In this
paper, we take up the task of hate and offense detection from multimodal data,
i.e. images (Memes) that contain text in code-switched languages. We first
present a novel triply annotated Indian political Memes (IPM) dataset, which
comprises memes from various Indian political events that have taken place
post-independence, classified into three distinct categories. We also propose
a dual-channel CNN-cum-LSTM model that processes the images with the CNN
channel and the text with the LSTM channel, achieving state-of-the-art results
for this task.
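The abstract's description implies a late-fusion architecture: one channel encodes the meme image with a CNN, the other encodes the code-switched caption with an LSTM, and the two representations are combined for a three-way classification. Below is a minimal PyTorch sketch of that idea, assuming illustrative layer sizes, vocabulary size, and fusion by concatenation; it is not the exact configuration reported in the paper.

```python
# Hypothetical sketch of a dual-channel CNN + LSTM meme classifier (PyTorch).
# All sizes (vocab, hidden dims, image resolution) are illustrative assumptions.
import torch
import torch.nn as nn


class MemeClassifier(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, lstm_hidden=128, num_classes=3):
        super().__init__()
        # Image channel: a small CNN over 3x224x224 meme images.
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(4),
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),          # -> (B, 64, 1, 1)
        )
        # Text channel: embedding + LSTM over the code-switched caption tokens.
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, lstm_hidden, batch_first=True)
        # Late fusion: concatenate the two channel representations, then classify.
        self.classifier = nn.Linear(64 + lstm_hidden, num_classes)

    def forward(self, image, token_ids):
        img_feat = self.cnn(image).flatten(1)           # (B, 64)
        _, (h_n, _) = self.lstm(self.embed(token_ids))  # h_n: (1, B, lstm_hidden)
        txt_feat = h_n[-1]                              # (B, lstm_hidden)
        return self.classifier(torch.cat([img_feat, txt_feat], dim=1))


# Smoke test with random inputs: batch of 2 images and 40-token captions.
model = MemeClassifier()
logits = model(torch.randn(2, 3, 224, 224), torch.randint(1, 20000, (2, 40)))
print(logits.shape)  # torch.Size([2, 3])
```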
Related papers
- An image speaks a thousand words, but can everyone listen? On image transcreation for cultural relevance [53.974497865647336]
We take a first step towards translating images to make them culturally relevant.
We build three pipelines comprising state-of-the-art generative models to do the task.
We conduct a human evaluation of translated images to assess for cultural relevance and meaning preservation.
arXiv Detail & Related papers (2024-04-01T17:08:50Z)
- Deciphering Hate: Identifying Hateful Memes and Their Targets [4.574830585715128]
We introduce a novel dataset for detecting hateful memes in Bengali, BHM (Bengali Hateful Memes).
The dataset consists of 7,148 memes with Bengali as well as code-mixed captions, tailored for two tasks: (i) detecting hateful memes, and (ii) detecting the social entities they target.
To solve these tasks, we propose DORA, a multimodal deep neural network that systematically extracts the significant modality features from the memes.
arXiv Detail & Related papers (2024-03-16T06:39:41Z)
- Mapping Memes to Words for Multimodal Hateful Meme Classification [26.101116761577796]
Some memes take a malicious turn, promoting hateful content and perpetuating discrimination.
We propose a novel approach named ISSUES for multimodal hateful meme classification.
Our method achieves state-of-the-art results on the Hateful Memes Challenge and HarMeme datasets.
arXiv Detail & Related papers (2023-10-12T14:38:52Z)
- DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are largely understudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- MOMENTA: A Multimodal Framework for Detecting Harmful Memes and Their Targets [28.877314859737197]
We aim to solve two novel tasks: detecting harmful memes and identifying the social entities they target.
We propose MOMENTA, a novel multimodal (text + image) deep neural model, which uses global and local perspectives to detect harmful memes.
arXiv Detail & Related papers (2021-09-11T04:29:32Z)
- Exploiting BERT For Multimodal Target Sentiment Classification Through Input Space Translation [75.82110684355979]
We introduce a two-stream model that translates images in input space using an object-aware transformer.
We then leverage the translation to construct an auxiliary sentence that provides multimodal information to a language model.
We achieve state-of-the-art performance on two multimodal Twitter datasets.
arXiv Detail & Related papers (2021-08-03T18:02:38Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate the out-of-sample performance of models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR (see the short OCR sketch after this list), and 2) memes in the wild are more diverse than "traditional" memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- NLP-CUET@DravidianLangTech-EACL2021: Investigating Visual and Textual Features to Identify Trolls from Multimodal Social Media Memes [0.0]
A shared task is organized to develop models that can identify trolls from multimodal social media memes.
This work presents a computational model that we have developed as part of our participation in the task.
We investigated the visual and textual features using CNN, VGG16, Inception, Multilingual-BERT, XLM-RoBERTa, and XLNet models.
arXiv Detail & Related papers (2021-02-28T11:36:50Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the memes detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
- Racism is a Virus: Anti-Asian Hate and Counterspeech in Social Media during the COVID-19 Crisis [51.39895377836919]
COVID-19 has sparked racism and hate on social media targeted towards Asian communities.
We study the evolution and spread of anti-Asian hate speech through the lens of Twitter.
We create COVID-HATE, the largest dataset of anti-Asian hate and counterspeech spanning 14 months.
arXiv Detail & Related papers (2020-05-25T21:58:09Z)
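As a side note on the OCR step mentioned in the "Memes in the Wild" entry above, the sketch below shows one way to pull a caption out of a meme image using the open-source pytesseract wrapper around Tesseract; this is a generic illustration, not necessarily the tooling used by any of the papers listed here, and "meme.png" is a placeholder path.

```python
# Minimal sketch: extract a meme caption via OCR with pytesseract.
# Requires the Tesseract binary to be installed; "meme.png" is a placeholder.
from PIL import Image
import pytesseract

caption = pytesseract.image_to_string(Image.open("meme.png"))
print(caption.strip())
```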