Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes
- URL: http://arxiv.org/abs/2205.02937v1
- Date: Tue, 3 May 2022 18:33:27 GMT
- Title: Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes
- Authors: Sunil Gundapu, Radhika Mamidi
- Abstract summary: The social media revolution has brought a unique phenomenon to social media platforms called Internet memes.
In this paper, we address propaganda, which has recently become common in Internet memes.
To detect propaganda in Internet memes, we propose a multimodal deep learning fusion system.
- Score: 7.538482310185133
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The exponential rise of social media networks has allowed the production,
distribution, and consumption of data at a phenomenal rate. Moreover, the
social media revolution has brought a unique phenomenon to social media
platforms called Internet memes. Internet memes are one of the most popular
types of content on social media, often an image paired with a witty, catchy,
or satirical caption. In this paper, we address propaganda, which has recently
become common in Internet memes.
Propaganda is communication, which frequently includes psychological and
rhetorical techniques to manipulate or influence an audience to act or respond
as the propagandist wants. To detect propaganda in Internet memes, we propose a
multimodal deep learning fusion system that fuses the text and image feature
representations and outperforms individual models based solely on either text
or image modalities.
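The late-fusion idea described above can be sketched as follows; the feature dimensions, weights, and probe are illustrative stand-ins, not the paper's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def late_fusion_score(text_feat, image_feat, w, b):
    """Concatenate text and image features, then apply a linear probe + sigmoid."""
    fused = np.concatenate([text_feat, image_feat])
    logit = float(w @ fused + b)
    return 1.0 / (1.0 + np.exp(-logit))

# Hypothetical dimensions: a 768-d text embedding (e.g. a BERT [CLS] vector)
# and a 512-d image embedding (e.g. a pooled CNN/ViT vector).
text_feat = rng.standard_normal(768)
image_feat = rng.standard_normal(512)
w = rng.standard_normal(768 + 512) * 0.01  # untrained probe weights
b = 0.0

p = late_fusion_score(text_feat, image_feat, w, b)
assert 0.0 < p < 1.0  # probability-like score that the meme is propaganda
```

In a trained system the probe would be replaced by a learned classification head, but the key point survives the simplification: the classifier sees both modalities at once rather than either one alone.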
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Contextualizing Internet Memes Across Social Media Platforms [8.22187358555391]
We investigate whether internet memes can be contextualized by using a semantic repository of knowledge, namely, a knowledge graph.
We collect thousands of potential internet meme posts from two social media platforms, namely Reddit and Discord, and develop an extract-transform-load procedure to create a data lake with candidate meme posts.
By using vision transformer-based similarity, we match these candidates against the memes cataloged in IMKG -- a recently released knowledge graph of internet memes.
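The matching step amounts to nearest-neighbor search under cosine similarity between embeddings; the 2-d vectors below are toy values standing in for real vision-transformer features and IMKG entries:

```python
import numpy as np

def cosine_matrix(a, b):
    """Row-wise cosine similarity between two sets of embeddings."""
    a = a / np.linalg.norm(a, axis=1, keepdims=True)
    b = b / np.linalg.norm(b, axis=1, keepdims=True)
    return a @ b.T

# Toy 2-d embeddings standing in for ViT features.
catalog = np.array([[1.0, 0.0], [0.0, 1.0]])     # memes cataloged in the knowledge graph
candidates = np.array([[0.9, 0.1], [0.2, 0.8]])  # collected candidate posts
sims = cosine_matrix(candidates, catalog)
best = sims.argmax(axis=1)  # index of the nearest cataloged meme per candidate
# best -> [0, 1]
```

At real scale this brute-force matrix product would be replaced by an approximate nearest-neighbor index, but the similarity criterion is the same.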
arXiv Detail & Related papers (2023-11-18T20:18:18Z)
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- The Face of Populism: Examining Differences in Facial Emotional Expressions of Political Leaders Using Machine Learning [57.70351255180495]
We apply a deep-learning-based computer-vision algorithm to a sample of 220 YouTube videos depicting political leaders from 15 different countries.
We observe statistically significant differences in the average score of expressed negative emotions between groups of leaders with varying degrees of populist rhetoric.
arXiv Detail & Related papers (2023-04-19T18:32:49Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have received little study, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Feels Bad Man: Dissecting Automated Hateful Meme Detection Through the Lens of Facebook's Challenge [10.775419935941008]
We assess the efficacy of current state-of-the-art multimodal machine learning models toward hateful meme detection.
We use two benchmark datasets comprising 12,140 and 10,567 images from 4chan's "Politically Incorrect" board (/pol/) and Facebook's Hateful Memes Challenge dataset.
We conduct three experiments to determine the importance of multimodality for classification performance and the influence of fringe Web communities on mainstream social platforms, and vice versa.
arXiv Detail & Related papers (2022-02-17T07:52:22Z)
- Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification [0.16863755729554888]
We explore the significance of visual features in classifying memes.
We classify memes as troll or non-troll based on their images and the text on them.
arXiv Detail & Related papers (2021-08-09T09:04:42Z)
- Detecting Propaganda Techniques in Memes [32.209606526323945]
We propose a new multi-label multimodal task: detecting the type of propaganda techniques used in memes.
We create and release a new corpus of 950 memes, carefully annotated with 22 propaganda techniques, which can appear in the text, in the image, or in both.
Our analysis of the corpus shows that understanding both modalities together is essential for detecting these techniques.
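Framed as a multi-label task, each meme's annotations become a multi-hot vector over the technique inventory; the three technique names below are illustrative stand-ins for the 22 in the corpus:

```python
# Three illustrative technique names standing in for the full 22-label inventory.
TECHNIQUES = ["loaded language", "name calling", "smear"]

def to_multihot(labels, vocab=TECHNIQUES):
    """Encode a set of annotated techniques as a fixed-order binary vector."""
    return [1 if t in labels else 0 for t in vocab]

# A meme annotated with two techniques at once -> both bits set.
assert to_multihot({"smear", "loaded language"}) == [1, 0, 1]
```

Because any subset of techniques can co-occur in one meme, the model outputs one independent probability per label rather than a single softmax over classes.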
arXiv Detail & Related papers (2021-08-07T11:56:52Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) they are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Automatic Discovery of Political Meme Genres with Diverse Appearances [7.3228874258537875]
We introduce a scalable automated visual recognition pipeline for discovering political meme genres of diverse appearance.
This pipeline can ingest meme images from a social network, apply computer vision-based techniques to extract local features, and then organize the memes into related genres.
Results show that this approach can discover new meme genres with visually diverse images that share common stylistic elements.
arXiv Detail & Related papers (2020-01-17T00:45:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.