Automatic Discovery of Political Meme Genres with Diverse Appearances
- URL: http://arxiv.org/abs/2001.06122v2
- Date: Thu, 10 Sep 2020 18:10:16 GMT
- Title: Automatic Discovery of Political Meme Genres with Diverse Appearances
- Authors: William Theisen, Joel Brogan, Pamela Bilo Thomas, Daniel Moreira,
Pascal Phoa, Tim Weninger, Walter Scheirer
- Abstract summary: We introduce a scalable automated visual recognition pipeline for discovering political meme genres of diverse appearance.
This pipeline can ingest meme images from a social network, apply computer vision-based techniques to extract local features, and then organize the memes into related genres.
Results show that this approach can discover new meme genres with visually diverse images that share common stylistic elements.
- Score: 7.3228874258537875
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Forms of human communication are not static -- we expect some evolution in
the way information is conveyed over time because of advances in technology.
One example of this phenomenon is the image-based meme, which has emerged as a
dominant form of political messaging in the past decade. While originally used
to spread jokes on social media, memes are now having an outsized impact on
public perception of world events. A significant challenge in automatic meme
analysis has been the development of a strategy to match memes from within a
single genre when the appearances of the images vary. Such variation is
especially common in memes exhibiting mimicry, for example when voters perform
a common hand gesture to signal their support for a candidate. In this paper we
introduce a scalable automated visual recognition pipeline for discovering
political meme genres of diverse appearance. This pipeline can ingest meme
images from a social network, apply computer vision-based techniques to extract
local features and index new images into a database, and then organize the
memes into related genres. To validate this approach, we perform a large case
study on the 2019 Indonesian Presidential Election using a new dataset of over
two million images collected from Twitter and Instagram. Results show that this
approach can discover new meme genres with visually diverse images that share
common stylistic elements, paving the way forward for further work in semantic
analysis and content attribution.
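To make the pipeline described above concrete, here is a minimal sketch of the general pattern it follows: extract local features from each image, quantize them into an indexable representation, and group related images into candidate genres. The specific components used here (OpenCV ORB keypoints, a k-means visual vocabulary, and DBSCAN clustering) are illustrative assumptions and are not the tools named in the paper.

```python
# Minimal sketch of a "local features -> index -> genres" pipeline.
# Assumptions (not from the paper): OpenCV ORB keypoints, a k-means
# visual vocabulary, and DBSCAN over per-image bag-of-words histograms.
import glob

import cv2
import numpy as np
from sklearn.cluster import DBSCAN, KMeans


def local_descriptors(path, max_features=500):
    """Detect ORB keypoints and return their binary descriptors (or None)."""
    img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    if img is None:
        return None
    orb = cv2.ORB_create(nfeatures=max_features)
    _, desc = orb.detectAndCompute(img, None)
    return desc  # shape (n_keypoints, 32), or None if nothing was detected


def bag_of_words(all_desc, per_image_desc, vocab_size=256):
    """Quantize descriptors into a visual vocabulary; one histogram per image."""
    kmeans = KMeans(n_clusters=vocab_size, n_init=4, random_state=0)
    kmeans.fit(all_desc.astype(np.float32))
    hists = []
    for desc in per_image_desc:
        words = kmeans.predict(desc.astype(np.float32))
        hist = np.bincount(words, minlength=vocab_size).astype(np.float32)
        hists.append(hist / (hist.sum() + 1e-9))
    return np.vstack(hists)


if __name__ == "__main__":
    paths = sorted(glob.glob("memes/*.jpg"))  # hypothetical image folder
    kept, per_image = [], []
    for p in paths:
        desc = local_descriptors(p)
        if desc is not None:
            kept.append(p)
            per_image.append(desc)
    features = bag_of_words(np.vstack(per_image), per_image)
    # Group visually related memes into candidate "genres"; -1 means unassigned.
    genres = DBSCAN(eps=0.5, min_samples=3, metric="cosine").fit_predict(features)
    for path, genre in zip(kept, genres):
        print(genre, path)
```

At the scale reported in the paper (over two million images), the indexing step would need an approximate nearest-neighbor index rather than a brute-force clustering pass; the sketch only illustrates the flow of data.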
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- Contextualizing Internet Memes Across Social Media Platforms [8.22187358555391]
We investigate whether internet memes can be contextualized by using a semantic repository of knowledge, namely, a knowledge graph.
We collect thousands of potential internet meme posts from two social media platforms, namely Reddit and Discord, and develop an extract-transform-load procedure to create a data lake with candidate meme posts.
By using vision transformer-based similarity, we match these candidates against the memes cataloged in IMKG -- a recently released knowledge graph of internet memes (a sketch of this matching step appears after this list).
arXiv Detail & Related papers (2023-11-18T20:18:18Z)
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have not really been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Detection of Propaganda Techniques in Visuo-Lingual Metaphor in Memes [7.538482310185133]
The social media revolution has brought a unique phenomenon, the Internet meme, to social media platforms.
In this paper, we address the propaganda that has recently become common in Internet memes.
To detect propaganda in Internet memes, we propose a multimodal deep learning fusion system.
arXiv Detail & Related papers (2022-05-03T18:33:27Z)
- Do Images really do the Talking? Analysing the significance of Images in Tamil Troll meme classification [0.16863755729554888]
We explore the significance of visual features of images in classifying memes.
We classify memes as trolling or non-trolling based on their images and the text on them.
arXiv Detail & Related papers (2021-08-09T09:04:42Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) Captions must be extracted via OCR, and 2) Memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Entropy and complexity unveil the landscape of memes evolution [105.59074436693487]
We study the evolution of 2 million visual memes from Reddit over ten years, from 2011 to 2020.
We find support for the hypothesis that memes are part of an emerging form of internet metalanguage.
arXiv Detail & Related papers (2021-05-26T07:41:09Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
- Semantic Search of Memes on Twitter [0.8701566919381222]
This paper proposes and compares several methods for automatically classifying images as memes.
We experimentally evaluate the methods using a large dataset of memes collected from Twitter users in Chile.
arXiv Detail & Related papers (2020-02-04T18:40:38Z)
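The "Contextualizing Internet Memes Across Social Media Platforms" entry above mentions vision transformer-based similarity for matching candidate posts against the memes cataloged in IMKG. As a hedged illustration of that general idea (not that paper's actual implementation), the sketch below embeds images with a CLIP ViT encoder and matches them by cosine similarity; the model name, file paths, and acceptance threshold are assumptions.

```python
# Hedged sketch: match candidate meme posts against a catalogue using
# vision-transformer image embeddings and cosine similarity.
# The model, paths, and threshold below are assumptions for illustration.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")


@torch.no_grad()
def embed(paths):
    """Return L2-normalized ViT image embeddings for a list of image paths."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    feats = model.get_image_features(**inputs)
    return feats / feats.norm(dim=-1, keepdim=True)


catalog = embed(["imkg/template_001.png", "imkg/template_002.png"])   # hypothetical catalogue files
candidates = embed(["posts/post_a.jpg", "posts/post_b.jpg"])          # hypothetical candidate posts
similarity = candidates @ catalog.T            # cosine similarity matrix
best = similarity.argmax(dim=1)                # nearest catalogued meme per candidate
matched = similarity.max(dim=1).values > 0.8   # assumed acceptance threshold
print(best, matched)
```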