Mapping Visual Themes among Authentic and Coordinated Memes
- URL: http://arxiv.org/abs/2206.02306v1
- Date: Mon, 6 Jun 2022 01:26:39 GMT
- Title: Mapping Visual Themes among Authentic and Coordinated Memes
- Authors: Keng-Chi Chang
- Abstract summary: I learn low-dimensional visual embeddings of memes and apply K-means to jointly cluster authentic and coordinated memes.
I find that authentic and coordinated memes share a large fraction of visual themes, but to varying degrees.
A simple logistic regression on the low-dimensional embeddings can discern IRA memes from Reddit memes with an out-of-sample testing accuracy of 0.84.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: What distinguishes authentic memes from those created by state actors? I
utilize a self-supervised vision model, DeepCluster (Caron et al. 2019), to
learn low-dimensional visual embeddings of memes and apply K-means to jointly
cluster authentic and coordinated memes without additional inputs. I find that
authentic and coordinated memes share a large fraction of visual themes, but to
varying degrees. Coordinated memes from Russian IRA accounts promote more
themes around celebrities, quotes, screenshots, military, and gender. Authentic
Reddit memes include more themes involving comics and movie characters. A simple
logistic regression on the low-dimensional embeddings can discern IRA memes
from Reddit memes with an out-of-sample testing accuracy of 0.84.
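To make the embed-cluster-classify workflow concrete, the sketch below shows one way to reproduce it. It is a minimal illustration under stated assumptions, not the paper's implementation: a generic ImageNet-pretrained ResNet-50 stands in for DeepCluster features, and the image folders, embedding dimension (50), and cluster count (30) are hypothetical choices made for illustration.

```python
# Minimal sketch of the embed -> cluster -> classify pipeline described above.
# Assumptions (not from the paper): an ImageNet-pretrained ResNet-50 stands in
# for DeepCluster features, the image folders "ira_memes/" and "reddit_memes/"
# are hypothetical, and the embedding dimension (50) and cluster count (30)
# are arbitrary illustrative choices.
from glob import glob

import numpy as np
import torch
import torchvision.models as models
import torchvision.transforms as T
from PIL import Image
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Feature extractor: ResNet-50 with the classification head removed.
backbone = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = T.Compose([
    T.Resize(256), T.CenterCrop(224), T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def embed(paths):
    """Return one feature vector per image path."""
    feats = []
    for p in paths:
        x = preprocess(Image.open(p).convert("RGB")).unsqueeze(0)
        feats.append(backbone(x).squeeze(0).numpy())
    return np.stack(feats)

ira_paths = sorted(glob("ira_memes/*.png"))        # coordinated (IRA) memes
reddit_paths = sorted(glob("reddit_memes/*.png"))  # authentic (Reddit) memes

X = embed(ira_paths + reddit_paths)
y = np.array([1] * len(ira_paths) + [0] * len(reddit_paths))

# Low-dimensional embeddings, then joint K-means over both sources so that
# shared visual themes fall into the same clusters.
X_low = PCA(n_components=50).fit_transform(X)
themes = KMeans(n_clusters=30, random_state=0).fit_predict(X_low)

# Simple logistic regression on the embeddings, evaluated out of sample.
X_tr, X_te, y_tr, y_te = train_test_split(X_low, y, test_size=0.2,
                                          random_state=0, stratify=y)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("out-of-sample accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

With real image folders in place, the printed number plays the role of the 0.84 out-of-sample accuracy reported in the abstract, though the exact value depends on the feature extractor, cluster count, and data split.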
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation [0.9217021281095907]
Multimodal Internet Memes are now a ubiquitous fixture in online discourse.
Memetics is the process by which memes are imitated and transformed into symbols.
We develop a meme identification protocol which distinguishes memes from non-memetic content by recognising the memetics within it.
arXiv Detail & Related papers (2024-07-16T15:48:36Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
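As a rough illustration of how a CLIP model can score a meme image against candidate textual explanations (this is a generic sketch, not the MultiBully-Ex training or explanation procedure itself), one might do something like the following; the checkpoint name, image path, and candidate texts are placeholder assumptions.

```python
# Illustrative only: generic CLIP image-text scoring, not the MultiBully-Ex
# method. The checkpoint name, image path, and candidate explanation texts
# below are placeholder assumptions.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("meme.jpg")  # hypothetical meme image
candidates = [                  # hypothetical explanation texts
    "the meme mocks the person shown in the photo",
    "the meme is a harmless joke about the weather",
]

inputs = processor(text=candidates, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=-1)  # image-text similarity over candidates
print(dict(zip(candidates, probs[0].tolist())))
```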
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are largely unstudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Detecting Harmful Memes and Their Targets [27.25262711136056]
We present HarMeme, the first benchmark dataset containing 3,544 memes related to COVID-19.
In the first stage, we labeled each meme as very harmful, partially harmful, or harmless; in the second stage, we further annotated the type of target(s) that each harmful meme points to.
The evaluation results using ten unimodal and multimodal models highlight the importance of using multimodal signals for both tasks.
arXiv Detail & Related papers (2021-09-24T17:11:42Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate the out-of-sample performance of models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) Captions must be extracted via OCR, and 2) Memes in the wild are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
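As a concrete illustration of the first point, caption text can be pulled from a meme image with an off-the-shelf OCR engine. The snippet below uses pytesseract as an assumed stand-in for whatever OCR pipeline the paper actually employs; the file name is a placeholder.

```python
# A minimal sketch of OCR-based caption extraction for in-the-wild memes,
# using pytesseract as an assumed stand-in for the paper's OCR pipeline;
# the file name is a placeholder.
from PIL import Image
import pytesseract

caption = pytesseract.image_to_string(Image.open("wild_meme.png"))
print(caption.strip())
```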
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
- Entropy and complexity unveil the landscape of memes evolution [105.59074436693487]
We study the evolution of 2 million visual memes from Reddit over ten years, from 2011 to 2020.
We find support for the hypothesis that memes are part of an emerging form of internet metalanguage.
arXiv Detail & Related papers (2021-05-26T07:41:09Z)
- Dissecting the Meme Magic: Understanding Indicators of Virality in Image Memes [11.491215688828518]
We find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions.
On the other hand, image memes that lack a clear subject for the viewer to focus on, or that include long text, are unlikely to be re-shared by users.
We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared.
arXiv Detail & Related papers (2021-01-16T22:36:51Z)
- Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.