Dank or Not? -- Analyzing and Predicting the Popularity of Memes on
Reddit
- URL: http://arxiv.org/abs/2011.14326v2
- Date: Fri, 22 Jan 2021 08:31:42 GMT
- Title: Dank or Not? -- Analyzing and Predicting the Popularity of Memes on
Reddit
- Authors: Kate Barnes, Tiernon Riesenmy, Minh Duc Trinh, Eli Lleshi, Nóra
Balogh, Roland Molontay
- Abstract summary: We analyze the data of 129,326 memes collected from Reddit in the middle of March, 2020.
We find that a meme's success can be predicted moderately well from its content alone.
We also find that both image related and textual attributes have significant incremental predictive power over each other.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Internet memes have become an increasingly pervasive form of contemporary
social communication that has recently attracted considerable research interest. In
this paper, we analyze the data of 129,326 memes collected from Reddit in the
middle of March 2020, when the most serious coronavirus restrictions were
being introduced around the world. This article not only provides a looking
glass into the thoughts of Internet users during the COVID-19 pandemic but
also performs a content-based predictive analysis of what makes a meme go viral.
Using machine learning methods, we also study what incremental predictive power
image-related attributes have over textual attributes on meme popularity. We
find that a meme's success can be predicted moderately well based on its
content alone; our best-performing machine learning model predicts viral
memes with AUC=0.68. We also find that both image-related and textual
attributes have significant incremental predictive power over each other.
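The paper's exact feature set and model are not given in this listing; a minimal sketch of a content-based "viral or not" classifier evaluated by AUC, using synthetic stand-in features (the column meanings below are illustrative assumptions, not the paper's attributes), might look like:

```python
# Sketch of a content-based viral-meme classifier scored by AUC.
# Feature columns and data are synthetic placeholders, not the paper's.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000

# Hypothetical image-related and textual attributes per meme.
X = np.column_stack([
    rng.uniform(0, 1, n),     # e.g. image brightness
    rng.integers(0, 2, n),    # e.g. contains a face (0/1)
    rng.poisson(12, n),       # e.g. caption word count
    rng.uniform(-1, 1, n),    # e.g. caption sentiment score
])
# Synthetic target: "viral" memes loosely depend on the features plus noise.
logits = 1.5 * X[:, 1] - 0.05 * X[:, 2] + X[:, 3] + rng.normal(0, 1, n)
y = (logits > np.median(logits)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"AUC = {auc:.2f}")
```

AUC is threshold-free, which suits popularity prediction where the "viral" cutoff is a modeling choice.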
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z) - Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Contextualizing Internet Memes Across Social Media Platforms [8.22187358555391]
We investigate whether internet memes can be contextualized by using a semantic repository of knowledge, namely, a knowledge graph.
We collect thousands of potential internet meme posts from two social media platforms, namely Reddit and Discord, and develop an extract-transform-load procedure to create a data lake with candidate meme posts.
By using vision transformer-based similarity, we match these candidates against the memes cataloged in IMKG -- a recently released knowledge graph of internet memes.
arXiv Detail & Related papers (2023-11-18T20:18:18Z) - A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z) - Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes remain understudied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z) - Feels Bad Man: Dissecting Automated Hateful Meme Detection Through the
Lens of Facebook's Challenge [10.775419935941008]
We assess the efficacy of current state-of-the-art multimodal machine learning models toward hateful meme detection.
We use two benchmark datasets comprising 12,140 and 10,567 images from 4chan's "Politically Incorrect" board (/pol/) and Facebook's Hateful Memes Challenge dataset, respectively.
We conduct three experiments to determine the importance of multimodality for classification performance and the influence of fringe Web communities on mainstream social platforms, and vice versa.
arXiv Detail & Related papers (2022-02-17T07:52:22Z) - Memes in the Wild: Assessing the Generalizability of the Hateful Memes
Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) they are more diverse than "traditional" memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z) - Entropy and complexity unveil the landscape of memes evolution [105.59074436693487]
We study the evolution of 2 million visual memes from Reddit over ten years, from 2011 to 2020.
We find support for the hypothesis that memes are part of an emerging form of internet metalanguage.
arXiv Detail & Related papers (2021-05-26T07:41:09Z) - Dissecting the Meme Magic: Understanding Indicators of Virality in Image
Memes [11.491215688828518]
We find that highly viral memes are more likely to use a close-up scale, contain characters, and include positive or negative emotions.
On the other hand, image memes that do not present a clear subject for the viewer to focus on, or that include long text, are not likely to be re-shared by users.
We train machine learning models to distinguish between image memes that are likely to go viral and those that are unlikely to be re-shared.
arXiv Detail & Related papers (2021-01-16T22:36:51Z) - SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor! [20.55903557920223]
The objective of this proposal is to bring the attention of the research community towards the automatic processing of Internet memes.
The Memotion analysis task released approximately 10K annotated memes with human-annotated labels: sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivational), and the corresponding intensity.
The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying intensity of meme emotion.
arXiv Detail & Related papers (2020-08-09T18:17:33Z) - Automatic Discovery of Political Meme Genres with Diverse Appearances [7.3228874258537875]
We introduce a scalable automated visual recognition pipeline for discovering political meme genres of diverse appearance.
This pipeline can ingest meme images from a social network, apply computer vision-based techniques to extract local features, and then organize the memes into related genres.
Results show that this approach can discover new meme genres with visually diverse images that share common stylistic elements.
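The grouping step of such a pipeline is essentially unsupervised clustering of per-image feature vectors; a sketch assuming plain k-means over placeholder descriptors (the feature vectors and cluster count here are illustrative, not the paper's method) might be:

```python
# Sketch of the pipeline's final step: grouping meme images into genres
# by clustering their feature vectors. Random features stand in for
# computer-vision descriptors extracted from each image.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
features = rng.normal(size=(300, 32))  # one feature vector per meme image

kmeans = KMeans(n_clusters=8, n_init=10, random_state=0).fit(features)
genres = kmeans.labels_                # genre assignment per meme
print("memes per genre:", np.bincount(genres))
```

In practice the number of genres would not be known in advance, so a density-based or hierarchical method that discovers the cluster count could replace the fixed `n_clusters` used here.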
arXiv Detail & Related papers (2020-01-17T00:45:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.