Semantic Search of Memes on Twitter
- URL: http://arxiv.org/abs/2002.01462v4
- Date: Wed, 20 May 2020 23:44:26 GMT
- Title: Semantic Search of Memes on Twitter
- Authors: Jesus Perez-Martin, Benjamin Bustos, Magdalena Saldana
- Abstract summary: This paper proposes and compares several methods for automatically classifying images as memes.
We experimentally evaluate the methods using a large dataset of memes collected from Twitter users in Chile.
- Score: 0.8701566919381222
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Memes are becoming a useful source of data for analyzing behavior on social
media. However, a problem to tackle is how to correctly identify a meme. As the
number of memes published every day on social media is huge, there is a need
for automatic methods for classifying and searching in large meme datasets.
This paper proposes and compares several methods for automatically classifying
images as memes. Also, we propose a method that allows us to implement a system
for retrieving memes from a dataset using a textual query. We experimentally
evaluate the methods using a large dataset of memes collected from Twitter
users in Chile, which was annotated by a group of experts. Though some of the
evaluated methods are effective, there is still room for improvement.
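The abstract does not spell out which retrieval model the authors use, so the following is only a minimal sketch of what textual-query meme retrieval can look like: it assumes a pretrained joint text-image embedding model (CLIP loaded through the Hugging Face transformers library) and ranks a folder of candidate meme images by cosine similarity to the query. The model name, folder layout, and example query are illustrative assumptions, not the paper's setup.
```python
# Sketch only: rank meme images against a textual query with CLIP embeddings.
# Model name, directory layout, and the example query are illustrative.
from pathlib import Path

import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")
model.eval()

def rank_memes(query: str, image_dir: str, top_k: int = 5):
    """Return the top_k meme paths most similar to the textual query."""
    paths = sorted(Path(image_dir).glob("*.jpg"))
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(text=[query], images=images,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        out = model(**inputs)
    # L2-normalize both modalities, then score each image against the query.
    img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
    txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
    scores = (img @ txt.T).squeeze(-1)
    best = scores.topk(min(top_k, len(paths)))
    return [(str(paths[i]), float(scores[i])) for i in best.indices]

print(rank_memes("meme about an election", "memes/"))
```
In a practical system the image embeddings would be computed once and stored in an index, so that each textual query only requires encoding the text and a nearest-neighbor lookup.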
Related papers
- Decoding Memes: A Comparative Study of Machine Learning Models for Template Identification [0.0]
"meme template" is a layout or format that is used to create memes.
Despite extensive research on meme virality, the task of automatically identifying meme templates remains a challenge.
This paper presents a comprehensive comparison and evaluation of existing meme template identification methods.
arXiv Detail & Related papers (2024-08-15T12:52:06Z) - XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z) - What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation [0.9217021281095907]
Multimodal Internet Memes are now a ubiquitous fixture in online discourse.
Memetics is the process by which memes are imitated and transformed into symbols.
We develop a meme identification protocol which distinguishes memes from non-memetic content by recognising the memetics within it.
arXiv Detail & Related papers (2024-07-16T15:48:36Z) - Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Contextualizing Internet Memes Across Social Media Platforms [8.22187358555391]
We investigate whether internet memes can be contextualized by using a semantic repository of knowledge, namely, a knowledge graph.
We collect thousands of potential internet meme posts from two social media platforms, namely Reddit and Discord, and develop an extract-transform-load procedure to create a data lake with candidate meme posts.
By using vision transformer-based similarity, we match these candidates against the memes cataloged in IMKG -- a recently released knowledge graph of internet memes (see the sketch of this embedding-and-match step after this list).
arXiv Detail & Related papers (2023-11-18T20:18:18Z) - A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z) - DisinfoMeme: A Multimodal Dataset for Detecting Meme Intentionally
Spreading Out Disinformation [72.18912216025029]
We present DisinfoMeme to help detect disinformation memes.
The dataset contains memes mined from Reddit covering three current topics: the COVID-19 pandemic, the Black Lives Matter movement, and veganism/vegetarianism.
arXiv Detail & Related papers (2022-05-25T09:54:59Z) - Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are not really studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z) - Memes in the Wild: Assessing the Generalizability of the Hateful Memes
Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes in the wild are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z) - Multimodal Learning for Hateful Memes Detection [6.6881085567421605]
We propose a novel method that incorporates the image captioning process into the meme detection process.
Our model achieves promising results on the Hateful Memes Detection Challenge.
arXiv Detail & Related papers (2020-11-25T16:49:15Z) - Automatic Discovery of Political Meme Genres with Diverse Appearances [7.3228874258537875]
We introduce a scalable automated visual recognition pipeline for discovering political meme genres of diverse appearance.
This pipeline can ingest meme images from a social network, apply computer vision-based techniques to extract local features, and then organize the memes into related genres.
Results show that this approach can discover new meme genres with visually diverse images that share common stylistic elements.
arXiv Detail & Related papers (2020-01-17T00:45:02Z)
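As referenced in the Contextualizing Internet Memes entry above, matching candidate posts against a catalog such as IMKG reduces to an embed-and-nearest-neighbor step. The sketch below illustrates that kind of step with a generic ViT encoder and a cosine-similarity threshold; the model name, directory layout, and threshold are assumptions for illustration, not that paper's pipeline.
```python
# Sketch only: pair candidate meme images with their nearest catalog meme
# via ViT embeddings. Model name, paths, and threshold are illustrative.
from pathlib import Path

import torch
from PIL import Image
from transformers import ViTImageProcessor, ViTModel

processor = ViTImageProcessor.from_pretrained("google/vit-base-patch16-224-in21k")
model = ViTModel.from_pretrained("google/vit-base-patch16-224-in21k")
model.eval()

def embed(paths):
    """CLS-token ViT embeddings, L2-normalized, one row per image."""
    images = [Image.open(p).convert("RGB") for p in paths]
    inputs = processor(images=images, return_tensors="pt")
    with torch.no_grad():
        feats = model(**inputs).last_hidden_state[:, 0]  # CLS token
    return feats / feats.norm(dim=-1, keepdim=True)

def match_candidates(candidate_dir, catalog_dir, threshold=0.9):
    """Keep (candidate, catalog meme, score) pairs above the cosine threshold."""
    cand_paths = sorted(Path(candidate_dir).glob("*.jpg"))
    cat_paths = sorted(Path(catalog_dir).glob("*.jpg"))
    sims = embed(cand_paths) @ embed(cat_paths).T  # cosine similarity matrix
    best_scores, best_idx = sims.max(dim=1)
    return [(str(c), str(cat_paths[j]), float(s))
            for c, j, s in zip(cand_paths, best_idx, best_scores)
            if s >= threshold]
```
At catalog scale the embeddings would be batched and stored in an approximate nearest-neighbor index rather than compared with a dense similarity matrix.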
This list is automatically generated from the titles and abstracts of the papers in this site.