Multi-modal application: Image Memes Generation
- URL: http://arxiv.org/abs/2112.01651v1
- Date: Fri, 3 Dec 2021 00:17:44 GMT
- Title: Multi-modal application: Image Memes Generation
- Authors: Zhiyuan Liu, Chuanzheng Sun, Yuxin Jiang, Shiqi Jiang, Mei Ming
- Abstract summary: We propose an end-to-end encoder-decoder meme generation architecture.
An Internet meme commonly takes the form of an image and is created by combining a meme template (image) and a caption (natural language sentence)
- Score: 13.043370069398916
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: "Meme" is an interesting word. Internet memes offer unique insights into
changes in our perception of the world, the media, and our own lives. Surf the
Internet long enough and you will encounter them everywhere.
With the rise of social media platforms and convenient image dissemination,
image memes have gained wide popularity. They have become a form of pop culture and
they play an important role in communication over social media, blogs, and open
messages. With the development of artificial intelligence and the widespread
use of deep learning, Natural Language Processing (NLP) and Computer Vision
(CV) can also be used to solve more problems in life, including meme
generation. An Internet meme commonly takes the form of an image and is created
by combining a meme template (image) and a caption (natural language sentence).
In our project, we propose an end-to-end encoder-decoder meme generator. For a
given input sentence, the meme template selection model determines the emotion
it expresses and selects a matching image template. The meme caption generator
then produces the caption and composes the final meme. Code and models are
available on GitHub.
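The two-stage pipeline described above (emotion-based template selection, then caption generation) can be sketched roughly as follows. This is only an illustrative stand-in: the keyword scoring, emotion labels, template filenames, and caption formatting are all hypothetical placeholders for the paper's learned encoder-decoder models, not the authors' actual implementation.

```python
# Hypothetical mapping from detected emotion to a meme template image.
TEMPLATES = {
    "joy": "success_kid.jpg",
    "anger": "angry_cat.jpg",
    "surprise": "shocked_pikachu.jpg",
}

# Toy keyword lexicon standing in for a learned emotion classifier.
EMOTION_KEYWORDS = {
    "joy": {"happy", "great", "won", "love"},
    "anger": {"hate", "angry", "furious"},
    "surprise": {"unexpected", "suddenly", "wow"},
}

def select_template(sentence: str) -> tuple[str, str]:
    """Stand-in for the meme template selection model:
    score each emotion by keyword overlap and pick its template."""
    words = set(sentence.lower().split())
    emotion = max(EMOTION_KEYWORDS,
                  key=lambda e: len(words & EMOTION_KEYWORDS[e]))
    return emotion, TEMPLATES[emotion]

def generate_caption(sentence: str, emotion: str) -> str:
    """Stand-in for the caption generator (a seq2seq decoder in the paper)."""
    return f"[{emotion}] {sentence.upper()}"

def generate_meme(sentence: str) -> dict:
    """Run both stages: pick a template, then caption it."""
    emotion, template = select_template(sentence)
    return {"template": template,
            "caption": generate_caption(sentence, emotion)}

meme = generate_meme("I suddenly won the lottery, wow")
```

The key design point the sketch preserves is the decoupling: template selection depends only on the inferred emotion of the input sentence, so the caption generator can be trained and swapped independently.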
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z) - What Makes a Meme a Meme? Identifying Memes for Memetics-Aware Dataset Creation [0.9217021281095907]
Multimodal Internet Memes are now a ubiquitous fixture in online discourse.
Memetics is the process by which memes are imitated and transformed into symbols.
We develop a meme identification protocol which distinguishes meme from non-memetic content by recognising the memetics within it.
arXiv Detail & Related papers (2024-07-16T15:48:36Z) - MemeCraft: Contextual and Stance-Driven Multimodal Meme Generation [9.048389283002294]
We introduce MemeCraft, an innovative meme generator that leverages large language models (LLMs) and visual language models (VLMs) to produce memes advocating specific social movements.
MemeCraft presents an end-to-end pipeline, transforming user prompts into compelling multimodal memes without manual intervention.
arXiv Detail & Related papers (2024-02-24T06:14:34Z) - Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes
Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation of code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z) - Contextualizing Internet Memes Across Social Media Platforms [8.22187358555391]
We investigate whether internet memes can be contextualized by using a semantic repository of knowledge, namely, a knowledge graph.
We collect thousands of potential internet meme posts from two social media platforms, namely Reddit and Discord, and develop an extract-transform-load procedure to create a data lake with candidate meme posts.
By using vision transformer-based similarity, we match these candidates against the memes cataloged in IMKG -- a recently released knowledge graph of internet memes.
arXiv Detail & Related papers (2023-11-18T20:18:18Z) - A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z) - MemeCap: A Dataset for Captioning and Interpreting Memes [11.188548484391978]
We present the task of meme captioning and release a new dataset, MemeCap.
Our dataset contains 6.3K memes along with the title of the post containing the meme, the meme captions, the literal image caption, and the visual metaphors.
arXiv Detail & Related papers (2023-05-23T05:41:18Z) - Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes are not really studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z) - Memes in the Wild: Assessing the Generalizability of the Hateful Memes
Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) memes are more diverse than traditional memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z) - Entropy and complexity unveil the landscape of memes evolution [105.59074436693487]
We study the evolution of 2 million visual memes from Reddit over ten years, from 2011 to 2020.
We find support for the hypothesis that memes are part of an emerging form of internet metalanguage.
arXiv Detail & Related papers (2021-05-26T07:41:09Z) - memeBot: Towards Automatic Image Meme Generation [24.37035046107127]
The model learns the dependencies between the meme captions and the meme template images and generates new memes.
Experiments on Twitter data show the efficacy of the model in generating memes for sentences in online social interaction.
arXiv Detail & Related papers (2020-04-30T03:48:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.