Multimodal Analysis of memes for sentiment extraction
- URL: http://arxiv.org/abs/2112.11850v1
- Date: Wed, 22 Dec 2021 12:57:05 GMT
- Title: Multimodal Analysis of memes for sentiment extraction
- Authors: Nayan Varma Alluri, Neeli Dheeraj Krishna
- Abstract summary: The study is based on the Memotion dataset, which involves categorising memes based on irony, comedy, motivation, and overall-sentiment.
The best algorithm achieved a macro F1 score of 0.633 for humour classification, 0.55 for motivation classification, 0.61 for sarcasm classification, and 0.575 for overall sentiment of the meme.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Memes are one of the most ubiquitous forms of social media communication. The
study and processing of memes, which are intrinsically multimedia, are currently a popular research topic. The study presented in this research is based on the Memotion
dataset, which involves categorising memes based on irony, comedy, motivation,
and overall-sentiment. Three separate innovative transformer-based techniques
have been developed, and their outcomes have been thoroughly reviewed. Of these, the best
algorithm achieved a macro F1 score of 0.633 for humour classification, 0.55
for motivation classification, 0.61 for sarcasm classification, and 0.575 for
the overall sentiment of the meme.
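For reference, the macro F1 reported above is the unweighted mean of the per-class F1 scores, so rare classes count as much as frequent ones. A minimal sketch with scikit-learn, using made-up labels rather than the Memotion data:

```python
from sklearn.metrics import f1_score

# Hypothetical gold and predicted labels for one Memotion subtask
# (e.g., humour intensity: 0=not funny, 1=funny, 2=very funny, 3=hilarious).
y_true = [0, 1, 2, 3, 1, 0, 2, 1]
y_pred = [0, 1, 1, 3, 1, 0, 2, 2]

# Macro averaging: compute F1 per class, then take the unweighted mean.
macro_f1 = f1_score(y_true, y_pred, average="macro")
print(f"macro F1 = {macro_f1:.3f}")
```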
Related papers
- XMeCap: Meme Caption Generation with Sub-Image Adaptability [53.2509590113364]
Humor, deeply rooted in societal meanings and cultural details, poses a unique challenge for machines.
We introduce the XMeCap framework, which adopts supervised fine-tuning and reinforcement learning.
XMeCap achieves an average evaluation score of 75.85 for single-image memes and 66.32 for multi-image memes, outperforming the best baseline by 3.71% and 4.82%, respectively.
arXiv Detail & Related papers (2024-07-24T10:51:46Z)
- MemeMQA: Multimodal Question Answering for Memes via Rationale-Based Inferencing [53.30190591805432]
We introduce MemeMQA, a multimodal question-answering framework to solicit accurate responses to structured questions.
We also propose ARSENAL, a novel two-stage multimodal framework to address MemeMQA.
arXiv Detail & Related papers (2024-05-18T07:44:41Z)
- IITK at SemEval-2024 Task 4: Hierarchical Embeddings for Detection of Persuasion Techniques in Memes [4.679320772294786]
This paper proposes an ensemble of Class Definition Prediction (CDP) and hyperbolic embeddings-based approaches for this task.
We enhance meme classification accuracy and comprehensiveness by integrating HypEmo's hierarchical label embeddings and a multi-task learning framework for emotion prediction.
arXiv Detail & Related papers (2024-04-06T06:28:02Z)
- Meme-ingful Analysis: Enhanced Understanding of Cyberbullying in Memes Through Multimodal Explanations [48.82168723932981]
We introduce MultiBully-Ex, the first benchmark dataset for multimodal explanation from code-mixed cyberbullying memes.
A Contrastive Language-Image Pretraining (CLIP) approach has been proposed for visual and textual explanation of a meme.
arXiv Detail & Related papers (2024-01-18T11:24:30Z)
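To make the CLIP component above concrete, here is a minimal zero-shot scoring sketch using the Hugging Face transformers API; the checkpoint, image path, and candidate texts are illustrative placeholders, not the paper's fine-tuned setup:

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a public CLIP checkpoint (an assumption; the paper's weights may differ).
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("meme.png")  # placeholder path
texts = ["a cyberbullying meme", "a harmless joke meme"]
inputs = processor(text=texts, images=image, return_tensors="pt", padding=True)

with torch.no_grad():
    logits = model(**inputs).logits_per_image  # image-text similarity scores
print(logits.softmax(dim=-1))  # probability over the candidate descriptions
```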
- A Template Is All You Meme [83.05919383106715]
We release a knowledge base of memes and information found on www.knowyourmeme.com, composed of more than 54,000 images.
We hypothesize that meme templates can be used to inject models with the context missing from previous approaches.
arXiv Detail & Related papers (2023-11-11T19:38:14Z)
- Detecting and Understanding Harmful Memes: A Survey [48.135415967633676]
We offer a comprehensive survey with a focus on harmful memes.
One interesting finding is that many types of harmful memes have hardly been studied, e.g., those featuring self-harm and extremism.
Another observation is that memes can propagate globally through repackaging in different languages and that they can also be multilingual.
arXiv Detail & Related papers (2022-05-09T13:43:27Z)
- Memes in the Wild: Assessing the Generalizability of the Hateful Memes Challenge Dataset [47.65948529524281]
We collect hateful and non-hateful memes from Pinterest to evaluate out-of-sample performance on models pre-trained on the Facebook dataset.
We find that memes in the wild differ in two key aspects: 1) captions must be extracted via OCR, and 2) they are more diverse than 'traditional' memes, including screenshots of conversations or text on a plain background.
arXiv Detail & Related papers (2021-07-09T09:04:05Z)
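The OCR step mentioned above can be illustrated with the pytesseract wrapper around the Tesseract engine (a minimal sketch; the actual extraction pipeline is not specified in the summary, and the file name is a placeholder):

```python
from PIL import Image
import pytesseract  # requires the Tesseract OCR binary installed on the system

# Extract the overlaid caption text from a meme image before any text model runs.
caption = pytesseract.image_to_string(Image.open("meme.png"))
print(caption.strip())
```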
- Exercise? I thought you said 'Extra Fries': Leveraging Sentence Demarcations and Multi-hop Attention for Meme Affect Analysis [18.23523076710257]
We propose a multi-hop attention-based deep neural network framework, called MHA-MEME.
Its prime objective is to leverage the spatial-domain correspondence between the visual modality (an image) and various textual segments to extract fine-grained feature representations for classification.
We evaluate MHA-MEME on the 'Memotion Analysis' dataset for all three sub-tasks - sentiment classification, affect classification, and affect class quantification.
arXiv Detail & Related papers (2021-03-23T08:21:37Z)
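As a rough illustration of the multi-hop idea (a simplified PyTorch sketch, not the actual MHA-MEME architecture), each hop lets the textual segment features attend over image-region features, with a residual connection between hops:

```python
import torch
import torch.nn as nn

class MultiHopAttention(nn.Module):
    """Simplified multi-hop cross-modal attention: textual segment features
    repeatedly attend over image-region features (illustrative only)."""

    def __init__(self, dim: int, hops: int = 3):
        super().__init__()
        self.hops = nn.ModuleList(
            nn.MultiheadAttention(dim, num_heads=4, batch_first=True)
            for _ in range(hops)
        )

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        # text_feats: (batch, segments, dim); image_feats: (batch, regions, dim)
        query = text_feats
        for attn in self.hops:
            attended, _ = attn(query, image_feats, image_feats)
            query = query + attended  # residual connection between hops
        return query  # fused features for a downstream classifier

# Toy shapes: 2 memes, 4 text segments, 49 image regions, 256-d features.
fused = MultiHopAttention(dim=256)(torch.randn(2, 4, 256), torch.randn(2, 49, 256))
```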
- SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor! [20.55903557920223]
The objective of this proposal is to draw the attention of the research community to the automatic processing of Internet memes.
The Memotion Analysis task released approximately 10K annotated memes with human-annotated labels, namely sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivational), and the corresponding intensity.
The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying the intensity of meme emotion.
arXiv Detail & Related papers (2020-08-09T18:17:33Z)
- DSC IIT-ISM at SemEval-2020 Task 8: Bi-Fusion Techniques for Deep Meme Emotion Analysis [5.259920715958942]
This paper presents our work on the Memotion Analysis shared task of SemEval 2020.
We propose a system which uses different bimodal fusion techniques to leverage the inter-modal dependency for sentiment and humor classification tasks.
arXiv Detail & Related papers (2020-07-28T17:23:35Z)
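As a generic example of one bimodal fusion technique (an illustrative gated-fusion sketch assuming precomputed text and image feature vectors; dimensions and names are hypothetical, not the authors' system):

```python
import torch
import torch.nn as nn

class GatedBimodalFusion(nn.Module):
    """Illustrative gated fusion of text and image features (not the paper's model)."""

    def __init__(self, text_dim: int, image_dim: int, hidden: int, num_classes: int):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden)
        self.image_proj = nn.Linear(image_dim, hidden)
        self.gate = nn.Linear(2 * hidden, hidden)  # decides the per-dimension mix
        self.classifier = nn.Linear(hidden, num_classes)

    def forward(self, text_feats: torch.Tensor, image_feats: torch.Tensor):
        t = torch.tanh(self.text_proj(text_feats))
        v = torch.tanh(self.image_proj(image_feats))
        z = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))
        fused = z * t + (1 - z) * v  # convex combination of the two modalities
        return self.classifier(fused)

# Toy shapes: 768-d text features, 2048-d image features, 3 sentiment classes.
logits = GatedBimodalFusion(768, 2048, 256, 3)(torch.randn(8, 768), torch.randn(8, 2048))
```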
- IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of Internet Memes [2.2385755093672044]
We present our approaches for the Memotion Analysis problem as posed in SemEval-2020 Task 8.
The goal of this task is to classify memes based on their emotional content and sentiment.
Our results show that a text-only approach, a simple Feed Forward Neural Network (FFNN) with Word2vec embeddings as input, outperforms all the others.
arXiv Detail & Related papers (2020-07-21T14:06:26Z)
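A minimal sketch of that text-only baseline, assuming pretrained Google News Word2vec vectors via gensim and a scikit-learn MLP as the feed-forward network (the captions and labels are made up, not the authors' code or data):

```python
import numpy as np
import gensim.downloader
from sklearn.neural_network import MLPClassifier

# Pretrained 300-d Word2vec vectors (large download on first use).
w2v = gensim.downloader.load("word2vec-google-news-300")

def embed(text: str) -> np.ndarray:
    # Average the Word2vec vectors of in-vocabulary tokens.
    vecs = [w2v[t] for t in text.lower().split() if t in w2v]
    return np.mean(vecs, axis=0) if vecs else np.zeros(300)

# Hypothetical meme captions and sentiment labels (0=negative, 1=neutral, 2=positive).
texts = ["exams tomorrow send help", "just another monday", "best friends forever"]
labels = [0, 1, 2]
X = np.stack([embed(t) for t in texts])

# A simple feed-forward network over the averaged embeddings.
clf = MLPClassifier(hidden_layer_sizes=(128,), max_iter=500).fit(X, labels)
print(clf.predict(X))
```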
This list is automatically generated from the titles and abstracts of the papers on this site.