Overview of Memotion 3: Sentiment and Emotion Analysis of Codemixed
Hinglish Memes
- URL: http://arxiv.org/abs/2309.06517v1
- Date: Tue, 12 Sep 2023 18:47:29 GMT
- Title: Overview of Memotion 3: Sentiment and Emotion Analysis of Codemixed
Hinglish Memes
- Authors: Shreyash Mishra, S Suryavardan, Megha Chakraborty, Parth Patwa, Anku
Rani, Aman Chadha, Aishwarya Reganti, Amitava Das, Amit Sheth, Manoj
Chinnakotla, Asif Ekbal and Srijan Kumar
- Abstract summary: We present an overview of the Memotion 3 shared task, held as part of the DeFactify 2 workshop at AAAI-23.
The task released a dataset of Hindi-English code-mixed memes annotated for Sentiment (Task A), Emotion (Task B), and Emotion intensity (Task C).
Over 50 teams registered for the shared task and 5 made final submissions on the Memotion 3 test set.
- Score: 36.34201719103715
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Analyzing memes on the internet has emerged as a crucial endeavor due to the
impact this multi-modal form of content wields in shaping online discourse.
Memes have become a powerful tool for expressing emotions and sentiments,
possibly even spreading hate and misinformation, through humor and sarcasm. In
this paper, we present an overview of the Memotion 3 shared task, held as part
of the DeFactify 2 workshop at AAAI-23. The task released a dataset of
Hindi-English code-mixed memes annotated for Sentiment (Task A), Emotion (Task
B), and Emotion intensity (Task C). Each of these is defined as an individual
task, and participants are ranked separately for each. Over 50 teams
registered for the shared task and 5 made final submissions on the Memotion 3
test set. CLIP, BERT modifications, ViT, etc. were the most popular models
among the participants, along with approaches such as student-teacher models,
fusion, and ensembling. The best final F1 scores are 34.41 for Task A, 79.77
for Task B, and 59.82 for Task C.
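Most submissions followed this recipe: encode the image and the meme text separately, fuse the features, and attach one classification head per task. Below is a minimal sketch of that fusion pattern, assuming precomputed CLIP/ViT-style image embeddings and BERT-style text embeddings; the feature dimensions and label counts are illustrative assumptions, not the organizers' or any team's actual configuration.

```python
import torch
import torch.nn as nn

class MemeFusionClassifier(nn.Module):
    """Concatenation fusion with one head per Memotion task (illustrative)."""
    def __init__(self, img_dim=512, txt_dim=768, hidden=256):
        super().__init__()
        self.fuse = nn.Sequential(
            nn.Linear(img_dim + txt_dim, hidden),
            nn.ReLU(),
            nn.Dropout(0.2),
        )
        # Label counts below are assumptions for illustration only.
        self.sentiment_head = nn.Linear(hidden, 3)  # Task A: sentiment
        self.emotion_head = nn.Linear(hidden, 4)    # Task B: emotion
        self.intensity_head = nn.Linear(hidden, 4)  # Task C: intensity

    def forward(self, img_feats, txt_feats):
        h = self.fuse(torch.cat([img_feats, txt_feats], dim=-1))
        return self.sentiment_head(h), self.emotion_head(h), self.intensity_head(h)

model = MemeFusionClassifier()
img = torch.randn(8, 512)  # stand-in for CLIP/ViT image embeddings
txt = torch.randn(8, 768)  # stand-in for BERT [CLS] text embeddings
a, b, c = model(img, txt)
print(a.shape, b.shape, c.shape)  # (8, 3) (8, 4) (8, 4)
```

Ensembling and student-teacher training, also popular among participants, typically wrap a backbone like this rather than change the fusion step itself.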
Related papers
- Emotion-LLaMA: Multimodal Emotion Recognition and Reasoning with Instruction Tuning [55.127202990679976]
We introduce the MERR dataset, containing 28,618 coarse-grained and 4,487 fine-grained annotated samples across diverse emotional categories.
This dataset enables models to learn from varied scenarios and generalize to real-world applications.
We propose Emotion-LLaMA, a model that seamlessly integrates audio, visual, and textual inputs through emotion-specific encoders.
arXiv Detail & Related papers (2024-06-17T03:01:22Z)
- SemEval-2024 Task 3: Multimodal Emotion Cause Analysis in Conversations [53.60993109543582]
SemEval-2024 Task 3, named Multimodal Emotion Cause Analysis in Conversations, aims at extracting all pairs of emotions and their corresponding causes from conversations.
Under different modality settings, it consists of two subtasks: Textual Emotion-Cause Pair Extraction in Conversations (TECPE) and Multimodal Emotion-Cause Pair Extraction in Conversations (MECPE)
In this paper, we introduce the task, dataset and evaluation settings, summarize the systems of the top teams, and discuss the findings of the participants.
arXiv Detail & Related papers (2024-05-19T09:59:00Z)
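The pair-extraction framing in the entry above is easiest to see as a data structure. The sketch below is an assumed representation of an emotion-cause pair, not the official task format:

```python
from dataclasses import dataclass

@dataclass
class EmotionCausePair:
    emotion_utterance_id: int  # utterance in which the emotion is expressed
    emotion: str               # e.g. "anger", "joy"
    cause_utterance_id: int    # utterance containing the cause
    cause_span: str            # textual cause span (TECPE-style setting)

# One extracted pair from a hypothetical conversation:
pair = EmotionCausePair(3, "anger", 2, "you broke my phone")
print(pair)
```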
- SemEval 2024 -- Task 10: Emotion Discovery and Reasoning its Flip in Conversation (EDiReF) [61.49972925493912]
SemEval-2024 Task 10 is a shared task centred on identifying emotions in code-mixed dialogues.
This task comprises three distinct subtasks - emotion recognition in conversation for code-mixed dialogues, emotion flip reasoning for code-mixed dialogues, and emotion flip reasoning for English dialogues.
A total of 84 participants engaged in this task, with the most adept systems attaining F1-scores of 0.70, 0.79, and 0.76 for the respective subtasks.
arXiv Detail & Related papers (2024-02-29T08:20:06Z)
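As a toy illustration of the "emotion flip" notion in the entry above (the actual subtask reasons about per-speaker flips and their triggers; this simplified version only locates label changes):

```python
def emotion_flips(labels):
    """Indices where the emotion label differs from the previous utterance."""
    return [i for i in range(1, len(labels)) if labels[i] != labels[i - 1]]

print(emotion_flips(["joy", "joy", "anger", "anger", "sadness"]))  # [2, 4]
```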
- Multitask Multimodal Prompted Training for Interactive Embodied Task Completion [48.69347134411864]
Embodied MultiModal Agent (EMMA) is a unified encoder-decoder model that reasons over images and trajectories.
By unifying all tasks as text generation, EMMA learns a language of actions which facilitates transfer across tasks.
arXiv Detail & Related papers (2023-11-07T15:27:52Z)
- NYCU-TWO at Memotion 3: Good Foundation, Good Teacher, then you have Good Meme Analysis [4.361904115604854]
This paper presents a robust solution to the Memotion 3.0 Shared Task.
The goal of this task is to classify the emotion and the corresponding intensity expressed by memes.
Understanding the multi-modal features of the given memes will be the key to solving the task.
arXiv Detail & Related papers (2023-02-13T03:25:37Z)
- Optimize_Prime@DravidianLangTech-ACL2022: Emotion Analysis in Tamil [1.0066310107046081]
This paper aims to perform an emotion analysis of social media comments in Tamil.
The task aimed to classify social media comments into categories of emotion like Joy, Anger, Trust, Disgust, etc.
arXiv Detail & Related papers (2022-04-19T18:47:18Z)
- A Novel Multi-Task Learning Method for Symbolic Music Emotion Recognition [76.65908232134203]
Symbolic Music Emotion Recognition (SMER) is the task of predicting music emotion from symbolic data, such as MIDI and MusicXML.
In this paper, we present a simple multi-task framework for SMER, which incorporates the emotion recognition task with other emotion-related auxiliary tasks.
arXiv Detail & Related papers (2022-01-15T07:45:10Z)
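A minimal sketch of the multi-task idea in the entry above, assuming a shared encoder with one head per task; the feature sizes, auxiliary task, and loss weight are placeholders, not the paper's implementation:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(nn.Linear(128, 64), nn.ReLU())  # shared across tasks
emotion_head = nn.Linear(64, 4)  # main task: emotion classes (assumed)
aux_head = nn.Linear(64, 2)      # hypothetical emotion-related auxiliary task

x = torch.randn(16, 128)         # stand-in for symbolic-music features
y_emotion = torch.randint(0, 4, (16,))
y_aux = torch.randint(0, 2, (16,))

h = encoder(x)
loss = F.cross_entropy(emotion_head(h), y_emotion) \
     + 0.5 * F.cross_entropy(aux_head(h), y_aux)  # weighted auxiliary loss
loss.backward()  # both tasks update the shared encoder
```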
- Exercise? I thought you said 'Extra Fries': Leveraging Sentence Demarcations and Multi-hop Attention for Meme Affect Analysis [18.23523076710257]
We propose a multi-hop attention-based deep neural network framework, called MHA-MEME.
Its prime objective is to leverage the spatial-domain correspondence between the visual modality (an image) and various textual segments to extract fine-grained feature representations for classification.
We evaluate MHA-MEME on the 'Memotion Analysis' dataset for all three sub-tasks - sentiment classification, affect classification, and affect class quantification.
arXiv Detail & Related papers (2021-03-23T08:21:37Z)
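A hedged sketch of the multi-hop cross-modal attention idea described in the entry above (not the authors' code): text-segment queries repeatedly attend over image-region features, refining the query at each hop.

```python
import torch
import torch.nn as nn

dim, hops = 64, 3
attn = nn.MultiheadAttention(embed_dim=dim, num_heads=4, batch_first=True)

img_regions = torch.randn(2, 49, dim)  # e.g. a 7x7 grid of visual features
txt_segments = torch.randn(2, 5, dim)  # demarcated textual segments

query = txt_segments
for _ in range(hops):                  # each hop re-attends over the image
    attended, _ = attn(query, img_regions, img_regions)
    query = query + attended           # residual refinement of the text query

pooled = query.mean(dim=1)             # fused representation for classification
print(pooled.shape)                    # torch.Size([2, 64])
```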
- DSC IIT-ISM at SemEval-2020 Task 8: Bi-Fusion Techniques for Deep Meme Emotion Analysis [5.259920715958942]
This paper presents our work on the Memotion Analysis shared task of SemEval 2020.
We propose a system which uses different bimodal fusion techniques to leverage the inter-modal dependency for sentiment and humor classification tasks.
arXiv Detail & Related papers (2020-07-28T17:23:35Z)
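Two common bimodal fusion variants, as a rough illustration of what "different bimodal fusion techniques" can mean (the paper's exact techniques are not reproduced here):

```python
import torch

img = torch.randn(8, 256)  # image features (dimensions assumed)
txt = torch.randn(8, 256)  # text features

concat_fused = torch.cat([img, txt], dim=-1)  # simple concatenation fusion
gate = torch.sigmoid(img * txt)               # multiplicative gate
gated_fused = gate * img + (1 - gate) * txt   # gated bimodal fusion
```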
- IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of Internet Memes [2.2385755093672044]
We present our approaches for the Memotion Analysis problem as posed in SemEval-2020 Task 8.
The goal of this task is to classify memes based on their emotional content and sentiment.
Our results show that a text-only approach, a simple Feed Forward Neural Network (FFNN) with Word2vec embeddings as input, outperforms all the other approaches.
arXiv Detail & Related papers (2020-07-21T14:06:26Z)
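A hedged sketch of the text-only baseline described in the entry above: averaged word vectors fed to a small feed-forward network. The 300-dimensional Word2vec-style input and the 3 sentiment classes are assumptions; this is not the team's actual code.

```python
import torch
import torch.nn as nn

ffnn = nn.Sequential(
    nn.Linear(300, 128), nn.ReLU(),  # 300-d Word2vec-style input
    nn.Linear(128, 3),               # sentiment classes (assumed)
)
tokens = torch.randn(10, 300)                    # one vector per token
logits = ffnn(tokens.mean(dim=0, keepdim=True))  # average-pool, then classify
print(logits.shape)                              # torch.Size([1, 3])
```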