DSC IIT-ISM at SemEval-2020 Task 8: Bi-Fusion Techniques for Deep Meme
Emotion Analysis
- URL: http://arxiv.org/abs/2008.00825v1
- Date: Tue, 28 Jul 2020 17:23:35 GMT
- Title: DSC IIT-ISM at SemEval-2020 Task 8: Bi-Fusion Techniques for Deep Meme
Emotion Analysis
- Authors: Pradyumna Gupta, Himanshu Gupta, Aman Sinha
- Abstract summary: This paper presents our work on the Memotion Analysis shared task of SemEval 2020.
We propose a system which uses different bimodal fusion techniques to leverage the inter-modal dependency for sentiment and humor classification tasks.
- Score: 5.259920715958942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Memes have become a ubiquitous social media entity, and the processing and
analysis of such multimodal data is currently an active area of research. This
paper presents our work on the Memotion Analysis shared task of SemEval 2020,
which involves the sentiment and humor analysis of memes. We propose a system
which uses different bimodal fusion techniques to leverage the inter-modal
dependency for the sentiment and humor classification tasks. Out of all our
experiments, the best system improved the baseline with macro F1 scores of
0.357 on Sentiment Classification (Task A), 0.510 on Humor Classification (Task
B) and 0.312 on Scales of Semantic Classes (Task C).
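As a rough illustration of the kind of bimodal fusion the abstract refers to, the minimal PyTorch sketch below combines a text embedding and an image embedding through a learned gate before a three-way sentiment head. The feature dimensions (768-d text, 512-d image) and the gating scheme are assumptions made for illustration; this is not the authors' exact architecture.

```python
# Minimal sketch (not the paper's exact model): gated bimodal fusion of
# text and image features followed by a classification head.
import torch
import torch.nn as nn

class GatedBimodalFusion(nn.Module):
    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256, num_classes=3):
        super().__init__()
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        # The gate decides, per dimension, how much to trust each modality.
        self.gate = nn.Linear(2 * hidden_dim, hidden_dim)
        self.classifier = nn.Linear(hidden_dim, num_classes)

    def forward(self, text_feat, image_feat):
        t = torch.tanh(self.text_proj(text_feat))
        v = torch.tanh(self.image_proj(image_feat))
        z = torch.sigmoid(self.gate(torch.cat([t, v], dim=-1)))
        fused = z * t + (1 - z) * v  # convex combination of the two modalities
        return self.classifier(fused)

# Usage with dummy features for a batch of 4 memes:
model = GatedBimodalFusion()
logits = model(torch.randn(4, 768), torch.randn(4, 512))
print(logits.shape)  # torch.Size([4, 3]) -> positive / negative / neutral
```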
Related papers
- Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z) - PetKaz at SemEval-2024 Task 3: Advancing Emotion Classification with an LLM for Emotion-Cause Pair Extraction in Conversations [4.463184061618504]
We present our submission to SemEval-2024 Task 3, "The Competition of Multimodal Emotion Cause Analysis in Conversations".
Our approach relies on combining fine-tuned GPT-3.5 for emotion classification and a BiLSTM-based neural network to detect causes.
arXiv Detail & Related papers (2024-04-08T13:25:03Z) - UniSA: Unified Generative Framework for Sentiment Analysis [48.78262926516856]
Sentiment analysis aims to understand people's emotional states and predict emotional categories based on multimodal information.
It consists of several subtasks, such as emotion recognition in conversation (ERC), aspect-based sentiment analysis (ABSA), and multimodal sentiment analysis (MSA)
arXiv Detail & Related papers (2023-09-04T03:49:30Z) - Incorporating Emotions into Health Mention Classification Task on Social
Media [70.23889100356091]
We present a framework for health mention classification that incorporates affective features.
We evaluate our approach on 5 HMC-related datasets from different social media platforms.
Our results indicate that HMC models infused with emotional knowledge are an effective alternative.
arXiv Detail & Related papers (2022-12-09T18:38:41Z) - Set-based Meta-Interpolation for Few-Task Meta-Learning [79.4236527774689]
We propose a novel domain-agnostic task augmentation method, Meta-Interpolation, to densify the meta-training task distribution.
We empirically validate the efficacy of Meta-Interpolation on eight datasets spanning across various domains.
arXiv Detail & Related papers (2022-05-20T06:53:03Z) - Multimodal Analysis of memes for sentiment extraction [0.0]
The study is based on the Memotion dataset, which involves categorising memes based on irony, comedy, motivation, and overall-sentiment.
The best algorithm achieved a macro F1 score of 0.633 for humour classification, 0.55 for motivation classification, 0.61 for sarcasm classification, and 0.575 for overall sentiment of the meme.
arXiv Detail & Related papers (2021-12-22T12:57:05Z) - Transfer Meta-Learning: Information-Theoretic Bounds and Information
Meta-Risk Minimization [47.7605527786164]
Meta-learning automatically infers an inductive bias by observing data from a number of related tasks.
We introduce the problem of transfer meta-learning, in which tasks are drawn from a target task environment during meta-testing.
arXiv Detail & Related papers (2020-11-04T12:55:43Z) - Dif-MAML: Decentralized Multi-Agent Meta-Learning [54.39661018886268]
We propose a cooperative multi-agent meta-learning algorithm, referred to as Diffusion-based MAML or Dif-MAML.
We show that the proposed strategy allows a collection of agents to attain agreement at a linear rate and to converge to a stationary point of the aggregate MAML objective.
Simulation results illustrate the theoretical findings and the superior performance relative to the traditional non-cooperative setting.
arXiv Detail & Related papers (2020-10-06T16:51:09Z) - UPB at SemEval-2020 Task 8: Joint Textual and Visual Modeling in a
Multi-Task Learning Architecture for Memotion Analysis [1.2233362977312945]
We describe the system developed by our team for SemEval-2020 Task 8: Memotion Analysis.
We introduce a novel system to analyze these posts, a multimodal multi-task learning architecture that combines ALBERT for text encoding with VGG-16 for image representation.
Our approach achieves good performance on each of the three subtasks of the current competition, ranking 11th for Subtask A (0.3453 macro F1-score), 1st for Subtask B (0.5183 macro F1-score), and 3rd for Subtask C (0.3171 macro F1-score)
arXiv Detail & Related papers (2020-09-06T17:17:41Z) - SemEval-2020 Task 8: Memotion Analysis -- The Visuo-Lingual Metaphor! [20.55903557920223]
The objective of this proposal is to bring the attention of the research community towards the automatic processing of Internet memes.
The Memotion Analysis task released approximately 10K annotated memes, with human-annotated labels, namely sentiment (positive, negative, neutral), type of emotion (sarcastic, funny, offensive, motivation) and corresponding intensity.
The challenge consisted of three subtasks: sentiment (positive, negative, and neutral) analysis of memes, overall emotion (humour, sarcasm, offensive, and motivational) classification of memes, and classifying intensity of meme emotion.
arXiv Detail & Related papers (2020-08-09T18:17:33Z) - IITK at SemEval-2020 Task 8: Unimodal and Bimodal Sentiment Analysis of
Internet Memes [2.2385755093672044]
We present our approaches for the Memotion Analysis problem as posed in SemEval-2020 Task 8.
The goal of this task is to classify memes based on their emotional content and sentiment.
Our results show that a text-only approach, a simple Feed Forward Neural Network (FFNN) with Word2vec embeddings as input, outperforms all the others (a minimal sketch of such a baseline follows this list).
arXiv Detail & Related papers (2020-07-21T14:06:26Z)
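For contrast with the fusion-based systems above, the sketch below illustrates a text-only baseline in the spirit of the IITK entry: mean-pooled Word2vec caption vectors fed to a small feed-forward network. The 300-d input, layer sizes, and mean pooling are assumptions for illustration, not details taken from that paper.

```python
# Minimal sketch of a text-only meme-sentiment baseline: averaged
# pre-trained Word2vec vectors for the caption go through a small FFNN.
import torch
import torch.nn as nn

class TextOnlyFFNN(nn.Module):
    def __init__(self, embed_dim=300, hidden_dim=128, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(embed_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.3),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, avg_word2vec):  # (batch, 300) mean-pooled caption vectors
        return self.net(avg_word2vec)

# Usage with dummy averaged embeddings for a batch of 8 captions:
model = TextOnlyFFNN()
print(model(torch.randn(8, 300)).shape)  # torch.Size([8, 3])
```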
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences of its use.