Team Neuro at SemEval-2020 Task 8: Multi-Modal Fine Grain Emotion
Classification of Memes using Multitask Learning
- URL: http://arxiv.org/abs/2005.10915v1
- Date: Thu, 21 May 2020 21:29:44 GMT
- Title: Team Neuro at SemEval-2020 Task 8: Multi-Modal Fine Grain Emotion
Classification of Memes using Multitask Learning
- Authors: Sourya Dipta Das, Soumil Mandal
- Abstract summary: We describe the system that we used for the memotion analysis challenge, which is Task 8 of SemEval-2020.
This challenge had three subtasks where affect-based sentiment classification of the memes was required, along with intensities.
The system we proposed combines the three tasks into a single one by representing it as a multi-label hierarchical classification problem.
- Score: 7.145975932644256
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this article, we describe the system that we used for the memotion
analysis challenge, which is Task 8 of SemEval-2020. This challenge had three
subtasks where affect-based sentiment classification of the memes was required,
along with intensities. The system we proposed combines the three tasks into a
single one by representing it as a multi-label hierarchical classification
problem. Here, a multi-task (joint) learning procedure is used to train our
model. We used dual channels to extract text-based and image-based features
from separate deep neural network backbones and aggregated them to create
task-specific features. These task-specific aggregated feature vectors were then
passed on to smaller networks with dense layers, each one assigned to
predicting one type of fine-grained sentiment label. Our proposed method shows
the superiority of this system over the other best models from the challenge
on a few of the tasks.
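To make the described pipeline concrete, below is a minimal PyTorch sketch of such a dual-channel multi-task model. The specific choices here (a ResNet-18 image backbone, an LSTM text encoder, the hidden sizes, and the per-head label counts) are illustrative assumptions, not the authors' reported configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

class MemotionMultiTaskNet(nn.Module):
    """Dual-channel multi-task model: image and text backbones feed a shared
    aggregated feature vector into one small dense head per subtask."""
    def __init__(self, text_dim=300, text_hidden=256, task_label_counts=(3, 4, 4)):
        super().__init__()
        # Image channel: a pretrained CNN backbone (ResNet-18 as a stand-in).
        cnn = models.resnet18(weights="IMAGENET1K_V1")
        self.image_encoder = nn.Sequential(*list(cnn.children())[:-1])  # (B, 512, 1, 1)
        # Text channel: an LSTM over precomputed word embeddings (a stand-in
        # for whatever text backbone was actually used).
        self.text_encoder = nn.LSTM(text_dim, text_hidden, batch_first=True)
        fused_dim = 512 + text_hidden
        # One small dense network per fine-grained label type.
        self.heads = nn.ModuleList(
            nn.Sequential(nn.Linear(fused_dim, 128), nn.ReLU(), nn.Linear(128, n))
            for n in task_label_counts  # label counts are hypothetical here
        )

    def forward(self, image, text_emb):
        img_feat = self.image_encoder(image).flatten(1)   # (B, 512)
        _, (h_n, _) = self.text_encoder(text_emb)         # h_n: (1, B, text_hidden)
        fused = torch.cat([img_feat, h_n[-1]], dim=-1)    # aggregate both channels
        return [head(fused) for head in self.heads]       # one logit vector per subtask

def joint_loss(outputs, targets):
    # Joint (multi-task) training: sum the per-task losses so both backbones
    # receive gradients from every head.
    criterion = nn.CrossEntropyLoss()
    return sum(criterion(o, t) for o, t in zip(outputs, targets))
```

Summing the per-task losses is the simplest joint-learning objective; because the backbones are shared across heads, each subtask acts as a regularizer for the others.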
Related papers
- Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks [69.38572074372392]
We present the first results proving that feature learning occurs during training with a nonlinear model on multiple tasks.
Our key insight is that multi-task pretraining induces a pseudo-contrastive loss that favors representations that align points that typically have the same label across tasks.
arXiv Detail & Related papers (2023-07-13T16:39:08Z)
- An Ensemble Approach for Multiple Emotion Descriptors Estimation Using Multi-task Learning [12.589338141771385]
This paper illustrates our submission method to the fourth Affective Behavior Analysis in-the-Wild (ABAW) Competition.
Instead of using only face information, we employ the full information from the provided dataset, containing the face and the context around it.
The proposed system achieves the performance of 0.917 on the MTL Challenge validation dataset.
arXiv Detail & Related papers (2022-07-22T04:57:56Z)
- Codec at SemEval-2022 Task 5: Multi-Modal Multi-Transformer Misogynous Meme Classification Framework [0.0]
We describe our work towards building a generic framework for both multi-modal embedding and multi-label binary classification tasks.
We are participating in Task 5 (Multimedia Automatic Misogyny Identification) of the SemEval 2022 competition.
arXiv Detail & Related papers (2022-06-14T22:37:25Z)
- Sparsely Activated Mixture-of-Experts are Robust Multi-Task Learners [67.5865966762559]
We study whether sparsely activated Mixture-of-Experts (MoE) improve multi-task learning.
We devise task-aware gating functions to route examples from different tasks to specialized experts.
This results in a sparsely activated multi-task model with a large number of parameters, but with the same computational cost as a dense model (a minimal sketch of this routing follows below).
arXiv Detail & Related papers (2022-04-16T00:56:12Z)
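As a rough illustration of task-aware routing (not the paper's exact formulation), the sketch below gives each task its own gating function over a shared pool of feed-forward experts; the layer sizes and top-k routing scheme are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TaskAwareMoE(nn.Module):
    """Sparsely activated MoE layer with one gating function per task."""
    def __init__(self, num_experts=8, num_tasks=3, d_model=256, d_hidden=512, k=1):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, d_hidden), nn.ReLU(),
                          nn.Linear(d_hidden, d_model))
            for _ in range(num_experts)
        )
        # Task-aware gating: routing depends on the example *and* its task.
        self.gates = nn.ModuleList(nn.Linear(d_model, num_experts)
                                   for _ in range(num_tasks))
        self.k = k  # only k experts run per example -> dense-model compute cost

    def forward(self, x, task_id):
        logits = self.gates[task_id](x)             # (B, num_experts)
        top_val, top_idx = logits.topk(self.k, -1)  # keep the top-k experts only
        weights = F.softmax(top_val, dim=-1)
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = top_idx[:, slot] == e        # examples routed to expert e
                if mask.any():
                    out[mask] += weights[mask, slot].unsqueeze(1) * expert(x[mask])
        return out
```

Because each example only activates k experts, adding experts grows capacity without growing the per-example compute, which is the trade-off the paper studies.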
- Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory (see the sketch below).
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z)
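A minimal sketch of the skill-inventory idea, under the simplifying assumption of a soft (sigmoid) allocation rather than the paper's discrete skill subsets:

```python
import torch
import torch.nn as nn

class SkillModularLayer(nn.Module):
    """A linear layer whose weights are a task-specific mixture of skill modules."""
    def __init__(self, num_skills=8, num_tasks=3, in_dim=256, out_dim=256):
        super().__init__()
        # Shared inventory: one weight matrix per latent skill.
        self.skill_weights = nn.Parameter(0.02 * torch.randn(num_skills, in_dim, out_dim))
        # Learnable task-to-skill allocation logits (a soft relaxation of the
        # discrete skill subsets described in the paper).
        self.alloc_logits = nn.Parameter(torch.zeros(num_tasks, num_skills))

    def forward(self, x, task_id):
        alloc = torch.sigmoid(self.alloc_logits[task_id])  # soft skill subset
        alloc = alloc / (alloc.sum() + 1e-8)               # normalize the mixture
        # Combine the skill matrices into this task's effective weights.
        w = torch.einsum("s,sio->io", alloc, self.skill_weights)
        return x @ w
```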
- Multi-Task Learning with Sequence-Conditioned Transporter Networks [67.57293592529517]
We aim to solve multi-task learning through the lens of sequence-conditioning and weighted sampling.
First, we propose MultiRavens, a new benchmark suite aimed at compositional tasks, which allows defining custom task combinations.
Second, we propose a vision-based end-to-end system architecture, Sequence-Conditioned Transporter Networks, which augments Goal-Conditioned Transporter Networks with sequence-conditioning and weighted sampling.
arXiv Detail & Related papers (2021-09-15T21:19:11Z)
- Low Resource Multi-Task Sequence Tagging -- Revisiting Dynamic Conditional Random Fields [67.51177964010967]
We compare different models for low resource multi-task sequence tagging that leverage dependencies between label sequences for different tasks.
We find that explicit modeling of inter-dependencies between task predictions outperforms single-task as well as standard multi-task models.
arXiv Detail & Related papers (2020-05-01T07:11:34Z)
- MTI-Net: Multi-Scale Task Interaction Networks for Multi-Task Learning [82.62433731378455]
We show that tasks with high affinity at a certain scale are not guaranteed to retain this behaviour at other scales.
We propose a novel architecture, namely MTI-Net, that builds upon this finding.
arXiv Detail & Related papers (2020-01-19T21:02:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.