Leveraging Sentiment Analysis Knowledge to Solve Emotion Detection Tasks
- URL: http://arxiv.org/abs/2111.03715v1
- Date: Fri, 5 Nov 2021 20:06:58 GMT
- Title: Leveraging Sentiment Analysis Knowledge to Solve Emotion Detection Tasks
- Authors: Maude Nguyen-The, Guillaume-Alexandre Bilodeau and Jan Rockemann
- Abstract summary: We present a Transformer-based model with a Fusion of Adapter layers to improve emotion detection on a large-scale dataset.
We obtained state-of-the-art results for emotion recognition on CMU-MOSEI even while using only the textual modality.
- Score: 11.928873764689458
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Identifying and understanding underlying sentiment or emotions in text is a
key component of multiple natural language processing applications. While
simple polarity sentiment analysis is a well-studied subject, fewer advances
have been made in identifying more complex, finer-grained emotions using only
textual data. In this paper, we present a Transformer-based model with a Fusion
of Adapter layers which leverages knowledge from simpler sentiment analysis
tasks to improve emotion detection on a large-scale dataset such as CMU-MOSEI,
using the textual modality only. Results show that our proposed
method is competitive with other approaches. We obtained state-of-the-art
results for emotion recognition on CMU-MOSEI even while using only the textual
modality.
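The Fusion of Adapter layers named in the abstract pre-trains small adapter modules on simpler sentiment tasks and then learns an attention over their outputs for emotion detection. Below is a minimal PyTorch sketch of that general adapter-fusion pattern; the bottleneck size, the attention form, and all names are illustrative assumptions, not the authors' implementation.
```python
import torch
import torch.nn as nn

class Adapter(nn.Module):
    """Bottleneck adapter: down-project, nonlinearity, up-project, residual."""
    def __init__(self, hidden_dim: int, bottleneck_dim: int = 64):
        super().__init__()
        self.down = nn.Linear(hidden_dim, bottleneck_dim)
        self.up = nn.Linear(bottleneck_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return h + self.up(torch.relu(self.down(h)))

class AdapterFusion(nn.Module):
    """Attention over the outputs of several task-specific adapters."""
    def __init__(self, hidden_dim: int, num_adapters: int):
        super().__init__()
        self.adapters = nn.ModuleList(Adapter(hidden_dim) for _ in range(num_adapters))
        self.query = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Stack each adapter's output: (batch, seq, num_adapters, hidden)
        outs = torch.stack([a(h) for a in self.adapters], dim=2)
        # Score the transformer state against each adapter's output
        scores = torch.einsum("bsh,bsah->bsa", self.query(h), outs)
        weights = scores.softmax(dim=-1)
        # Convex combination of the adapter outputs
        return torch.einsum("bsa,bsah->bsh", weights, outs)

# Usage: fuse two sentiment adapters on top of a transformer hidden state.
hidden = torch.randn(8, 32, 768)            # (batch, seq_len, hidden_dim)
fusion = AdapterFusion(hidden_dim=768, num_adapters=2)
emotion_features = fusion(hidden)           # fed to an emotion classifier head
print(emotion_features.shape)               # torch.Size([8, 32, 768])
```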
Related papers
- Large Language Models Meet Text-Centric Multimodal Sentiment Analysis: A Survey [66.166184609616]
ChatGPT has opened up immense potential for applying large language models (LLMs) to text-centric multimodal tasks.
It is still unclear how existing LLMs can adapt better to text-centric multimodal sentiment analysis tasks.
arXiv Detail & Related papers (2024-06-12T10:36:27Z)
- Self-supervised Gait-based Emotion Representation Learning from Selective Strongly Augmented Skeleton Sequences [4.740624855896404]
We propose a contrastive learning framework utilizing selective strong augmentation for self-supervised gait-based emotion representation.
Our approach is validated on the Emotion-Gait (E-Gait) and Emilya datasets and outperforms the state-of-the-art methods under different evaluation protocols.
arXiv Detail & Related papers (2024-05-08T09:13:10Z)
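The entry above names a contrastive framework over weakly and strongly augmented skeleton sequences but does not give the objective; a common choice is an NT-Xent loss between the two views. A minimal sketch under that assumption (the encoder, augmentations, and temperature are placeholders, not the paper's specifics):
```python
import torch
import torch.nn.functional as F

def nt_xent(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent contrastive loss between two batches of embeddings.

    z1[i] and z2[i] are embeddings of two augmented views (e.g. a weakly and a
    selectively strongly augmented skeleton sequence) of the same sample.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)                  # (2N, d)
    sim = z @ z.t() / temperature                   # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))               # exclude self-pairs
    n = z1.size(0)
    # The positive of sample i is its other view at index i+n (and vice versa)
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)

# Toy usage with random "encoder outputs" for 16 sequences
loss = nt_xent(torch.randn(16, 128), torch.randn(16, 128))
print(loss.item())
```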
- Two in One Go: Single-stage Emotion Recognition with Decoupled Subject-context Transformer [78.35816158511523]
We present a single-stage emotion recognition approach, employing a Decoupled Subject-Context Transformer (DSCT) for simultaneous subject localization and emotion classification.
We evaluate our single-stage framework on two widely used context-aware emotion recognition datasets, CAER-S and EMOTIC.
arXiv Detail & Related papers (2024-04-26T07:30:32Z)
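The summary above does not detail how subject localization and emotion classification share one stage. One plausible reading, sketched below, uses separate learned subject and context query sets over a standard transformer decoder; this decoupled-query sketch is an assumption, not the authors' DSCT module.
```python
import torch
import torch.nn as nn

class DecoupledQueriesDecoder(nn.Module):
    """One decoder pass with separate subject and context query sets."""
    def __init__(self, d_model: int = 256, num_subjects: int = 10, num_emotions: int = 7):
        super().__init__()
        layer = nn.TransformerDecoderLayer(d_model, nhead=8, batch_first=True)
        self.decoder = nn.TransformerDecoder(layer, num_layers=3)
        self.subject_queries = nn.Parameter(torch.randn(num_subjects, d_model))
        self.context_queries = nn.Parameter(torch.randn(num_subjects, d_model))
        self.box_head = nn.Linear(d_model, 4)               # subject bounding box
        self.emotion_head = nn.Linear(2 * d_model, num_emotions)

    def forward(self, image_features: torch.Tensor):
        b = image_features.size(0)
        queries = torch.cat([self.subject_queries, self.context_queries], dim=0)
        queries = queries.unsqueeze(0).expand(b, -1, -1)
        out = self.decoder(queries, image_features)         # cross-attend to image
        subj, ctx = out.chunk(2, dim=1)                     # decouple the two sets
        boxes = self.box_head(subj).sigmoid()               # localization
        emotions = self.emotion_head(torch.cat([subj, ctx], dim=-1))
        return boxes, emotions

model = DecoupledQueriesDecoder()
boxes, emotions = model(torch.randn(2, 196, 256))           # 14x14 feature tokens
print(boxes.shape, emotions.shape)                          # (2, 10, 4) (2, 10, 7)
```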
- VLLMs Provide Better Context for Emotion Understanding Through Common Sense Reasoning [66.23296689828152]
We leverage the capabilities of Vision-and-Large-Language Models to enhance in-context emotion classification.
In the first stage, we propose prompting VLLMs to generate natural-language descriptions of the subject's apparent emotion.
In the second stage, these descriptions serve as contextual information and, together with the image input, are used to train a transformer-based architecture.
arXiv Detail & Related papers (2024-04-10T15:09:15Z)
- Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment Analysis [0.9065034043031668]
A multi-modal emotion recognition method was established by combining a two-channel convolutional neural network with a ring network.
The words were vectorized with GloVe, and the resulting word vectors were fed into the convolutional neural network.
arXiv Detail & Related papers (2023-11-19T05:49:39Z)
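The textual pipeline above, GloVe word vectors fed into a convolutional network, matches the standard text-CNN pattern. A minimal sketch follows; the filter sizes, dimensions, and the randomly initialized embedding standing in for pretrained GloVe weights are assumptions.
```python
import torch
import torch.nn as nn

class TextCNN(nn.Module):
    """Convolution over a sequence of (GloVe-style) word vectors."""
    def __init__(self, vocab_size: int, embed_dim: int = 100, num_classes: int = 6):
        super().__init__()
        # In practice, load pretrained GloVe weights here, e.g.:
        # self.embedding = nn.Embedding.from_pretrained(glove_matrix, freeze=False)
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.convs = nn.ModuleList(
            nn.Conv1d(embed_dim, 64, kernel_size=k) for k in (3, 4, 5)
        )
        self.classifier = nn.Linear(64 * 3, num_classes)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        x = self.embedding(token_ids).transpose(1, 2)       # (batch, embed, seq)
        # Max-pool each feature map over time, then concatenate
        pooled = [conv(x).relu().max(dim=2).values for conv in self.convs]
        return self.classifier(torch.cat(pooled, dim=1))

model = TextCNN(vocab_size=20000)
logits = model(torch.randint(0, 20000, (4, 50)))            # 4 sentences, 50 tokens
print(logits.shape)                                         # torch.Size([4, 6])
```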
- An Empirical Study and Improvement for Speech Emotion Recognition [22.250228893114066]
Multimodal speech emotion recognition aims to detect speakers' emotions from audio and text.
In this work, we consider a simple yet important problem: how to fuse audio and text modality information.
Empirical results show that our method obtains new state-of-the-art results on the IEMOCAP dataset.
arXiv Detail & Related papers (2023-04-08T03:24:06Z)
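The fusion question posed in the entry above is often answered with late fusion of utterance-level audio and text features. The gated variant below is one illustrative option, not the paper's method; all dimensions and the gating scheme are assumptions.
```python
import torch
import torch.nn as nn

class GatedAudioTextFusion(nn.Module):
    """Late fusion: a learned gate weighs audio vs. text utterance features."""
    def __init__(self, audio_dim: int = 512, text_dim: int = 768,
                 hidden: int = 256, num_emotions: int = 4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, hidden)
        self.text_proj = nn.Linear(text_dim, hidden)
        self.gate = nn.Linear(2 * hidden, hidden)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, audio_feat: torch.Tensor, text_feat: torch.Tensor):
        a = torch.tanh(self.audio_proj(audio_feat))
        t = torch.tanh(self.text_proj(text_feat))
        g = torch.sigmoid(self.gate(torch.cat([a, t], dim=-1)))
        fused = g * a + (1 - g) * t           # per-dimension modality weighting
        return self.classifier(fused)

fusion = GatedAudioTextFusion()
logits = fusion(torch.randn(8, 512), torch.randn(8, 768))
print(logits.shape)                           # torch.Size([8, 4])
```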
- REDAffectiveLM: Leveraging Affect Enriched Embedding and Transformer-based Neural Language Model for Readers' Emotion Detection [3.6678641723285446]
We propose a novel approach for Readers' Emotion Detection from short-text documents using a deep learning model called REDAffectiveLM.
We leverage context-specific and affect-enriched representations by using a transformer-based pre-trained language model in tandem with an affect-enriched Bi-LSTM+Attention network.
arXiv Detail & Related papers (2023-01-21T19:28:25Z)
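The Bi-LSTM+Attention component named above follows a common pattern: additive attention pools the Bi-LSTM states into a single document vector for classification. A minimal sketch, with the affect-enriched inputs replaced by generic embeddings and all sizes assumed:
```python
import torch
import torch.nn as nn

class BiLSTMAttention(nn.Module):
    """Bi-LSTM whose hidden states are pooled by additive attention."""
    def __init__(self, embed_dim: int = 300, hidden: int = 128, num_emotions: int = 8):
        super().__init__()
        self.lstm = nn.LSTM(embed_dim, hidden, bidirectional=True, batch_first=True)
        self.attn = nn.Linear(2 * hidden, 1)
        self.classifier = nn.Linear(2 * hidden, num_emotions)

    def forward(self, embeddings: torch.Tensor) -> torch.Tensor:
        states, _ = self.lstm(embeddings)                # (batch, seq, 2*hidden)
        weights = self.attn(states).softmax(dim=1)       # attention over tokens
        doc_vector = (weights * states).sum(dim=1)       # weighted pooling
        return self.classifier(doc_vector)

model = BiLSTMAttention()
# Inputs would be affect-enriched word embeddings; random stand-ins here.
print(model(torch.randn(4, 30, 300)).shape)              # torch.Size([4, 8])
```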
- Holistic Visual-Textual Sentiment Analysis with Prior Models [64.48229009396186]
We propose a holistic method that achieves robust visual-textual sentiment analysis.
The proposed method consists of four parts: (1) a visual-textual branch to learn features directly from data for sentiment analysis, (2) a visual expert branch with a set of pre-trained "expert" encoders to extract selected semantic visual features, (3) a CLIP branch to implicitly model visual-textual correspondence, and (4) a multimodal feature fusion network based on BERT to fuse multimodal features and make sentiment predictions.
arXiv Detail & Related papers (2022-11-23T14:40:51Z)
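A hedged sketch of the four-branch design enumerated above: each branch produces a feature vector and a fusion network combines them. A small MLP stands in for the paper's BERT-based fusion, and every dimension here is an assumption.
```python
import torch
import torch.nn as nn

class FourBranchFusion(nn.Module):
    """Fuse features from the visual-textual, expert, and CLIP branches."""
    def __init__(self, dims=(512, 512, 512, 512), num_classes: int = 3):
        super().__init__()
        # One projection per branch: raw visual-textual, expert encoders,
        # CLIP image, CLIP text
        self.projs = nn.ModuleList(nn.Linear(d, 256) for d in dims)
        self.fusion = nn.Sequential(                  # stand-in for BERT fusion
            nn.Linear(256 * len(dims), 256), nn.ReLU(), nn.Linear(256, num_classes)
        )

    def forward(self, branch_feats):
        projected = [p(f).relu() for p, f in zip(self.projs, branch_feats)]
        return self.fusion(torch.cat(projected, dim=-1))

model = FourBranchFusion()
feats = [torch.randn(2, 512) for _ in range(4)]       # one vector per branch
print(model(feats).shape)                             # torch.Size([2, 3])
```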
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- EmoDNN: Understanding emotions from short texts through a deep neural network ensemble [2.459874436804819]
We propose a framework that infers latent individual aspects from short texts.
We also present a novel ensemble classifier equipped with dynamic dropout convnets to extract emotions from textual context.
Our proposed model achieves higher performance in recognizing emotions from noisy content.
arXiv Detail & Related papers (2021-06-03T09:17:34Z)
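The ensemble above can be sketched as several small convnets with independent dropout whose class probabilities are averaged; since the summary does not specify the dynamic dropout policy, fixed per-member rates are assumed here.
```python
import torch
import torch.nn as nn

class SmallConvNet(nn.Module):
    """One ensemble member: a tiny text convnet with its own dropout rate."""
    def __init__(self, embed_dim: int = 100, num_emotions: int = 6, p_drop: float = 0.3):
        super().__init__()
        self.conv = nn.Conv1d(embed_dim, 64, kernel_size=3)
        self.dropout = nn.Dropout(p_drop)
        self.out = nn.Linear(64, num_emotions)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.conv(x.transpose(1, 2)).relu().max(dim=2).values
        return self.out(self.dropout(h))

def ensemble_predict(members, x):
    """Average the members' softmax outputs for a more noise-robust prediction."""
    probs = torch.stack([m(x).softmax(dim=-1) for m in members])
    return probs.mean(dim=0)

members = [SmallConvNet(p_drop=p) for p in (0.2, 0.3, 0.4)]
x = torch.randn(4, 25, 100)                   # 4 short texts as word embeddings
print(ensemble_predict(members, x).shape)     # torch.Size([4, 6])
```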
- A Deep Neural Framework for Contextual Affect Detection [51.378225388679425]
A short, simple text that carries no emotion on its own can convey strong emotions when read along with its context.
We propose a Contextual Affect Detection framework which learns the inter-dependence of words in a sentence.
arXiv Detail & Related papers (2020-01-28T05:03:15Z)
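A minimal sketch of the idea above: encode each utterance, then carry a recurrent state across utterances so that context conditions each prediction. The hierarchical GRU design and all sizes are assumptions, not the paper's framework.
```python
import torch
import torch.nn as nn

class ContextualAffect(nn.Module):
    """Classify each utterance using a GRU state carried over its context."""
    def __init__(self, embed_dim: int = 300, hidden: int = 128, num_emotions: int = 7):
        super().__init__()
        self.utterance_gru = nn.GRU(embed_dim, hidden, batch_first=True)
        self.context_gru = nn.GRU(hidden, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_emotions)

    def forward(self, dialogue: torch.Tensor) -> torch.Tensor:
        # dialogue: (batch, num_utterances, num_words, embed_dim)
        b, u, w, d = dialogue.shape
        _, h = self.utterance_gru(dialogue.reshape(b * u, w, d))
        utt_vectors = h[-1].reshape(b, u, -1)           # one vector per utterance
        ctx_states, _ = self.context_gru(utt_vectors)   # carries context forward
        return self.classifier(ctx_states)              # emotion per utterance

model = ContextualAffect()
print(model(torch.randn(2, 5, 12, 300)).shape)           # torch.Size([2, 5, 7])
```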
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.