Exploring Deep Neural Networks and Transfer Learning for Analyzing
Emotions in Tweets
- URL: http://arxiv.org/abs/2012.06025v1
- Date: Thu, 10 Dec 2020 23:45:53 GMT
- Authors: Yasas Senarath, Uthayasanker Thayasivam
- Abstract summary: We present an experiment on using deep learning and transfer learning techniques for emotion analysis in tweets.
We show in our analysis that the proposed models outperform the state-of-the-art in emotion classification.
- Score: 2.0305676256390934
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we present an experiment on using deep learning and transfer
learning techniques for emotion analysis in tweets and suggest a method to
interpret our deep learning models. The proposed approach for emotion analysis
combines a Long Short Term Memory (LSTM) network with a Convolutional Neural
Network (CNN). Then we extend this approach to emotion intensity prediction
using a transfer learning technique. Furthermore, we propose a technique to
visualize the importance of each word in a tweet to get a better understanding
of the model. Experimentally, we show in our analysis that the proposed models
outperform the state-of-the-art in emotion classification while maintaining
competitive results in predicting emotion intensity.
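The abstract mentions a technique for visualizing the importance of each word in a tweet, but does not specify how the importances are computed. A minimal sketch of one common approach, leave-one-out occlusion, is shown below; `toy_score` is a hypothetical stand-in for the trained LSTM-CNN model's emotion confidence, not the paper's actual scorer.

```python
def word_importance(tweet, score_fn):
    """Leave-one-out occlusion: the importance of word i is the drop in the
    model's emotion score when that word is removed from the tweet."""
    words = tweet.split()
    base = score_fn(words)
    importances = []
    for i in range(len(words)):
        occluded = words[:i] + words[i + 1:]          # tweet without word i
        importances.append(base - score_fn(occluded))  # score drop = importance
    return list(zip(words, importances))

# Toy scorer standing in for a trained classifier: fraction of "joy" cue words.
JOY_CUES = {"happy", "love", "great"}

def toy_score(words):
    return sum(1.0 for w in words if w.lower() in JOY_CUES) / max(len(words), 1)

# "happy" receives the largest importance; filler words get small or
# negative scores because removing them raises the cue density.
print(word_importance("so happy today", toy_score))
```

With a real model, `score_fn` would run the network on the occluded token sequence (e.g. by replacing the word with a padding token rather than deleting it, to keep input lengths fixed).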
Related papers
- Emotion Recognition Using Transformers with Masked Learning [7.650385662008779]
This study leverages the Vision Transformer (ViT) and Transformer models to focus on the estimation of Valence-Arousal (VA).
This approach transcends traditional Convolutional Neural Networks (CNNs) and Long Short-Term Memory (LSTM) based methods, proposing a new Transformer-based framework.
arXiv Detail & Related papers (2024-03-19T12:26:53Z)
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment Analysis [0.9065034043031668]
A multi-modal emotion recognition method was established by combining a two-channel convolutional neural network with a ring network.
The words were vectorized with GloVe, and the word vector was input into the convolutional neural network.
arXiv Detail & Related papers (2023-11-19T05:49:39Z)
- Emotion Analysis on EEG Signal Using Machine Learning and Neural Network [0.0]
The main purpose of this study is to improve emotion recognition performance using brain signals.
Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals.
arXiv Detail & Related papers (2023-07-09T09:50:34Z)
- Adversarial Attacks on the Interpretation of Neuron Activation Maximization [70.5472799454224]
Activation-maximization approaches are used to interpret and analyze trained deep-learning models.
In this work, we consider the concept of an adversary manipulating a model for the purpose of deceiving the interpretation.
arXiv Detail & Related papers (2023-06-12T19:54:33Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Interpretability for Multimodal Emotion Recognition using Concept Activation Vectors [0.0]
We address the issue of interpretability for neural networks in the context of emotion recognition using Concept Activation Vectors (CAVs).
We define human-understandable concepts specific to Emotion AI and map them to the widely-used IEMOCAP multimodal database.
We then evaluate the influence of our proposed concepts at multiple layers of the Bi-directional Contextual LSTM (BC-LSTM) network.
arXiv Detail & Related papers (2022-02-02T15:02:42Z)
- SOLVER: Scene-Object Interrelated Visual Emotion Reasoning Network [83.27291945217424]
We propose a novel Scene-Object interreLated Visual Emotion Reasoning network (SOLVER) to predict emotions from images.
To mine the emotional relationships between distinct objects, we first build up an Emotion Graph based on semantic concepts and visual features.
We also design a Scene-Object Fusion Module to integrate scenes and objects, which exploits scene features to guide the fusion process of object features with the proposed scene-based attention mechanism.
arXiv Detail & Related papers (2021-10-24T02:41:41Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first time a stimuli selection process has been introduced into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and achieves state-of-the-art results in classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z)
- Leveraging Recent Advances in Deep Learning for Audio-Visual Emotion Recognition [2.1485350418225244]
Spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis.
We propose a new deep learning-based approach for audio-visual emotion recognition.
arXiv Detail & Related papers (2021-03-16T15:49:15Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.