Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment
Analysis
- URL: http://arxiv.org/abs/2311.11237v1
- Date: Sun, 19 Nov 2023 05:49:39 GMT
- Title: Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment
Analysis
- Authors: Jiazhen Wang
- Abstract summary: A multi-modal emotion recognition method was established by combining a two-channel convolutional neural network with a recurrent network.
The words were vectorized with GloVe, and the word vectors were input into the convolutional neural network.
- Score: 0.9065034043031668
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A multi-modal emotion recognition method was established by combining
a two-channel convolutional neural network with a recurrent network. This method
can extract emotional information effectively and improve learning efficiency.
The words were vectorized with GloVe, and the word vectors were input into the
convolutional neural network. By combining an attention mechanism with a
max-pooling converter in the BiSRU channel, both local deep emotional features
and pre- and post-sequential emotional semantics are obtained. Finally, the
multiple features are fused and used to predict the polarity of emotion, thus
achieving sentiment analysis of the target. Experiments show that the emotion
analysis method based on feature fusion can effectively improve recognition
accuracy on the emotion data set and reduce learning time. The model also
exhibits a degree of generalization.
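The pipeline described in the abstract (GloVe word vectors feeding a convolutional channel in parallel with an attention-weighted bidirectional recurrent channel, with the two channels' outputs fused before polarity prediction) can be sketched as follows. This is a minimal NumPy illustration with hypothetical dimensions and random weights; a plain bidirectional RNN stands in for the BiSRU, whose internals the abstract does not specify.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): vocabulary of 50 words,
# 16-dim "GloVe-style" embeddings, sentences of length 10, 3 polarity classes.
VOCAB, EMB, SEQ, N_FILTERS, HIDDEN, N_CLASSES = 50, 16, 10, 8, 12, 3

embeddings = rng.normal(size=(VOCAB, EMB))       # stand-in for GloVe vectors

def cnn_channel(x, w):
    """1-D convolution (window 3) with ReLU, then global max pooling."""
    windows = np.stack([x[i:i + 3].ravel() for i in range(x.shape[0] - 2)])
    feats = np.maximum(windows @ w, 0.0)         # (SEQ-2, N_FILTERS)
    return feats.max(axis=0)                     # local deep features

def birnn_channel(x, w_f, w_b):
    """Bidirectional recurrent channel (plain RNN standing in for BiSRU)."""
    def run(seq, w):
        h, outs = np.zeros(HIDDEN), []
        for t in range(seq.shape[0]):
            h = np.tanh(seq[t] @ w[:EMB] + h @ w[EMB:])
            outs.append(h)
        return np.stack(outs)
    fwd, bwd = run(x, w_f), run(x[::-1], w_b)[::-1]
    return np.concatenate([fwd, bwd], axis=1)    # (SEQ, 2*HIDDEN)

def attention(h, v):
    """Score each time step, softmax the scores, return the weighted sum."""
    scores = h @ v
    a = np.exp(scores - scores.max()); a /= a.sum()
    return a @ h                                 # (2*HIDDEN,)

# Random parameters for the sketch (trained weights in a real model).
w_cnn = rng.normal(size=(3 * EMB, N_FILTERS))
w_f = rng.normal(size=(EMB + HIDDEN, HIDDEN)) * 0.1
w_b = rng.normal(size=(EMB + HIDDEN, HIDDEN)) * 0.1
v_att = rng.normal(size=2 * HIDDEN)
w_out = rng.normal(size=(N_FILTERS + 2 * HIDDEN, N_CLASSES))

tokens = rng.integers(0, VOCAB, size=SEQ)        # a toy "sentence"
x = embeddings[tokens]                           # (SEQ, EMB)

# Feature fusion: concatenate both channels, then predict emotion polarity.
fused = np.concatenate([cnn_channel(x, w_cnn),
                        attention(birnn_channel(x, w_f, w_b), v_att)])
logits = fused @ w_out
probs = np.exp(logits - logits.max()); probs /= probs.sum()
print(probs.shape)
```

In a real implementation the random matrices would be trained parameters and the embedding table would hold pretrained GloVe vectors; the sketch only shows how the two channels' outputs are fused into a single vector before the polarity prediction.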
Related papers
- Attention-based Interactive Disentangling Network for Instance-level Emotional Voice Conversion [81.1492897350032]
Emotional Voice Conversion aims to manipulate speech according to a given emotion while preserving its non-emotion components.
We propose an Attention-based Interactive diseNtangling Network (AINN) that leverages instance-wise emotional knowledge for voice conversion.
arXiv Detail & Related papers (2023-12-29T08:06:45Z)
- EmoDiarize: Speaker Diarization and Emotion Identification from Speech Signals using Convolutional Neural Networks [0.0]
This research explores the integration of deep learning techniques in speech emotion recognition.
It introduces a framework that combines a pre-existing speaker diarization pipeline with an emotion identification model built on a Convolutional Neural Network (CNN).
The proposed model yields an unweighted accuracy of 63%, demonstrating remarkable efficiency in accurately identifying emotional states within speech signals.
arXiv Detail & Related papers (2023-10-19T16:02:53Z)
- Emotion Analysis on EEG Signal Using Machine Learning and Neural Network [0.0]
The main purpose of this study is to improve emotion recognition performance using brain signals.
Various approaches to human-machine interaction technologies have been ongoing for a long time, and in recent years, researchers have had great success in automatically understanding emotion using brain signals.
arXiv Detail & Related papers (2023-07-09T09:50:34Z)
- EMERSK -- Explainable Multimodal Emotion Recognition with Situational Knowledge [0.0]
We present Explainable Multimodal Emotion Recognition with Situational Knowledge (EMERSK)
EMERSK is a general system for human emotion recognition and explanation using visual information.
Our system can handle multiple modalities, including facial expressions, posture, and gait in a flexible and modular manner.
arXiv Detail & Related papers (2023-06-14T17:52:37Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- Stimuli-Aware Visual Emotion Analysis [75.68305830514007]
We propose a stimuli-aware visual emotion analysis (VEA) method consisting of three stages, namely stimuli selection, feature extraction and emotion prediction.
To the best of our knowledge, this is the first work to introduce a stimuli selection process into VEA in an end-to-end network.
Experiments demonstrate that the proposed method consistently outperforms the state-of-the-art approaches on four public visual emotion datasets.
arXiv Detail & Related papers (2021-09-04T08:14:52Z)
- Emotional EEG Classification using Connectivity Features and Convolutional Neural Networks [81.74442855155843]
We introduce a new classification system that utilizes brain connectivity with a CNN and validate its effectiveness via the emotional video classification.
The level of concentration of the brain connectivity related to the emotional property of the target video is correlated with classification performance.
arXiv Detail & Related papers (2021-01-18T13:28:08Z)
- Exploring Deep Neural Networks and Transfer Learning for Analyzing Emotions in Tweets [2.0305676256390934]
We present an experiment on using deep learning and transfer learning techniques for emotion analysis in tweets.
We show in our analysis that the proposed models outperform the state-of-the-art in emotion classification.
arXiv Detail & Related papers (2020-12-10T23:45:53Z)
- An End-to-End Visual-Audio Attention Network for Emotion Recognition in User-Generated Videos [64.91614454412257]
We propose to recognize video emotions in an end-to-end manner based on convolutional neural networks (CNNs).
Specifically, we develop a deep Visual-Audio Attention Network (VAANet), a novel architecture that integrates spatial, channel-wise, and temporal attentions into a visual 3D CNN and temporal attentions into an audio 2D CNN.
arXiv Detail & Related papers (2020-02-12T15:33:59Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
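Several of the entries above (notably the transfer-learning paper that combines speech and BERT-based text models) rely on late fusion: each modality is encoded separately and the resulting feature vectors are concatenated before a final classifier. A minimal sketch, using hypothetical dimensions and random stand-in features rather than real model outputs:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical feature sizes: 32-dim speech embedding, 24-dim text embedding.
speech_feat = rng.normal(size=32)   # e.g. output of a speaker-recognition encoder
text_feat = rng.normal(size=24)     # e.g. pooled BERT sentence embedding

# Late fusion: concatenate the per-modality features, then classify jointly.
fused = np.concatenate([speech_feat, text_feat])   # (56,)

w = rng.normal(size=(56, 4))                       # 4 emotion classes (toy)
logits = fused @ w
probs = np.exp(logits - logits.max()); probs /= probs.sum()
pred = int(probs.argmax())
print(fused.shape, pred)
```

The design point is that the modality encoders can be pretrained and fine-tuned independently; only the small fusion classifier needs to see both modalities at once.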
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences arising from its use.