Emotion Recognition In Persian Speech Using Deep Neural Networks
- URL: http://arxiv.org/abs/2204.13601v1
- Date: Thu, 28 Apr 2022 16:02:05 GMT
- Title: Emotion Recognition In Persian Speech Using Deep Neural Networks
- Authors: Ali Yazdani, Hossein Simchi, Yaser Shekofteh
- Abstract summary: Speech Emotion Recognition (SER) is of great importance in Human-Computer Interaction (HCI)
In this article, we examine various deep learning techniques on the SheEMO dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Speech Emotion Recognition (SER) is of great importance in Human-Computer
Interaction (HCI), as it provides a deeper understanding of the situation and
results in better interaction. In recent years, various machine learning and
deep learning algorithms have been developed to improve SER techniques.
Recognition of emotions depends on the type of expression that varies between
different languages. In this article, to further study this important factor in
Farsi, we examine various deep learning techniques on the SheEMO dataset. Using
signal features in low- and high-level descriptions and different deep networks
and machine learning techniques, an Unweighted Average Recall (UAR) of 65.20%
is achieved with an accuracy of 78.29%.
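As a concrete illustration of the reported metrics: UAR is recall averaged over classes with each class weighted equally, which is why it can differ noticeably from plain accuracy on an imbalanced emotion dataset such as SheEMO. A minimal sketch with toy labels (not the paper's data or model):

```python
from collections import defaultdict

def uar_and_accuracy(y_true, y_pred):
    """Unweighted Average Recall (macro recall) and plain accuracy."""
    hits = defaultdict(int)    # correct predictions per class
    totals = defaultdict(int)  # true samples per class
    for t, p in zip(y_true, y_pred):
        totals[t] += 1
        if t == p:
            hits[t] += 1
    # Each class contributes equally to UAR, regardless of its size.
    uar = sum(hits[c] / totals[c] for c in totals) / len(totals)
    acc = sum(hits.values()) / len(y_true)
    return uar, acc

# Toy labels: the rare class 2 is recognized perfectly, so UAR
# ends up higher than the raw accuracy here.
uar, acc = uar_and_accuracy([0, 0, 0, 1, 1, 2], [0, 0, 1, 1, 0, 2])
```

With these labels, class recalls are 2/3, 1/2, and 1/1, giving a UAR of about 0.722 against an accuracy of about 0.667.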
Related papers
- Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare [0.0]
The process of identifying human emotion and affective states from speech is known as speech emotion recognition (SER).
My research uses a Convolutional Neural Network (CNN) to distinguish emotions in audio recordings and label them according to a range of different emotions.
I developed a machine learning model that identifies emotions from supplied audio files.
arXiv Detail & Related papers (2024-06-15T21:33:03Z) - Speech and Text-Based Emotion Recognizer [0.9168634432094885]
We build a balanced corpus from publicly available datasets for speech emotion recognition.
Our best system, a multi-modal speech- and text-based model, achieves a combined UA (Unweighted Accuracy) + WA (Weighted Accuracy) score of 157.57, compared to the baseline's 119.66.
arXiv Detail & Related papers (2023-12-10T05:17:39Z) - Implementation of AI Deep Learning Algorithm For Multi-Modal Sentiment
Analysis [0.9065034043031668]
A multi-modal emotion recognition method was established by combining a two-channel convolutional neural network with a ring network.
The words were vectorized with GloVe, and the resulting word vectors were fed into the convolutional neural network.
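The GloVe-then-CNN pipeline described above can be sketched as follows. The embedding table, vector dimension, and filter here are toy stand-ins, not the paper's configuration; real GloVe vectors are loaded from a pretrained file such as glove.6B.100d.txt:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-in for a pretrained GloVe lookup table.
dim = 4
glove = {w: rng.normal(size=dim) for w in ["this", "film", "was", "great"]}

def embed(tokens):
    """Stack per-word GloVe vectors into a (seq_len, dim) matrix."""
    return np.stack([glove[t] for t in tokens])

def conv1d_max(x, kernel):
    """Valid 1D convolution over the sequence axis, then max-pooling."""
    k = kernel.shape[0]
    windows = [np.sum(x[i:i + k] * kernel) for i in range(len(x) - k + 1)]
    return max(windows)

sent = embed(["this", "film", "was", "great"])  # shape (4, 4)
kernel = rng.normal(size=(2, dim))              # one width-2 filter
feature = conv1d_max(sent, kernel)              # one pooled scalar feature
```

A real model would apply many such filters and pass the pooled features to a classifier; this sketch only shows the embed-convolve-pool path.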
arXiv Detail & Related papers (2023-11-19T05:49:39Z) - Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on
Data-Driven Deep Learning [70.30713251031052]
We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
arXiv Detail & Related papers (2022-06-15T01:25:32Z) - Deep Learning for Visual Speech Analysis: A Survey [54.53032361204449]
This paper presents a review of recent progress in deep learning methods on visual speech analysis.
We cover different aspects of visual speech, including fundamental problems, challenges, benchmark datasets, a taxonomy of existing methods, and state-of-the-art performance.
arXiv Detail & Related papers (2022-05-22T14:44:53Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
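Late fusion of the kind described above typically combines per-modality posteriors rather than raw features. A minimal sketch with made-up logits (the actual systems are transfer-learned speech and BERT-based text encoders, not shown here):

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a 1D logit vector."""
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical per-modality logits for 4 emotion classes
# (angry, happy, neutral, sad); real values would come from
# the fine-tuned speech and text models.
speech_logits = np.array([2.0, 0.5, 0.1, -1.0])
text_logits = np.array([0.3, 1.8, 0.2, -0.5])

# Late fusion: average modality-level posteriors, not raw features.
fused = 0.5 * softmax(speech_logits) + 0.5 * softmax(text_logits)
pred = int(np.argmax(fused))
```

Weighting the two modalities equally is just one choice; the fusion weights can also be learned on a validation set.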
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - Emotion Recognition from Multiple Modalities: Fundamentals and
Methodologies [106.62835060095532]
We discuss several key aspects of multi-modal emotion recognition (MER)
We begin with a brief introduction on widely used emotion representation models and affective modalities.
We then summarize existing emotion annotation strategies and corresponding computational tasks.
Finally, we outline several real-world applications and discuss some future directions.
arXiv Detail & Related papers (2021-08-18T21:55:20Z) - Leveraging Recent Advances in Deep Learning for Audio-Visual Emotion
Recognition [2.1485350418225244]
Spontaneous multi-modal emotion recognition has been extensively studied for human behavior analysis.
We propose a new deep learning-based approach for audio-visual emotion recognition.
arXiv Detail & Related papers (2021-03-16T15:49:15Z) - Target Guided Emotion Aware Chat Machine [58.8346820846765]
The consistency of a response to a given post at the semantic and emotional levels is essential for a dialogue system to deliver human-like interactions.
This article proposes a unified end-to-end neural architecture capable of simultaneously encoding the semantics and the emotions in a post.
arXiv Detail & Related papers (2020-11-15T01:55:37Z) - Emotion Recognition in Audio and Video Using Deep Neural Networks [9.694548197876868]
With the advancement of deep learning technology, there has been significant improvement in speech recognition.
Recognizing emotion from speech is an important aspect, and with deep learning technology, emotion recognition has improved in both accuracy and latency.
In this work, we explore different neural networks to improve the accuracy of emotion recognition.
arXiv Detail & Related papers (2020-06-15T04:50:18Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.