Speech-Based Emotion Recognition using Neural Networks and Information
Visualization
- URL: http://arxiv.org/abs/2010.15229v1
- Date: Wed, 28 Oct 2020 20:57:32 GMT
- Title: Speech-Based Emotion Recognition using Neural Networks and Information
Visualization
- Authors: Jumana Almahmoud and Kruthika Kikkeri
- Abstract summary: We propose a tool which enables users to take speech samples and identify a range of emotions from audio elements.
The dashboard is designed based on local therapists' needs for intuitive representations of speech data.
- Score: 1.52292571922932
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Emotion recognition is commonly employed for health assessment. However, the
typical metric for evaluation in therapy is based on patient-doctor appraisal.
This process can suffer from subjectivity, while also requiring healthcare
professionals to deal with copious amounts of information. Thus, machine
learning algorithms can be a useful tool for the classification of emotions.
While several models have been developed in this domain, there is a lack of
user-friendly representations of emotion classification systems for therapy.
We propose a tool which enables users to take speech samples and identify a
range of emotions (happy, sad, angry, surprised, neutral, calm, disgust, and
fear) from audio elements through a machine learning model. The dashboard is
designed based on local therapists' needs for intuitive representations of
speech data, in order to gain insights and informative analyses of their
sessions with their patients.
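The paper itself does not include code. As a rough illustration of the kind of pipeline such a tool might use, the sketch below classifies an utterance into the eight emotions named in the abstract via a nearest-centroid rule over hand-rolled frame features (log-energy and zero-crossing rate, computed with NumPy). All feature choices, names, and the synthetic "training" data are assumptions for illustration, not the authors' actual model.

```python
import numpy as np

EMOTIONS = ["happy", "sad", "angry", "surprised",
            "neutral", "calm", "disgust", "fear"]

def frame_features(signal, frame_len=400, hop=160):
    """Per-frame log-energy and zero-crossing rate for a mono waveform."""
    feats = []
    for start in range(0, len(signal) - frame_len + 1, hop):
        frame = signal[start:start + frame_len]
        energy = np.log(np.sum(frame ** 2) + 1e-10)
        zcr = np.mean(np.abs(np.diff(np.sign(frame)))) / 2.0
        feats.append((energy, zcr))
    return np.array(feats)

def utterance_vector(signal):
    """Summarize an utterance as mean/std of its frame features (4 dims)."""
    f = frame_features(signal)
    return np.concatenate([f.mean(axis=0), f.std(axis=0)])

def classify(vec, centroids):
    """Nearest-centroid decision over the eight emotion classes."""
    dists = {emo: np.linalg.norm(vec - c) for emo, c in centroids.items()}
    return min(dists, key=dists.get)

rng = np.random.default_rng(0)
# Stand-in "training": one synthetic utterance per emotion defines its centroid.
centroids = {emo: utterance_vector(rng.normal(scale=0.1 * (i + 1), size=16000))
             for i, emo in enumerate(EMOTIONS)}
test = rng.normal(scale=0.1, size=16000)
print(classify(utterance_vector(test), centroids))
```

A real system would replace the two toy features with a richer spectral representation and the centroid rule with a trained classifier, but the overall shape (waveform → frame features → utterance vector → label) is the same.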
Related papers
- Towards Empathetic Conversational Recommender Systems [77.53167131692]
We propose an empathetic conversational recommender (ECR) framework.
ECR contains two main modules: emotion-aware item recommendation and emotion-aligned response generation.
Our experiments on the ReDial dataset validate the efficacy of our framework in enhancing recommendation accuracy and improving user satisfaction.
arXiv Detail & Related papers (2024-08-30T15:43:07Z)
- Speech Emotion Recognition Using CNN and Its Use Case in Digital Healthcare [0.0]
The process of identifying human emotion and affective states from speech is known as speech emotion recognition (SER).
My research uses a Convolutional Neural Network (CNN) to distinguish emotions from audio recordings and label them according to a range of different emotions.
I have developed a machine learning model to identify emotions from supplied audio files.
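The abstract above does not show the CNN architecture. To make the core operation concrete, the sketch below implements a single valid-mode 1-D convolution over an MFCC-like feature sequence, followed by ReLU and global average pooling, in plain NumPy; the shapes, filter count, and random inputs are illustrative assumptions, not the paper's actual network.

```python
import numpy as np

def conv1d(x, kernels, bias):
    """Valid-mode 1-D convolution: x is (time, in_ch), kernels is (out_ch, width, in_ch)."""
    out_ch, width, _ = kernels.shape
    steps = x.shape[0] - width + 1
    out = np.empty((steps, out_ch))
    for t in range(steps):
        window = x[t:t + width]  # (width, in_ch) slice of the feature sequence
        out[t] = np.tensordot(kernels, window, axes=([1, 2], [0, 1])) + bias
    return out

def relu(x):
    return np.maximum(x, 0.0)

rng = np.random.default_rng(1)
mfcc_like = rng.normal(size=(100, 13))       # 100 frames of 13 stand-in coefficients
kernels = rng.normal(size=(8, 5, 13)) * 0.1  # 8 filters of width 5
bias = np.zeros(8)
hidden = relu(conv1d(mfcc_like, kernels, bias))  # (96, 8) feature map
embedding = hidden.mean(axis=0)                  # global average pooling -> (8,)
print(embedding.shape)
```

In a full SER model, several such convolution/pooling stages would feed a dense softmax layer over the emotion classes; this sketch only shows how one layer turns a variable-length feature sequence into a fixed-size embedding.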
arXiv Detail & Related papers (2024-06-15T21:33:03Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Emotion Rendering for Conversational Speech Synthesis with Heterogeneous Graph-Based Context Modeling [50.99252242917458]
Conversational Speech Synthesis (CSS) aims to accurately express an utterance with the appropriate prosody and emotional inflection within a conversational setting.
To address the issue of data scarcity, we meticulously create emotional labels in terms of category and intensity.
Our model outperforms the baseline models in understanding and rendering emotions.
arXiv Detail & Related papers (2023-12-19T08:47:50Z)
- An Approach for Improving Automatic Mouth Emotion Recognition [1.5293427903448025]
The study proposes and tests a technique for automated emotion recognition through mouth detection via Convolutional Neural Networks (CNNs).
The technique is intended to support people whose health disorders cause communication difficulties.
arXiv Detail & Related papers (2022-12-12T16:17:21Z)
- Accurate Emotion Strength Assessment for Seen and Unseen Speech Based on Data-Driven Deep Learning [70.30713251031052]
We propose a data-driven deep learning model, i.e. StrengthNet, to improve the generalization of emotion strength assessment for seen and unseen speech.
Experiments show that the predicted emotion strength of the proposed StrengthNet is highly correlated with ground truth scores for both seen and unseen speech.
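The "highly correlated" claim above refers to agreement between predicted and reference strength scores, typically measured with Pearson's correlation coefficient. A minimal sketch of that evaluation metric, with purely illustrative score values (not the paper's data):

```python
import numpy as np

def pearson_r(pred, truth):
    """Pearson correlation coefficient between predicted and reference scores."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    pc, tc = pred - pred.mean(), truth - truth.mean()
    return float(np.sum(pc * tc) / np.sqrt(np.sum(pc ** 2) * np.sum(tc ** 2)))

truth = [0.2, 0.5, 0.9, 0.4, 0.7]      # illustrative ground-truth strengths
pred = [0.25, 0.45, 0.85, 0.5, 0.65]   # illustrative model outputs
print(round(pearson_r(pred, truth), 3))  # close to 1.0 means strong agreement
```

A value near 1.0 indicates the model ranks utterances by emotion strength almost exactly as the human references do.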
arXiv Detail & Related papers (2022-06-15T01:25:32Z)
- Emotion Recognition for Healthcare Surveillance Systems Using Neural Networks: A Survey [8.31246680772592]
We present recent research in the field of using neural networks to recognize emotions.
We focus on studying emotion recognition from speech, facial expressions, and audio-visual input.
These three emotion recognition techniques can be used as a surveillance system in healthcare centers to monitor patients.
arXiv Detail & Related papers (2021-07-13T11:17:00Z)
- Emotion Recognition of the Singing Voice: Toward a Real-Time Analysis Tool for Singers [0.0]
Current computational-emotion research has focused on applying acoustic properties to analyze how emotions are perceived mathematically.
This paper seeks to reflect and expand upon the findings of related research and present a stepping-stone toward this end goal.
arXiv Detail & Related papers (2021-05-01T05:47:15Z)
- Pose-based Body Language Recognition for Emotion and Psychiatric Symptom Interpretation [75.3147962600095]
We propose an automated framework for body language based emotion recognition starting from regular RGB videos.
In collaboration with psychologists, we extend the framework for psychiatric symptom prediction.
Because a specific application domain of the proposed framework may only supply a limited amount of data, the framework is designed to work on a small training set.
arXiv Detail & Related papers (2020-10-30T18:45:16Z)
- Emotion Recognition System from Speech and Visual Information based on Convolutional Neural Networks [6.676572642463495]
We propose a system that is able to recognize emotions with a high accuracy rate and in real time.
To increase the accuracy of the recognition system, we also analyze the speech data and fuse the information coming from both sources.
arXiv Detail & Related papers (2020-02-29T22:09:46Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.