Real-time EEG-based Emotion Recognition Model using Principal Component
Analysis and Tree-based Models for Neurohumanities
- URL: http://arxiv.org/abs/2401.15743v1
- Date: Sun, 28 Jan 2024 20:02:13 GMT
- Authors: Miguel A. Blanco-Rios, Milton O. Candela-Leal, Cecilia Orozco-Romo,
Paulina Remis-Serna, Carol S. Velez-Saboya, Jorge De-J. Lozoya-Santos, Manuel
Cebral-Loureda, Mauricio A. Ramirez-Moreno
- Abstract summary: This project proposes a solution by incorporating emotional monitoring during the learning process of context inside an immersive space.
A real-time emotion detection EEG-based system was developed to interpret and classify specific emotions.
This system aims to integrate emotional data into the Neurohumanities Lab interactive platform, creating a comprehensive and immersive learning environment.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Within the field of Humanities, there is a recognized need for educational
innovation, as there are currently no reported tools available that enable
individuals to interact with their environment to create an enhanced learning
experience in the humanities (e.g., immersive spaces). This project proposes a
solution to address this gap by integrating technology and promoting the
development of teaching methodologies in the humanities, specifically by
incorporating emotional monitoring during the learning process of humanistic
context inside an immersive space. In order to achieve this goal, a real-time
emotion detection EEG-based system was developed to interpret and classify
specific emotions. These emotions aligned with the early proposal by Descartes
(Passions), including admiration, love, hate, desire, joy, and sadness. This
system aims to integrate emotional data into the Neurohumanities Lab
interactive platform, creating a comprehensive and immersive learning
environment. This work developed a machine-learning (ML), real-time emotion
detection model that provided Valence, Arousal, and Dominance (VAD) estimations
every 5 seconds. Using Principal Component Analysis (PCA), Power Spectral
Density (PSD), Random Forest (RF), and Extra-Trees, the best 8 channels and
their respective best band powers were extracted; furthermore, multiple models
were evaluated using shift-based data division and cross-validation. After
assessing their performance, Extra-Trees achieved a general accuracy of 96%,
higher than that reported in the literature (88% accuracy). The proposed model
provided real-time predictions of the VAD variables and was adapted to classify
Descartes' six main passions. Moreover, with the VAD values obtained, more than
15 emotions can be classified (as reported in the VAD emotion mapping),
extending the range of this application.
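The pipeline described above can be sketched in code: compute PSD band powers per channel over each 5-second window, regress VAD with an Extra-Trees ensemble, and map the predicted VAD point to the nearest of Descartes' six passions. This is a minimal illustration, not the authors' implementation; the sampling rate, frequency bands, and VAD centroids below are illustrative assumptions.

```python
# Hedged sketch of an EEG -> band-power -> Extra-Trees -> VAD -> passion pipeline.
# All numeric values (FS, BANDS, PASSIONS centroids) are assumptions for illustration.
import numpy as np
from scipy.signal import welch
from sklearn.ensemble import ExtraTreesRegressor

FS = 128  # assumed EEG sampling rate (Hz)
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(window):
    """window: (n_channels, n_samples) EEG segment -> flat band-power feature vector."""
    f, pxx = welch(window, fs=FS, nperseg=FS, axis=-1)
    feats = []
    for lo, hi in BANDS.values():
        mask = (f >= lo) & (f < hi)
        feats.append(pxx[:, mask].sum(axis=-1))  # integrated power per channel
    return np.concatenate(feats)

# Hypothetical VAD centroids for the six passions (not taken from the paper).
PASSIONS = {
    "admiration": (0.6, 0.5, 0.4), "love": (0.9, 0.6, 0.5),
    "hate": (0.1, 0.8, 0.7), "desire": (0.7, 0.7, 0.6),
    "joy": (0.9, 0.8, 0.6), "sadness": (0.1, 0.2, 0.2),
}

def classify_passion(vad):
    """Nearest-centroid mapping from a (valence, arousal, dominance) point."""
    names = list(PASSIONS)
    cents = np.array([PASSIONS[n] for n in names])
    return names[np.argmin(np.linalg.norm(cents - np.asarray(vad), axis=1))]

# Synthetic demo: 50 five-second windows of 8-channel EEG with fake VAD labels.
rng = np.random.default_rng(0)
X = np.stack([band_powers(rng.standard_normal((8, 5 * FS))) for _ in range(50)])
y = rng.uniform(0, 1, size=(50, 3))  # fake (valence, arousal, dominance) targets
model = ExtraTreesRegressor(n_estimators=100, random_state=0).fit(X, y)
vad_pred = model.predict(X[:1])[0]  # one VAD estimate per 5-second window
print(classify_passion(vad_pred))
```

Extra-Trees handles the three-dimensional VAD target natively (multi-output regression), which is one reason tree ensembles are convenient here; the nearest-centroid step shows how a continuous VAD estimate can be extended to more than the six passions simply by adding centroids to the mapping table.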
Related papers
- MEMO-Bench: A Multiple Benchmark for Text-to-Image and Multimodal Large Language Models on Human Emotion Analysis [53.012111671763776]
This study introduces MEMO-Bench, a comprehensive benchmark consisting of 7,145 portraits, each depicting one of six different emotions.
Results demonstrate that existing T2I models are more effective at generating positive emotions than negative ones.
Although MLLMs show a certain degree of effectiveness in distinguishing and recognizing human emotions, they fall short of human-level accuracy.
arXiv Detail & Related papers (2024-11-18T02:09:48Z) - Emotion Detection through Body Gesture and Face [0.0]
The project addresses the challenge of emotion recognition by focusing on non-facial cues, specifically hand and body gestures.
Traditional emotion recognition systems mainly rely on facial expression analysis and often ignore the rich emotional information conveyed through body language.
The project aims to contribute to the field of affective computing by enhancing the ability of machines to interpret and respond to human emotions in a more comprehensive and nuanced way.
arXiv Detail & Related papers (2024-07-13T15:15:50Z) - Generative Technology for Human Emotion Recognition: A Scope Review [11.578408396744237]
This survey aims to bridge the gaps in the existing literature by conducting a comprehensive analysis of over 320 research papers until June 2024.
It will introduce the mathematical principles of different generative models and the commonly used datasets.
It will provide an in-depth analysis of how generative techniques address emotion recognition based on different modalities.
arXiv Detail & Related papers (2024-07-04T05:22:55Z) - Emotion Recognition from the perspective of Activity Recognition [0.0]
Appraising human emotional states, behaviors, and reactions displayed in real-world settings can be accomplished using latent continuous dimensions.
For emotion recognition systems to be deployed and integrated into real-world mobile and computing devices, we need to consider data collected in the real world.
We propose a novel three-stream end-to-end deep learning regression pipeline with an attention mechanism.
arXiv Detail & Related papers (2024-03-24T18:53:57Z) - A Hierarchical Regression Chain Framework for Affective Vocal Burst
Recognition [72.36055502078193]
We propose a hierarchical framework, based on chain regression models, for affective recognition from vocal bursts.
To address the challenge of data sparsity, we also use self-supervised learning (SSL) representations with layer-wise and temporal aggregation modules.
The proposed systems participated in the ACII Affective Vocal Burst (A-VB) Challenge 2022 and ranked first in the "TWO" and "CULTURE" tasks.
arXiv Detail & Related papers (2023-03-14T16:08:45Z) - Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the interactive emotional dyadic motion capture dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z) - Enhancing Cognitive Models of Emotions with Representation Learning [58.2386408470585]
We present a novel deep learning-based framework to generate embedding representations of fine-grained emotions.
Our framework integrates a contextualized embedding encoder with a multi-head probing model.
Our model is evaluated on the Empathetic Dialogue dataset and shows the state-of-the-art result for classifying 32 emotions.
arXiv Detail & Related papers (2021-04-20T16:55:15Z) - Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and demonstrated to be suitable tools to address such a task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Cross-individual Recognition of Emotions by a Dynamic Entropy based on
Pattern Learning with EEG features [2.863100352151122]
We propose a deep-learning framework denoted as a dynamic entropy-based pattern learning (DEPL) to abstract informative indicators pertaining to the neurophysiological features among multiple individuals.
DEPL enhanced the capability of representations generated by a deep convolutional neural network by modelling the interdependencies between the cortical locations of dynamical entropy based features.
arXiv Detail & Related papers (2020-09-26T07:22:07Z) - Meta Transfer Learning for Emotion Recognition [42.61707533351803]
We propose a PathNet-based transfer learning method that is able to transfer emotional knowledge learned from one visual/audio emotion domain to another visual/audio emotion domain.
Our proposed system is capable of improving the performance of emotion recognition, making its performance substantially superior to the recent proposed fine-tuning/pre-trained models based transfer learning methods.
arXiv Detail & Related papers (2020-06-23T00:25:28Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.