MDEAW: A Multimodal Dataset for Emotion Analysis through EDA and PPG
signals from wireless wearable low-cost off-the-shelf Devices
- URL: http://arxiv.org/abs/2207.06410v1
- Date: Thu, 14 Jul 2022 07:04:29 GMT
- Title: MDEAW: A Multimodal Dataset for Emotion Analysis through EDA and PPG
signals from wireless wearable low-cost off-the-shelf Devices
- Authors: Arijit Nandi, Fatos Xhafa, Laia Subirats, Santi Fort
- Abstract summary: We present MDEAW, a multimodal database consisting of Electrodermal Activity (EDA) and Photoplethysmography (PPG) signals recorded during exams for a course taught at Eurecat Academy, Sabadell, Barcelona.
Signals were captured using portable, wearable, wireless, low-cost, and off-the-shelf equipment that has the potential to allow the use of affective computing methods in everyday applications.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We present MDEAW, a multimodal database consisting of Electrodermal
Activity (EDA) and Photoplethysmography (PPG) signals recorded during exams for
a course taught at Eurecat Academy, Sabadell, Barcelona, in order to elicit
emotional reactions in students in a classroom scenario.
Signals from 10 students were recorded along with the students' self-assessment
of their affective state after each stimulus, in terms of 6 basic emotion
states. All the signals were captured using portable, wearable, wireless,
low-cost, and off-the-shelf equipment that has the potential to allow the use
of affective computing methods in everyday applications. A baseline for
student-wise affect recognition using EDA and PPG-based features, as well as
their fusion, was established through ReMECS, Fed-ReMECS, and Fed-ReMECS-U.
These results indicate the prospects of using low-cost devices for affective
state recognition applications. The proposed database will be made publicly
available in order to allow researchers to achieve a more thorough evaluation
of the suitability of these capturing devices for emotion state recognition
applications.
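The paper itself ships no code, but the kind of baseline it describes is easy to picture. The sketch below is a minimal, hypothetical stand-in: it computes simple per-window statistics from EDA and PPG signals, concatenates them (feature-level fusion), and trains an off-the-shelf classifier. Sampling rates, window length, features, and the classifier are all assumptions, not the authors' ReMECS/Fed-ReMECS implementation.

```python
# Hypothetical sketch of an EDA+PPG feature-fusion baseline; NOT the authors'
# ReMECS/Fed-ReMECS code. Rates, window length, and features are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def window_features(signal: np.ndarray, fs: int, win_s: float = 5.0) -> np.ndarray:
    """Split a 1-D signal into fixed windows and compute simple statistics."""
    step = int(fs * win_s)
    windows = [signal[i:i + step] for i in range(0, len(signal) - step + 1, step)]
    return np.array([[w.mean(), w.std(), w.min(), w.max()] for w in windows])

fs_eda, fs_ppg = 4, 64            # assumed sampling rates of the wearable
eda = np.random.randn(4 * 600)    # placeholder for a 10-minute EDA recording
ppg = np.random.randn(64 * 600)   # placeholder for the matching PPG recording

X_eda = window_features(eda, fs_eda)
X_ppg = window_features(ppg, fs_ppg)
X = np.hstack([X_eda, X_ppg])          # feature-level fusion
y = np.random.randint(0, 6, len(X))    # placeholder labels: 6 basic emotions

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print(cross_val_score(clf, X, y, cv=3).mean())
```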
Related papers
- Online Multi-level Contrastive Representation Distillation for Cross-Subject fNIRS Emotion Recognition [11.72499878247794]
We propose a novel cross-subject fNIRS emotion recognition method, called the Online Multi-level Contrastive Representation Distillation framework (OMCRD).
OMCRD is a framework designed for mutual learning among multiple lightweight student networks.
Experimental results demonstrate that OMCRD achieves state-of-the-art results in emotional perception and affective imagery tasks.
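As a rough illustration of mutual learning between two lightweight students (much simpler than OMCRD's multi-level contrastive scheme), a deep-mutual-learning-style loss might look like this sketch:

```python
# Minimal mutual-learning loss between two students: a generic stand-in,
# not OMCRD's multi-level contrastive representation distillation.
import torch
import torch.nn.functional as F

def mutual_learning_loss(logits_a, logits_b, targets, alpha=0.5):
    """Cross-entropy plus symmetric KL between the two students' predictions."""
    ce = F.cross_entropy(logits_a, targets) + F.cross_entropy(logits_b, targets)
    kl_ab = F.kl_div(F.log_softmax(logits_a, dim=1),
                     F.softmax(logits_b, dim=1).detach(), reduction="batchmean")
    kl_ba = F.kl_div(F.log_softmax(logits_b, dim=1),
                     F.softmax(logits_a, dim=1).detach(), reduction="batchmean")
    return ce + alpha * (kl_ab + kl_ba)

logits_a = torch.randn(8, 3, requires_grad=True)  # student A, 3 emotion classes
logits_b = torch.randn(8, 3, requires_grad=True)  # student B
targets = torch.randint(0, 3, (8,))
mutual_learning_loss(logits_a, logits_b, targets).backward()
```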
arXiv Detail & Related papers (2024-09-24T13:30:15Z)
- Masked Video and Body-worn IMU Autoencoder for Egocentric Action Recognition [24.217068565936117]
We present a novel method for action recognition that integrates motion data from body-worn IMUs with egocentric video.
To model the complex relations among multiple IMU devices placed across the body, we exploit their collaborative dynamics.
Experiments show our method can achieve state-of-the-art performance on multiple public datasets.
arXiv Detail & Related papers (2024-07-09T07:53:16Z)
- Joint Contrastive Learning with Feature Alignment for Cross-Corpus EEG-based Emotion Recognition [2.1645626994550664]
We propose a novel Joint Contrastive learning framework with Feature Alignment (JCFA) to address cross-corpus EEG-based emotion recognition.
In the pre-training stage, a joint domain contrastive learning strategy is introduced to characterize generalizable time-frequency representations of EEG signals.
In the fine-tuning stage, JCFA is refined in conjunction with downstream tasks, where the structural connections among brain electrodes are considered.
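A generic stand-in for the contrastive pre-training step is the NT-Xent loss over two augmented views of the same EEG segments; the sketch below assumes embeddings from some encoder and is not JCFA's actual objective:

```python
# NT-Xent (InfoNCE-style) contrastive loss over two views of N segments;
# an illustrative stand-in, not JCFA's joint domain contrastive strategy.
import torch
import torch.nn.functional as F

def nt_xent(z1, z2, temperature=0.5):
    """z1, z2: (N, D) embeddings of two views of the same N segments."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)     # (2N, D)
    sim = z @ z.t() / temperature                          # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool)
    sim = sim.masked_fill(mask, float("-inf"))             # exclude self-pairs
    # positive for row i is i+n (and i-n for the second half)
    pos = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, pos)

z1 = torch.randn(16, 64)  # e.g., encoder outputs for augmented view 1
z2 = torch.randn(16, 64)  # encoder outputs for augmented view 2
print(nt_xent(z1, z2).item())
```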
arXiv Detail & Related papers (2024-04-15T08:21:17Z)
- Real-time EEG-based Emotion Recognition Model using Principal Component Analysis and Tree-based Models for Neurohumanities [0.0]
This project proposes a solution that incorporates emotional monitoring into the learning process inside an immersive space.
A real-time emotion detection EEG-based system was developed to interpret and classify specific emotions.
This system aims to integrate emotional data into the Neurohumanities Lab interactive platform, creating a comprehensive and immersive learning environment.
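The title's PCA-plus-tree recipe maps naturally onto a scikit-learn pipeline; the sketch below uses placeholder features and labels and is only an assumption about the general shape of such a system:

```python
# Minimal PCA -> random-forest pipeline on placeholder EEG features;
# not the Neurohumanities Lab system itself.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

X = np.random.randn(200, 64)        # 200 EEG epochs x 64 assumed features
y = np.random.randint(0, 4, 200)    # placeholder emotion labels

model = make_pipeline(PCA(n_components=16),
                      RandomForestClassifier(random_state=0))
print(cross_val_score(model, X, y, cv=5).mean())
```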
arXiv Detail & Related papers (2024-01-28T20:02:13Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a late fusion of transfer-learned and fine-tuned models from speech and text modalities.
We evaluate the effectiveness of our proposed multimodal approach on the Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
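Late fusion of the two modalities can be as simple as a weighted average of each model's class probabilities; the arrays and fusion weight below are placeholders:

```python
# Toy late-fusion step over placeholder softmax outputs from separately
# fine-tuned speech and text models; the weight is an assumption.
import numpy as np

p_speech = np.array([[0.6, 0.3, 0.1]])   # e.g., speech model probabilities
p_text   = np.array([[0.2, 0.5, 0.3]])   # e.g., BERT-based model probabilities
w = 0.5                                  # assumed fusion weight
p_fused = w * p_speech + (1 - w) * p_text
print(p_fused.argmax(axis=1))            # fused emotion prediction
```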
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
- MEmoBERT: Pre-training Model with Prompt-based Learning for Multimodal Emotion Recognition [118.73025093045652]
We propose a pre-training model, MEmoBERT, for multimodal emotion recognition.
Unlike the conventional "pre-train, finetune" paradigm, we propose a prompt-based method that reformulates the downstream emotion classification task as a masked text prediction.
Our proposed MEmoBERT significantly enhances emotion recognition performance.
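A text-only toy version of the prompt idea reformulates classification as filling a masked token and scoring a small emotion vocabulary (MEmoBERT itself is multimodal and pre-trained differently); this sketch assumes the Hugging Face transformers library and a generic BERT checkpoint:

```python
# Emotion classification as masked text prediction with a tiny "verbalizer";
# an illustration of the prompt idea, not MEmoBERT. Requires `transformers`.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")
prompt = "I missed my flight and lost my luggage. I feel [MASK]."
# Restrict predictions to a small emotion vocabulary and compare scores.
for result in unmasker(prompt, targets=["angry", "happy", "sad"]):
    print(result["token_str"], round(result["score"], 4))
```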
arXiv Detail & Related papers (2021-10-27T09:57:00Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation Networks (SAKDN), to enhance action recognition in the vision-sensor modality (videos).
The SAKDN uses multiple wearable-sensors as teacher modalities and uses RGB videos as student modality.
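A simplified stand-in for the sensor-to-vision transfer is classic temperature-scaled knowledge distillation, with sensor logits as the teacher and video logits as the student; SAKDN's semantics-aware machinery is omitted:

```python
# Generic soft-label distillation step (teacher: wearable-sensor network,
# student: video network); a simplified stand-in, not SAKDN itself.
import torch
import torch.nn.functional as F

def kd_loss(student_logits, teacher_logits, targets, T=4.0, alpha=0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * T * T
    hard = F.cross_entropy(student_logits, targets)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 5, requires_grad=True)   # video (student) logits
t = torch.randn(8, 5)                       # sensor (teacher) logits
kd_loss(s, t, torch.randint(0, 5, (8,))).backward()
```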
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
- Micro-expression spotting: A new benchmark [74.69928316848866]
Micro-expressions (MEs) are brief and involuntary facial expressions that occur when people are trying to hide their true feelings or conceal their emotions.
In the computer vision field, the study of MEs can be divided into two main tasks, spotting and recognition.
This paper introduces an extension of the SMIC-E database, namely the SMIC-E-Long database, which is a new challenging benchmark for ME spotting.
arXiv Detail & Related papers (2020-07-24T09:18:41Z)
- Detecting Parkinsonian Tremor from IMU Data Collected In-The-Wild using Deep Multiple-Instance Learning [59.74684475991192]
Parkinson's Disease (PD) is a slowly evolving neurological disease that affects about 1% of the population above 60 years old.
PD symptoms include tremor, rigidity, and bradykinesia.
We present a method for automatically identifying tremorous episodes related to PD, based on IMU signals captured via a smartphone device.
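In multiple-instance terms, each recording is a bag of IMU windows with only a bag-level label; a toy MIL head that scores windows and max-pools them might look like this (not the paper's architecture):

```python
# Schematic multiple-instance learning head: a recording is a "bag" of IMU
# windows with a single bag-level label; instance scores are max-pooled.
import torch
import torch.nn as nn

class MILHead(nn.Module):
    def __init__(self, in_dim=32):
        super().__init__()
        self.score = nn.Linear(in_dim, 1)   # per-window tremor score

    def forward(self, bag):                 # bag: (num_windows, in_dim)
        inst = self.score(bag).squeeze(-1)  # (num_windows,)
        return inst.max()                   # bag-level logit via max pooling

bag = torch.randn(20, 32)                   # 20 window embeddings per recording
logit = MILHead()(bag)
print(torch.sigmoid(logit))                 # probability the bag shows tremor
```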
arXiv Detail & Related papers (2020-05-06T09:02:30Z)
- DeepBrain: Towards Personalized EEG Interaction through Attentional and Embedded LSTM Learning [20.300051894095173]
We propose DeepBrain, an end-to-end solution that enables fine-grained brain-robot interaction (BRI) through embedded learning of coarse EEG signals from low-cost devices.
Our contributions are twofold: 1) we present a stacked long short-term memory (Stacked LSTM) structure with specific pre-processing techniques to handle the time-dependency of EEG signals and their classification.
Our real-world experiments demonstrate that the proposed end-to-end solution with low cost can achieve satisfactory run-time speed, accuracy and energy-efficiency.
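A bare-bones version of the Stacked LSTM idea, with assumed layer sizes rather than DeepBrain's, is sketched below:

```python
# Bare-bones stacked LSTM classifier for multichannel EEG windows; channel
# counts, hidden size, and classes are assumptions, not DeepBrain's design.
import torch
import torch.nn as nn

class StackedLSTM(nn.Module):
    def __init__(self, n_channels=8, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_channels, hidden, num_layers=2, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):              # x: (batch, time, channels)
        out, _ = self.lstm(x)
        return self.head(out[:, -1])   # classify from the last time step

x = torch.randn(4, 128, 8)             # 4 windows, 128 samples, 8 channels
print(StackedLSTM()(x).shape)          # -> torch.Size([4, 4])
```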
arXiv Detail & Related papers (2020-02-06T03:34:08Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
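The two-stage idea, compact autoencoder features followeded by a support vector regressor, can be sketched as follows; here a small MLP stands in for the paper's deep convolutional autoencoder, and all data are placeholders:

```python
# Two-stage sketch: autoencoder features -> SVR for a continuous emotion value.
# A small MLP stands in for the paper's deep convolutional autoencoder.
import numpy as np
from sklearn.neural_network import MLPRegressor  # stand-in autoencoder
from sklearn.svm import SVR

X = np.random.randn(300, 100)          # placeholder flattened face features
ae = MLPRegressor(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
ae.fit(X, X)                           # reconstruct the input (autoencoding)

# Encode: ReLU activations of the 32-unit bottleneck layer.
H = np.maximum(0, X @ ae.coefs_[0] + ae.intercepts_[0])

y = np.random.uniform(-1, 1, 300)      # placeholder continuous valence labels
svr = SVR().fit(H, y)
print(svr.predict(H[:5]))
```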
arXiv Detail & Related papers (2020-01-31T17:47:16Z)