EmoSens: Emotion Recognition based on Sensor data analysis using LightGBM
- URL: http://arxiv.org/abs/2207.14640v1
- Date: Tue, 12 Jul 2022 13:52:32 GMT
- Title: EmoSens: Emotion Recognition based on Sensor data analysis using LightGBM
- Authors: Gayathri S, Akshat Anand, Astha Vijayvargiya, Pushpalatha M, Vaishnavi Moorthy, Sumit Kumar, Harichandana B S S
- Abstract summary: The study examines the performance of various supervised learning models such as Decision Trees, Random Forests, XGBoost, and LightGBM on the dataset.
With our proposed model, we obtained a high recognition rate of 92.5% using XGBoost and LightGBM for 9 different emotion classes.
- Score: 1.6197357532363172
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Smart wearables have become an integral part of our day-to-day
lives. From recording ECG signals to analysing body fat composition, smart
wearables can do it all. These devices encompass various sensors that can be
employed to derive meaningful information about the user's physical and
psychological condition. Our approach focuses on employing such sensors to
identify variations in a user's mood at a given instant through supervised
machine learning techniques. The study examines the performance of various
supervised learning models, such as Decision Trees, Random Forests, XGBoost,
and LightGBM, on the dataset. With our proposed model, we obtained a high
recognition rate of 92.5% using XGBoost and LightGBM for 9 different emotion
classes. Building on this, we aim to improve and suggest methods to aid
emotion recognition for better mental health analysis and mood monitoring.
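The paper does not include code here; as a rough illustration of the pipeline
the abstract describes, below is a minimal sketch of 9-class classification
with LightGBM's scikit-learn API. The feature matrix, labels, and
hyperparameters are placeholders, not the authors' actual sensor features or
settings.

    # Minimal sketch (not the authors' code): 9-class emotion recognition
    # from wearable-sensor features using LightGBM's scikit-learn wrapper.
    import numpy as np
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1000, 24))    # placeholder features (e.g., HR stats)
    y = rng.integers(0, 9, size=1000)  # placeholder labels, 9 emotion classes

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
    model = LGBMClassifier(objective="multiclass", n_estimators=200,
                           learning_rate=0.05)
    model.fit(X_tr, y_tr)
    print("accuracy:", accuracy_score(y_te, model.predict(X_te)))

Swapping LGBMClassifier for xgboost.XGBClassifier gives the XGBoost variant
the abstract also reports.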
Related papers
- SensEmo: Enabling Affective Learning through Real-time Emotion Recognition with Smartwatches [3.7303587372123315]
SensEmo is a smartwatch-based system designed for affective learning.
SensEmo recognizes student emotion with an average accuracy of 88.9%.
SensEmo helps students achieve better online learning outcomes.
arXiv Detail & Related papers (2024-07-13T15:10:58Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Automatic Sensor-free Affect Detection: A Systematic Literature Review [0.0]
This paper provides a comprehensive literature review on sensor-free affect detection.
Despite the field's evident maturity, demonstrated by the consistent performance of the models, there is ample scope for future research.
There is also a need to refine model development practices and methods.
arXiv Detail & Related papers (2023-10-11T13:24:27Z)
- WEARS: Wearable Emotion AI with Real-time Sensor data [0.8740570557632509]
We propose a system to predict user emotion using smartwatch sensors.
We design a framework to collect ground truth in real-time utilizing a mix of English and regional language-based videos.
We also conducted an ablation study to understand the impact of features such
as heart-rate, accelerometer, and gyroscope sensor data on mood (a minimal
ablation sketch follows this entry).
arXiv Detail & Related papers (2023-08-22T11:03:00Z)
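As flagged in the WEARS entry above, a feature ablation can be run by dropping
one sensor group at a time and comparing cross-validated accuracy. This is a
hypothetical sketch: the feature groups, column indices, synthetic data, and
classifier are all assumptions for illustration, not the paper's setup.

    # Hypothetical leave-one-group-out ablation over sensor feature groups.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    groups = {"heart_rate": [0, 1],        # made-up column assignments
              "accelerometer": [2, 3, 4],
              "gyroscope": [5, 6, 7]}
    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 8))          # synthetic stand-in features
    y = rng.integers(0, 2, size=500)       # synthetic binary mood label

    all_cols = sorted(c for cols in groups.values() for c in cols)
    clf = RandomForestClassifier(random_state=0)
    base = cross_val_score(clf, X[:, all_cols], y, cv=5).mean()
    print(f"all features: {base:.3f}")
    for name, cols in groups.items():
        keep = [c for c in all_cols if c not in cols]
        acc = cross_val_score(clf, X[:, keep], y, cv=5).mean()
        print(f"without {name}: {acc:.3f}")  # a drop means the group mattered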
- Emotion Analysis on EEG Signal Using Machine Learning and Neural Network [0.0]
The main purpose of this study is to improve emotion recognition performance
using brain signals.
Research on human-machine interaction technologies has been ongoing for a long
time, and in recent years researchers have had great success in automatically
understanding emotion from brain signals.
arXiv Detail & Related papers (2023-07-09T09:50:34Z)
- A Real-time Human Pose Estimation Approach for Optimal Sensor Placement in
Sensor-based Human Activity Recognition [63.26015736148707]
This paper introduces a novel methodology to resolve the issue of optimal sensor placement for Human Activity Recognition.
The derived skeleton data provides a unique strategy for identifying the optimal sensor location.
Our findings indicate that the vision-based method for sensor placement offers comparable results to the conventional deep learning approach.
arXiv Detail & Related papers (2023-07-06T10:38:14Z)
- Multimodal Emotion Recognition using Transfer Learning from Speaker
Recognition and BERT-based models [53.31917090073727]
We propose a neural network-based emotion recognition framework that uses a
late fusion of transfer-learned and fine-tuned models from the speech and text
modalities (a generic fusion sketch follows this entry).
We evaluate the effectiveness of our proposed multimodal approach on the
Interactive Emotional Dyadic Motion Capture (IEMOCAP) dataset.
arXiv Detail & Related papers (2022-02-16T00:23:42Z)
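As flagged in the entry above, late fusion combines the per-class outputs of
independently trained models. A minimal sketch, assuming each model emits
class-probability vectors; the simple weighted average below is one common
late-fusion rule, not necessarily the paper's exact combination.

    # Late fusion: combine class probabilities from two modality models.
    import numpy as np

    def late_fusion(p_speech, p_text, w=0.5):
        """Weighted average of two (n_samples, n_classes) probability arrays."""
        fused = w * p_speech + (1.0 - w) * p_text
        return fused / fused.sum(axis=1, keepdims=True)  # renormalize

    p_speech = np.array([[0.7, 0.2, 0.1]])  # placeholder speech-model output
    p_text = np.array([[0.4, 0.5, 0.1]])    # placeholder text-model output
    print(late_fusion(p_speech, p_text).argmax(axis=1))  # fused prediction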
- EEGminer: Discovering Interpretable Features of Brain Activity with
Learnable Filters [72.19032452642728]
We propose a novel differentiable EEG decoding pipeline consisting of
learnable filters and a pre-determined feature extraction module (a minimal
sketch follows this entry).
We demonstrate the utility of our model towards emotion recognition from EEG signals on the SEED dataset and on a new EEG dataset of unprecedented size.
The discovered features align with previous neuroscience studies and offer new insights, such as marked differences in the functional connectivity profile between left and right temporal areas during music listening.
arXiv Detail & Related papers (2021-10-19T14:22:04Z)
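As flagged in the EEGminer entry above, the idea of learnable filters followed
by a fixed feature extractor can be sketched with a plain Conv1d standing in
for the filter bank and log-variance as the fixed feature. EEGminer
parameterizes its filters differently; the sizes below (62 channels and 3
classes, as in SEED) are assumptions.

    # Illustrative stand-in, not EEGminer: learnable temporal filters
    # followed by a fixed feature extractor and a linear classifier.
    import torch
    import torch.nn as nn

    class LearnableFilterDecoder(nn.Module):
        def __init__(self, n_channels=62, n_filters=8, kernel=65, n_classes=3):
            super().__init__()
            self.filters = nn.Conv1d(n_channels, n_filters, kernel,
                                     padding=kernel // 2)  # learnable filters
            self.classify = nn.Linear(n_filters, n_classes)

        def forward(self, x):                       # x: (batch, channels, time)
            z = self.filters(x)                     # learnable filtering
            feat = torch.log(z.var(dim=-1) + 1e-6)  # fixed feature: log-variance
            return self.classify(feat)

    model = LearnableFilterDecoder()
    logits = model(torch.randn(4, 62, 1000))  # 4 trials of 62-channel EEG
    print(logits.shape)                       # torch.Size([4, 3])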
- SensiX: A Platform for Collaborative Machine Learning on the Edge [69.1412199244903]
We present SensiX, a personal edge platform that sits between sensor data and
sensing models.
We demonstrate its efficacy in developing motion and audio-based multi-device sensing systems.
Our evaluation shows that SensiX offers a 7-13% increase in overall accuracy
and up to a 30% increase across different environment dynamics, at the expense
of a 3 mW power overhead.
arXiv Detail & Related papers (2020-12-04T23:06:56Z)
- Semantics-aware Adaptive Knowledge Distillation for Sensor-to-Vision Action
Recognition [131.6328804788164]
We propose a framework, named Semantics-aware Adaptive Knowledge Distillation
Networks (SAKDN), to enhance action recognition in the vision-sensor modality
(videos).
SAKDN uses multiple wearable sensors as teacher modalities and RGB videos as
the student modality (a generic distillation-loss sketch follows this entry).
arXiv Detail & Related papers (2020-09-01T03:38:31Z)
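As flagged in the SAKDN entry above, the core of sensor-to-vision distillation
is training a student (video) model to match a teacher's (sensor) softened
predictions alongside the ground-truth labels. Below is a generic
distillation loss, not SAKDN's full semantics-aware objective; the temperature
and weighting are assumptions.

    # Generic knowledge-distillation loss: soft teacher targets + hard labels.
    import torch
    import torch.nn.functional as F

    def distillation_loss(student_logits, teacher_logits, labels,
                          T=4.0, alpha=0.7):
        soft = F.kl_div(
            F.log_softmax(student_logits / T, dim=1),
            F.softmax(teacher_logits / T, dim=1),
            reduction="batchmean",
        ) * (T * T)                       # rescale to keep gradient magnitude
        hard = F.cross_entropy(student_logits, labels)
        return alpha * soft + (1 - alpha) * hard

    s = torch.randn(8, 10)                 # student (video) logits
    t = torch.randn(8, 10)                 # teacher (sensor) logits
    y = torch.randint(0, 10, (8,))         # ground-truth action labels
    print(distillation_loss(s, t, y).item())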
This list is automatically generated from the titles and abstracts of the papers on this site.