On-Device Transfer Learning for Personalising Psychological Stress
Modelling using a Convolutional Neural Network
- URL: http://arxiv.org/abs/2004.01603v1
- Date: Fri, 3 Apr 2020 14:48:36 GMT
- Title: On-Device Transfer Learning for Personalising Psychological Stress
Modelling using a Convolutional Neural Network
- Authors: Kieran Woodward, Eiman Kanjo, David J. Brown and T.M. McGinnity
- Abstract summary: Stress is a growing concern in modern society, adversely impacting the wider population more than ever before.
The accurate inference of stress may enable personalised interventions.
We propose the development of a personalised, cross-domain 1D CNN, utilising transfer learning from an initial base model trained using data from 20 participants completing a controlled stressor experiment.
- Score: 4.40450723619303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Stress is a growing concern in modern society, adversely impacting the wider
population more than ever before. The accurate inference of stress may enable
personalised interventions. However, individual differences between people
limit the generalisability of machine learning models for inferring emotions,
as people's physiology varies widely when experiencing the same emotions. In
addition, it is time-consuming and extremely challenging to collect large
datasets of individuals' emotions, as this relies on users labelling sensor
data in real time for extended periods. We propose the development of a
personalised, cross-domain 1D CNN, utilising transfer learning from an initial
base model trained using data from 20 participants completing a controlled
stressor experiment. By utilising physiological sensors (HR, HRV, EDA)
embedded within edge computing interfaces that additionally incorporate a
labelling technique, it is possible to collect a small real-world personal
dataset that can be used for on-device transfer learning to improve model
personalisation and cross-domain performance.
Related papers
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - WEARS: Wearable Emotion AI with Real-time Sensor data [0.8740570557632509]
We propose a system to predict user emotion using smartwatch sensors.
We design a framework to collect ground truth in real-time utilizing a mix of English and regional language-based videos.
We also conducted an ablation study to understand the impact of features, including Heart Rate, Accelerometer, and Gyroscope sensor data, on mood.
arXiv Detail & Related papers (2023-08-22T11:03:00Z) - A Novel Loss Function Utilizing Wasserstein Distance to Reduce
Subject-Dependent Noise for Generalizable Models in Affective Computing [0.4818210066519976]
Emotions are an essential part of human behavior that can impact thinking, decision-making, and communication skills.
The ability to accurately monitor and identify emotions can be useful in many human-centered applications such as behavioral training, tracking emotional well-being, and development of human-computer interfaces.
arXiv Detail & Related papers (2023-08-17T01:15:26Z) - Personalization of Stress Mobile Sensing using Self-Supervised Learning [1.7598252755538808]
Stress is widely recognized as a major contributor to a variety of health issues.
Real-time stress prediction can enable digital interventions to immediately react at the onset of stress, helping to avoid many psychological and physiological symptoms such as heart rhythm irregularities.
However, major challenges with the prediction of stress using machine learning include the subjectivity and sparseness of the labels, a large feature space, relatively few labels, and a complex nonlinear and subjective relationship between the features and outcomes.
arXiv Detail & Related papers (2023-08-04T22:26:33Z) - Data-driven emotional body language generation for social robotics [58.88028813371423]
In social robotics, endowing humanoid robots with the ability to generate bodily expressions of affect can improve human-robot interaction and collaboration.
We implement a deep learning data-driven framework that learns from a few hand-designed robotic bodily expressions.
The evaluation study found that the anthropomorphism and animacy of the generated expressions are not perceived differently from the hand-designed ones.
arXiv Detail & Related papers (2022-05-02T09:21:39Z) - The world seems different in a social context: a neural network analysis
of human experimental data [57.729312306803955]
We show that it is possible to replicate human behavioral data in both individual and social task settings by modifying the precision of prior and sensory signals.
An analysis of the neural activation traces of the trained networks provides evidence that information is coded in fundamentally different ways in the network in the individual and in the social conditions.
arXiv Detail & Related papers (2022-03-03T17:19:12Z) - User profile-driven large-scale multi-agent learning from demonstration
in federated human-robot collaborative environments [5.218882272051637]
This paper introduces a novel user profile formulation for providing a fine-grained representation of the exhibited human behavior.
The overall designed scheme enables both short- and long-term analysis/interpretation of the human behavior.
arXiv Detail & Related papers (2021-03-30T15:33:21Z) - Cognitive architecture aided by working-memory for self-supervised
multi-modal humans recognition [54.749127627191655]
The ability to recognize human partners is an important social skill to build personalized and long-term human-robot interactions.
Deep learning networks have achieved state-of-the-art results and have been shown to be suitable tools for addressing this task.
One solution is to make robots learn from their first-hand sensory data with self-supervision.
arXiv Detail & Related papers (2021-03-16T13:50:24Z) - Combining Deep Transfer Learning with Signal-image Encoding for
Multi-Modal Mental Wellbeing Classification [2.513785998932353]
This paper proposes a framework to address the limitations of performing emotional state recognition across multiple multimodal datasets.
We show that model performance when inferring real-world wellbeing rated on a 5-point Likert scale can be enhanced using our framework.
arXiv Detail & Related papers (2020-11-20T13:37:23Z) - Continuous Emotion Recognition with Spatiotemporal Convolutional Neural
Networks [82.54695985117783]
We investigate the suitability of state-of-the-art deep learning architectures for continuous emotion recognition using long video sequences captured in-the-wild.
We have developed and evaluated convolutional recurrent neural networks combining 2D-CNNs and long short-term memory units, as well as inflated 3D-CNN models, which are built by inflating the weights of a pre-trained 2D-CNN model during fine-tuning (a minimal sketch of this weight-inflation step appears after this list).
arXiv Detail & Related papers (2020-11-18T13:42:05Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the user's emotional state with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
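As a rough illustration of the 2D-to-3D weight inflation mentioned in the continuous emotion recognition entry above (in the spirit of I3D-style inflation), the sketch below repeats a pretrained Conv2D kernel along a new temporal axis and rescales it; the helper name, padding and kernel sizes are illustrative assumptions rather than that paper's exact procedure.

```python
# Hedged sketch of inflating pretrained 2D convolution weights into a 3D
# convolution for video models; details are assumptions, not the paper's.
import numpy as np
from tensorflow.keras import layers

def inflate_conv2d(conv2d: layers.Conv2D, time_dim: int = 3) -> layers.Conv3D:
    """Given a *built* (pretrained) Conv2D, create a Conv3D whose kernel
    repeats the 2D kernel along the temporal axis, divided by time_dim so
    activations keep roughly the same scale."""
    kernel2d = conv2d.get_weights()[0]                        # (kH, kW, in, out)
    kernel3d = np.repeat(kernel2d[np.newaxis], time_dim, axis=0) / time_dim
    conv3d = layers.Conv3D(conv2d.filters,
                           kernel_size=(time_dim,) + conv2d.kernel_size,
                           padding="same", use_bias=conv2d.use_bias)
    conv3d.build((None, None, None, None, kernel2d.shape[2]))  # 5D video input
    new_weights = [kernel3d]
    if conv2d.use_bias:
        new_weights.append(conv2d.get_weights()[1])            # copy the bias
    conv3d.set_weights(new_weights)
    return conv3d
```

Repeating the kernel and dividing by the temporal extent means a 3D model initialised this way reproduces the 2D model's response on a video of identical frames, which gives fine-tuning a sensible starting point.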