Facial Electromyography-based Adaptive Virtual Reality Gaming for
Cognitive Training
- URL: http://arxiv.org/abs/2005.05023v3
- Date: Sun, 30 Aug 2020 13:42:12 GMT
- Title: Facial Electromyography-based Adaptive Virtual Reality Gaming for
Cognitive Training
- Authors: Lorcan Reidy, Dennis Chan, Charles Nduka and Hatice Gunes
- Abstract summary: Two frequently cited problems in cognitive training literature are a lack of user engagement with the training programme, and a failure of developed skills to generalise to daily life.
This paper introduces a new cognitive training (CT) paradigm designed to address these two limitations.
It incorporates facial electromyography (EMG) as a means of determining user affect while engaged in the CT task.
This information is then utilised to dynamically adjust the game's difficulty in real-time as users play, with the aim of leading them into a state of flow.
- Score: 5.033176361795483
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Cognitive training has shown promising results for delivering improvements in
human cognition related to attention, problem solving, reading comprehension
and information retrieval. However, two frequently cited problems in cognitive
training literature are a lack of user engagement with the training programme,
and a failure of developed skills to generalise to daily life. This paper
introduces a new cognitive training (CT) paradigm designed to address these two
limitations by combining the benefits of gamification, virtual reality (VR),
and affective adaptation in the development of an engaging, ecologically valid,
CT task. Additionally, it incorporates facial electromyography (EMG) as a means
of determining user affect while engaged in the CT task. This information is
then utilised to dynamically adjust the game's difficulty in real-time as users
play, with the aim of leading them into a state of flow. Affect recognition
rates of 64.1% and 76.2%, for valence and arousal respectively, were achieved
by classifying a DWT-Haar approximation of the input signal using kNN. The
affect-aware VR cognitive training intervention was then evaluated with a
control group of older adults. The results obtained substantiate the notion
that adaptation techniques can lead to greater feelings of competence and a
more appropriate challenge of the user's skills.
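To make the pipeline described in the abstract concrete, the sketch below illustrates the general idea of classifying a DWT-Haar approximation of windowed facial EMG with kNN and feeding the predicted valence/arousal into a real-time difficulty update. The window length, decomposition level, number of neighbours, and the flow-seeking update rule are illustrative assumptions; the abstract does not specify these details, and this is not the authors' implementation.

```python
# Illustrative sketch (not the authors' implementation) of the pipeline the
# abstract describes: a Haar-wavelet (DWT) approximation of windowed facial
# EMG is classified with kNN into valence/arousal, and the prediction nudges
# the game difficulty towards a flow state. Window length, decomposition
# level, k, and the update rule are assumptions made for illustration.
import numpy as np
import pywt
from sklearn.neighbors import KNeighborsClassifier


def haar_approximation(emg_window: np.ndarray, level: int = 4) -> np.ndarray:
    """Return the level-`level` Haar DWT approximation coefficients."""
    coeffs = pywt.wavedec(emg_window, "haar", level=level)
    return coeffs[0]  # keep only the approximation sub-band


def make_features(windows: np.ndarray, level: int = 4) -> np.ndarray:
    """Stack Haar approximations of each EMG window into a feature matrix."""
    return np.vstack([haar_approximation(w, level) for w in windows])


# --- training (in the study, labels would come from affect annotations; ---
# --- random placeholder data is used here just to keep the sketch runnable) ---
rng = np.random.default_rng(0)
train_windows = rng.standard_normal((200, 256))   # 200 windows x 256 samples
valence_labels = rng.integers(0, 2, 200)          # 0 = low, 1 = high (placeholder)
arousal_labels = rng.integers(0, 2, 200)

X_train = make_features(train_windows)
valence_clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, valence_labels)
arousal_clf = KNeighborsClassifier(n_neighbors=5).fit(X_train, arousal_labels)


def adapt_difficulty(difficulty: float, valence: int, arousal: int) -> float:
    """Toy flow-seeking rule: ease off when the player seems frustrated
    (low valence, high arousal), ramp up when they seem bored (low arousal)."""
    if valence == 0 and arousal == 1:
        difficulty -= 0.1
    elif arousal == 0:
        difficulty += 0.1
    return float(np.clip(difficulty, 0.0, 1.0))


# --- real-time loop (one new EMG window per game tick) ---
difficulty = 0.5
new_window = rng.standard_normal((1, 256))
x = make_features(new_window)
v = int(valence_clf.predict(x)[0])
a = int(arousal_clf.predict(x)[0])
difficulty = adapt_difficulty(difficulty, v, a)
```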
Related papers
- The Potential and Value of AI Chatbot in Personalized Cognitive Training [10.337496606986566]
ReMe is a web-based framework designed to create AI chatbots that facilitate cognitive training research.
By leveraging large language models, ReMe provides more user-friendly, interactive, and personalized training experiences.
Case studies demonstrate ReMe's effectiveness in engaging users through life recall and open-ended language puzzles.
arXiv Detail & Related papers (2024-10-25T17:59:36Z)
- Auto Detecting Cognitive Events Using Machine Learning on Pupillary Data [0.0]
Pupil size is a valuable indicator of cognitive workload, reflecting changes in attention and arousal governed by the autonomic nervous system.
This study explores the potential of using machine learning to automatically detect cognitive events experienced by individuals.
arXiv Detail & Related papers (2024-10-18T04:54:46Z)
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z)
- Alleviating Catastrophic Forgetting in Facial Expression Recognition with Emotion-Centered Models [49.3179290313959]
The proposed method, emotion-centered generative replay (ECgr), tackles this challenge by integrating synthetic images from generative adversarial networks.
ECgr incorporates a quality assurance algorithm to ensure the fidelity of generated images.
The experimental results on four diverse facial expression datasets demonstrate that incorporating images generated by our pseudo-rehearsal method enhances training on the targeted dataset and the source dataset.
arXiv Detail & Related papers (2024-04-18T15:28:34Z)
- EEG-based Cognitive Load Classification using Feature Masked Autoencoding and Emotion Transfer Learning [13.404503606887715]
We present a new solution for the classification of cognitive load using electroencephalogram (EEG).
We pre-train our model using self-supervised masked autoencoding on emotion-related EEG datasets.
The results of our experiments show that our proposed approach achieves strong results and outperforms conventional single-stage fully supervised learning.
arXiv Detail & Related papers (2023-08-01T02:59:19Z)
- Evaluating the structure of cognitive tasks with transfer learning [67.22168759751541]
This study investigates the transferability of deep learning representations between different EEG decoding tasks.
We conduct extensive experiments using state-of-the-art decoding models on two recently released EEG datasets.
arXiv Detail & Related papers (2023-07-28T14:51:09Z)
- Modeling cognitive load as a self-supervised brain rate with electroencephalography and deep learning [2.741266294612776]
This research presents a novel self-supervised method for mental workload modelling from EEG data.
The method is a convolutional recurrent neural network trainable with spatially preserving spectral topographic head-maps from EEG data to fit the brain rate variable.
Findings point to the existence of quasi-stable blocks of learnt high-level representations of cognitive activation: they can be induced through convolution, do not appear to depend on each other over time, and intuitively match the non-stationary nature of brain responses.
arXiv Detail & Related papers (2022-09-21T07:44:21Z)
- CogAlign: Learning to Align Textual Neural Representations to Cognitive Language Processing Signals [60.921888445317705]
We propose a CogAlign approach to integrate cognitive language processing signals into natural language processing models.
We show that CogAlign achieves significant improvements with multiple cognitive features over state-of-the-art models on public datasets.
arXiv Detail & Related papers (2021-06-10T07:10:25Z)
- Continuous Emotion Recognition via Deep Convolutional Autoencoder and Support Vector Regressor [70.2226417364135]
It is crucial that the machine be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate this attack by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.