VR-LENS: Super Learning-based Cybersickness Detection and Explainable
AI-Guided Deployment in Virtual Reality
- URL: http://arxiv.org/abs/2302.01985v1
- Date: Fri, 3 Feb 2023 20:15:51 GMT
- Authors: Ripan Kumar Kundu, Osama Yahia Elsaid, Prasad Calyam, Khaza Anuarul
Hoque
- Abstract summary: This work presents an explainable artificial intelligence (XAI)-based framework VR-LENS for developing cybersickness detection ML models.
We first develop a novel super learning-based ensemble ML model for cybersickness detection.
Our proposed method identified eye tracking, player position, and galvanic skin/heart rate response as the most dominant features for the integrated sensor, gameplay, and bio-physiological datasets.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: A plethora of recent studies has proposed automated methods based on
machine learning (ML) and deep learning (DL) to detect cybersickness in virtual
reality (VR). However, these detection methods are computationally intensive and
operate as black boxes, making them neither trustworthy nor practical to deploy
on standalone VR head-mounted displays (HMDs). This
work presents an explainable artificial intelligence (XAI)-based framework
VR-LENS for developing cybersickness detection ML models, explaining them,
reducing their size, and deploying them in a Qualcomm Snapdragon 750G
processor-based Samsung A52 device. Specifically, we first develop a novel
super learning-based ensemble ML model for cybersickness detection. Next, we
employ post-hoc explanation methods, namely SHapley Additive exPlanations
(SHAP), Morris Sensitivity Analysis (MSA), Local Interpretable Model-Agnostic
Explanations (LIME), and Partial Dependence Plots (PDP), to explain the
predicted results and identify the most dominant features. The super learner
cybersickness model is then retrained using the identified dominant features.
Our proposed method identified eye tracking, player position, and galvanic
skin/heart rate response as the most dominant features for the integrated
sensor, gameplay, and bio-physiological datasets. We also show that the
proposed XAI-guided feature reduction reduces model training and inference
time by 1.91x and 2.15x, respectively, while maintaining baseline accuracy. For
instance, using the integrated sensor dataset, our reduced super learner model
outperforms state-of-the-art works, classifying cybersickness into four
classes (none, low, medium, and high) with an accuracy of 96% and regressing
the Fast Motion Sickness (FMS) score (1-10) with a Root Mean Square Error
(RMSE) of 0.03.
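The pipeline described in the abstract (train a super learner, rank features with a post-hoc explainer, retrain on the dominant features) can be sketched as follows. This is not the authors' code: the synthetic data, the choice of base learners, and the use of scikit-learn's permutation importance as a stand-in for the paper's SHAP/MSA/LIME/PDP analyses are all illustrative assumptions.

```python
# Minimal sketch of XAI-guided feature reduction for a stacked (super learner)
# ensemble. Dataset, base learners, and the explainer are illustrative only.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.inspection import permutation_importance
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Stand-in for the integrated sensor/gameplay/bio-physiological data
# (4 severity classes: none, low, medium, high).
X, y = make_classification(n_samples=600, n_features=20, n_informative=6,
                           n_classes=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# Step 1: super learner = base models combined by a meta-learner.
super_learner = StackingClassifier(
    estimators=[("rf", RandomForestClassifier(random_state=0)),
                ("dt", DecisionTreeClassifier(random_state=0))],
    final_estimator=LogisticRegression(max_iter=1000))
super_learner.fit(X_tr, y_tr)

# Step 2: rank features post hoc and keep the most dominant ones
# (the paper uses SHAP/MSA/LIME/PDP; permutation importance is a proxy here).
imp = permutation_importance(super_learner, X_te, y_te, n_repeats=5,
                             random_state=0)
top = imp.importances_mean.argsort()[::-1][:8]  # keep the top-8 features

# Step 3: retrain the smaller, faster super learner on the reduced features.
super_learner.fit(X_tr[:, top], y_tr)
print(f"reduced-model accuracy: {super_learner.score(X_te[:, top], y_te):.2f}")
```

Retraining on the reduced feature set is what shrinks the model's input pipeline and cuts training and inference time, which is the property the paper exploits for on-device deployment.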
Related papers
- In-Simulation Testing of Deep Learning Vision Models in Autonomous Robotic Manipulators [11.389756788049944]
Testing autonomous robotic manipulators is challenging due to the complex software interactions between vision and control components.
A crucial element of modern robotic manipulators is the deep learning based object detection model.
We propose the MARTENS framework, which integrates a photorealistic NVIDIA Isaac Sim simulator with evolutionary search to identify critical scenarios.
arXiv Detail & Related papers (2024-10-25T03:10:42Z)
- Explainable AI for Comparative Analysis of Intrusion Detection Models [20.683181384051395]
This research applies various machine learning models to the tasks of binary and multi-class classification for intrusion detection from network traffic.
We trained all models to an accuracy of 90% on the UNSW-NB15 dataset.
We also find that Random Forest provides the best performance in terms of accuracy, time efficiency, and robustness.
arXiv Detail & Related papers (2024-06-14T03:11:01Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design, the random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design achieves substantial energy-efficiency improvements and training-cost reduction compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- LiteVR: Interpretable and Lightweight Cybersickness Detection using Explainable AI [1.1470070927586016]
Cybersickness is a common ailment associated with virtual reality (VR) user experiences.
We present an explainable artificial intelligence (XAI)-based framework, LiteVR, for cybersickness detection.
arXiv Detail & Related papers (2023-02-05T21:51:12Z)
- TruVR: Trustworthy Cybersickness Detection using Explainable Machine Learning [1.9642496463491053]
Cybersickness can be characterized by nausea, vertigo, headache, eye strain, and other discomforts when using virtual reality (VR) systems.
The previously reported machine learning (ML) and deep learning (DL) algorithms for detecting (classification) and predicting (regression) VR cybersickness use black-box models.
We present three explainable machine learning (xML) models to detect and predict cybersickness.
arXiv Detail & Related papers (2022-09-12T13:55:13Z)
- Real-to-Sim: Predicting Residual Errors of Robotic Systems with Sparse Data using a Learning-based Unscented Kalman Filter [65.93205328894608]
We learn the residual errors between a dynamic and/or simulator model and the real robot.
We show that with the learned residual errors, we can further close the reality gap between dynamic models, simulations, and actual hardware.
arXiv Detail & Related papers (2022-09-07T15:15:12Z)
- Incremental Online Learning Algorithms Comparison for Gesture and Visual Smart Sensors [68.8204255655161]
This paper compares four state-of-the-art algorithms in two real applications: gesture recognition based on accelerometer data and image classification.
Our results confirm these systems' reliability and the feasibility of deploying them in tiny-memory MCUs.
arXiv Detail & Related papers (2022-09-01T17:05:20Z)
- Neurosymbolic hybrid approach to driver collision warning [64.02492460600905]
There are two main algorithmic approaches to autonomous driving systems.
Deep learning alone has achieved state-of-the-art results in many areas.
However, deep learning models can be very difficult to debug when they do not work.
arXiv Detail & Related papers (2022-03-28T20:29:50Z)
- Utilizing XAI technique to improve autoencoder based model for computer network anomaly detection with shapley additive explanation (SHAP) [0.0]
Machine learning (ML) and Deep Learning (DL) methods are being adopted rapidly, especially in computer network security.
The lack of transparency of ML- and DL-based models is a major obstacle to their adoption, and they are criticized for their black-box nature.
XAI is a promising area that can improve the trustworthiness of these models by giving explanations and interpreting its output.
arXiv Detail & Related papers (2021-12-14T09:42:04Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- One to Many: Adaptive Instrument Segmentation via Meta Learning and Dynamic Online Adaptation in Robotic Surgical Video [71.43912903508765]
MDAL is a dynamic online adaptive learning scheme for instrument segmentation in robot-assisted surgery.
It learns the general knowledge of instruments and the fast adaptation ability through the video-specific meta-learning paradigm.
It outperforms other state-of-the-art methods on two datasets.
arXiv Detail & Related papers (2021-03-24T05:02:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.