Virtual-Reality based Vestibular Ocular Motor Screening for Concussion
Detection using Machine-Learning
- URL: http://arxiv.org/abs/2210.09295v1
- Date: Thu, 13 Oct 2022 02:09:21 GMT
- Authors: Khondker Fariha Hossain, Sharif Amit Kamran, Prithul Sarker, Philip
Pavilionis, Isayas Adhanom, Nicholas Murray, Alireza Tavakkoli
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Detection of sport-related concussion (SRC) depends on sensory
information from the visual, vestibular, and somatosensory systems. At the same
time, the current clinical administration of Vestibular/Ocular Motor Screening
(VOMS) is subjective and varies among administrators. Therefore, for the
assessment and management of concussion, standardization is required to lower
the risk of injury and improve agreement among clinicians. With the advancement
of technology, virtual reality (VR) can be utilized to standardize the VOMS,
increasing the accuracy of test administration and decreasing overall
false-positive rates. In this paper, we experimented with multiple machine
learning methods to detect SRC from VR-generated VOMS data. In our observation,
the data generated in VR for the smooth pursuit (SP) and Visual Motion
Sensitivity (VMS) tests are highly reliable for concussion detection.
Furthermore, we train and evaluate these models, both qualitatively and
quantitatively. Our findings show these models can reach high true-positive
rates of around 99.9 percent for symptom provocation on the VR stimuli-based
VOMS vs. the current clinical manual VOMS.
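The paper does not publish its feature pipeline, but the idea of classifying concussion from VR smooth-pursuit gaze traces can be sketched with a minimal, hypothetical example: compute pursuit gain (eye velocity relative to target velocity, which tends toward 1.0 in healthy tracking and drops when pursuit is impaired) and threshold it. The sampling rate, gain threshold, and synthetic traces below are illustrative assumptions, not values from the paper.

```python
import math

def pursuit_gain(eye_angles, target_angles, dt):
    """Mean ratio of eye velocity to target velocity over one trial.
    Healthy smooth pursuit tracks the target closely (gain near 1.0);
    impaired pursuit typically shows reduced gain."""
    eye_v = [(eye_angles[i + 1] - eye_angles[i]) / dt
             for i in range(len(eye_angles) - 1)]
    tgt_v = [(target_angles[i + 1] - target_angles[i]) / dt
             for i in range(len(target_angles) - 1)]
    ratios = [e / t for e, t in zip(eye_v, tgt_v) if abs(t) > 1e-6]
    return sum(ratios) / len(ratios)

def classify(gain, threshold=0.85):
    """Flag a trial as symptomatic when pursuit gain falls below a
    (hypothetical) threshold; a trained model would replace this rule."""
    return gain < threshold

# Synthetic sinusoidal pursuit target and two simulated eye traces.
dt = 1 / 90.0                        # assumed 90 Hz headset sampling
t = [i * dt for i in range(900)]     # 10-second trial
target = [10 * math.sin(2 * math.pi * 0.4 * s) for s in t]  # degrees
healthy = [0.98 * x for x in target]    # near-unity gain
impaired = [0.60 * x for x in target]   # reduced gain

print(classify(pursuit_gain(healthy, target, dt)))    # False
print(classify(pursuit_gain(impaired, target, dt)))   # True
```

In practice, the per-trial gain would be one of several features (alongside latency, saccade counts, and VMS responses) fed to the machine-learning models the paper evaluates.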
Related papers
- Dr-LLaVA: Visual Instruction Tuning with Symbolic Clinical Grounding [53.629132242389716]
Vision-Language Models (VLM) can support clinicians by analyzing medical images and engaging in natural language interactions.
VLMs often exhibit "hallucinogenic" behavior, generating textual outputs not grounded in contextual multimodal information.
We propose a new alignment algorithm that uses symbolic representations of clinical reasoning to ground VLMs in medical knowledge.
arXiv Detail & Related papers (2024-05-29T23:19:28Z)
- A self-attention model for robust rigid slice-to-volume registration of functional MRI [4.615338063719135]
Head motion during fMRI scans can result in distortion, biased analyses, and increased costs.
We introduce an end-to-end SVR model for aligning 2D fMRI slices with a 3D reference volume.
Our model achieves competitive performance in terms of alignment accuracy compared to state-of-the-art deep learning-based methods.
arXiv Detail & Related papers (2024-04-06T08:02:18Z)
- Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating multiple sensor data for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z)
- Deep Motion Masking for Secure, Usable, and Scalable Real-Time Anonymization of Virtual Reality Motion Data [49.68609500290361]
Recent studies have demonstrated that the motion tracking "telemetry" data used by nearly all VR applications is as uniquely identifiable as a fingerprint scan.
We present in this paper a state-of-the-art VR identification model that can convincingly bypass known defensive countermeasures.
arXiv Detail & Related papers (2023-11-09T01:34:22Z)
- Multisensory extended reality applications offer benefits for volumetric biomedical image analysis in research and medicine [2.46537907738351]
3D data from high-resolution volumetric imaging is a central resource for diagnosis and treatment in modern medicine.
Recent research used extended reality (XR) for perceiving 3D images with visual depth perception and touch but used restrictive haptic devices.
In this study, 24 experts for biomedical images in research and medicine explored 3D medical shapes with 3 applications.
arXiv Detail & Related papers (2023-11-07T13:37:47Z)
- Analysis of Smooth Pursuit Assessment in Virtual Reality and Concussion Detection using BiLSTM [0.0]
Sport-related concussion (SRC) battery relies heavily on subjective symptom reporting.
We propose a novel approach to detect SRC using long short-term memory (LSTM) recurrent neural network (RNN) architectures from oculomotor data.
arXiv Detail & Related papers (2022-10-12T16:52:31Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision our findings to push forward research towards more realistic physicality in future VR/AR.
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Learning Effect of Lay People in Gesture-Based Locomotion in Virtual Reality [81.5101473684021]
Some of the most promising methods are gesture-based and do not require additional handheld hardware.
Recent work focused mostly on user preference and performance of the different locomotion techniques.
This work investigates whether and how quickly users can adapt to a hand gesture-based locomotion system in VR.
arXiv Detail & Related papers (2022-06-16T10:44:16Z)
- Automatic Recommendation of Strategies for Minimizing Discomfort in Virtual Environments [58.720142291102135]
In this work, we first present a detailed review of possible causes of Cybersickness (CS).
Our system is able to suggest if the user may be entering in the next moments of the application into an illness situation.
The CSPQ (Cybersickness Profile Questionnaire) is also proposed, which is used to identify the player's susceptibility to CS.
arXiv Detail & Related papers (2020-06-27T19:28:48Z)
- Scoring and Assessment in Medical VR Training Simulators with Dynamic Time Series Classification [8.503001932363704]
This research proposes and evaluates scoring and assessment methods for Virtual Reality (VR) training simulators.
VR simulators capture detailed n-dimensional human motion data which is useful for performance analysis.
arXiv Detail & Related papers (2020-06-11T15:46:25Z) - Continuous Emotion Recognition via Deep Convolutional Autoencoder and
Support Vector Regressor [70.2226417364135]
It is crucial that the machine should be able to recognize the emotional state of the user with high accuracy.
Deep neural networks have been used with great success in recognizing emotions.
We present a new model for continuous emotion recognition based on facial expression recognition.
arXiv Detail & Related papers (2020-01-31T17:47:16Z)
This list is automatically generated from the titles and abstracts of the papers in this site.