Real or Virtual? Using Brain Activity Patterns to differentiate Attended Targets during Augmented Reality Scenarios
- URL: http://arxiv.org/abs/2101.05272v1
- Date: Tue, 12 Jan 2021 19:08:39 GMT
- Title: Real or Virtual? Using Brain Activity Patterns to differentiate Attended Targets during Augmented Reality Scenarios
- Authors: Lisa-Marie Vortmann, Leonid Schwenke, Felix Putze
- Abstract summary: We use machine learning techniques to classify electroencephalographic (EEG) data collected in Augmented Reality scenarios.
A shallow convolutional neural net classified 3-second data windows from 20 participants in a person-dependent manner.
- Score: 10.739605873338592
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Augmented Reality is the fusion of virtual components and our real
surroundings. The simultaneous visibility of generated and natural objects
often requires users to direct their selective attention to a specific target
that is either real or virtual. In this study, we investigated whether this
target is real or virtual by using machine learning techniques to classify
electroencephalographic (EEG) data collected in Augmented Reality scenarios. A
shallow convolutional neural net classified 3-second data windows from 20
participants in a person-dependent manner with an average accuracy above 70%
if the testing data and training data came from different trials.
Person-independent classification was possible above chance level for 6 out of
20 participants. Thus, the reliability of such a Brain-Computer Interface is
high enough for it to be treated as a useful input mechanism for Augmented
Reality applications.
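As a concrete illustration of the reported setup, below is a minimal sketch of a shallow convolutional net classifying 3-second EEG windows as attending a real versus a virtual target. The electrode count (32), sampling rate (250 Hz, so 750 samples per window), and layer sizes are assumptions; the abstract does not specify the exact architecture.

    import torch
    import torch.nn as nn

    # Shallow ConvNet sketch for binary EEG classification (real vs. virtual
    # attended target). Channel count, sampling rate, and layer sizes are
    # assumptions; the paper's exact architecture may differ.
    class ShallowEEGNet(nn.Module):
        def __init__(self, n_channels=32, n_samples=750, n_classes=2):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv2d(1, 40, kernel_size=(1, 25)),           # temporal convolution
                nn.Conv2d(40, 40, kernel_size=(n_channels, 1)),  # spatial filter across electrodes
                nn.BatchNorm2d(40),
                nn.ELU(),
                nn.AvgPool2d(kernel_size=(1, 75), stride=(1, 15)),
                nn.Dropout(0.5),
            )
            with torch.no_grad():  # infer the flattened feature size with a dummy pass
                n_feats = self.features(torch.zeros(1, 1, n_channels, n_samples)).numel()
            self.classifier = nn.Linear(n_feats, n_classes)

        def forward(self, x):  # x: (batch, 1, channels, samples)
            return self.classifier(self.features(x).flatten(1))

    model = ShallowEEGNet()
    logits = model(torch.randn(8, 1, 32, 750))  # 8 synthetic 3-second windows -> (8, 2)

Training per participant on windows from some trials and testing on windows from held-out trials would mirror the person-dependent, cross-trial evaluation described above.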
Related papers
- Behavioural gap assessment of human-vehicle interaction in real and virtual reality-based scenarios in autonomous driving [7.588679613436823]
We present a first and innovative approach to evaluating what we term the behavioural gap, a concept that captures the disparity in a participant's conduct when engaging in a VR experiment compared to an equivalent real-world situation.
In the experiment, the pedestrian attempts to cross the road in the presence of different driving styles and an external Human-Machine Interface (eHMI).
Results show that participants are more cautious and curious in VR, affecting their speed and decisions, and that VR interfaces significantly influence their actions.
arXiv Detail & Related papers (2024-07-04T17:20:17Z)
- Thelxinoë: Recognizing Human Emotions Using Pupillometry and Machine Learning [0.0]
This research contributes significantly to the Thelxinoë framework, aiming to enhance VR experiences by integrating data from multiple sensors for realistic and emotionally resonant touch interactions.
Our findings open new avenues for developing more immersive and interactive VR environments, paving the way for future advancements in virtual touch technology.
arXiv Detail & Related papers (2024-03-27T21:14:17Z)
- Systematic Adaptation of Communication-focused Machine Learning Models from Real to Virtual Environments for Human-Robot Collaboration [1.392250707100996]
This paper presents a systematic framework for real-to-virtual adaptation using a virtual dataset of limited size.
Hand gesture recognition, a topic of much research and subsequent commercialization in the real world, has been possible because of the creation of large, labelled datasets.
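A common pattern for this kind of real-to-virtual adaptation with little virtual data, sketched below under stated assumptions rather than as the paper's actual framework, is to reuse a backbone trained on abundant real-world data and retrain only a small head on the virtual samples. The class count and optimizer settings are invented for illustration.

    import torch
    import torch.nn as nn
    from torchvision import models

    n_gestures = 10  # assumed number of gesture classes

    # ImageNet weights stand in for a backbone trained on large real-world data.
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    for param in backbone.parameters():
        param.requires_grad = False  # keep the real-world features frozen
    backbone.fc = nn.Linear(backbone.fc.in_features, n_gestures)  # new trainable head

    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()

    def adaptation_step(virtual_images, labels):
        """One optimization step on a small batch of virtual-domain images."""
        optimizer.zero_grad()
        loss = loss_fn(backbone(virtual_images), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

    # Usage with stand-in data:
    images = torch.randn(4, 3, 224, 224)
    labels = torch.randint(0, n_gestures, (4,))
    adaptation_step(images, labels)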
arXiv Detail & Related papers (2023-07-21T03:24:55Z)
- A Virtual Reality Tool for Representing, Visualizing and Updating Deep Learning Models [1.9785872350085878]
We demonstrate a virtual reality tool for automating the process of assigning data inputs to different categories.
A dataset is represented as a cloud of points in virtual space.
The user explores the cloud through movement and uses hand gestures to categorise portions of the cloud.
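The core interaction such a tool needs, labelling the points near the user's hand, can be sketched as follows; the grab radius and data are invented for illustration.

    import numpy as np

    def label_near_hand(points, labels, hand_pos, category, radius=0.2):
        """Assign `category` to every point within `radius` of the hand position."""
        mask = np.linalg.norm(points - hand_pos, axis=1) < radius
        labels[mask] = category
        return labels

    points = np.random.rand(1000, 3)   # dataset embedded as a cloud in virtual space
    labels = np.full(1000, -1)         # -1 = not yet categorised
    labels = label_near_hand(points, labels, np.array([0.5, 0.5, 0.5]), category=2)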
arXiv Detail & Related papers (2023-05-24T17:06:59Z)
- ArK: Augmented Reality with Knowledge Interactive Emergent Ability [115.72679420999535]
We develop an infinite agent that learns to transfer knowledge memory from general foundation models to novel domains.
The heart of our approach is an emerging mechanism, dubbed Augmented Reality with Knowledge Inference Interaction (ArK).
We show that our ArK approach, combined with large foundation models, significantly improves the quality of generated 2D/3D scenes.
arXiv Detail & Related papers (2023-05-01T17:57:01Z)
- Unique Identification of 50,000+ Virtual Reality Users from Head & Hand Motion Data [58.27542320038834]
We show that a large number of real VR users can be uniquely and reliably identified across multiple sessions using just their head and hand motion.
After training a classification model on 5 minutes of data per person, a user can be uniquely identified amongst the entire pool of 50,000+ with 94.33% accuracy from 100 seconds of motion.
This work is the first to truly demonstrate the extent to which biomechanics may serve as a unique identifier in VR, on par with widely used biometrics such as facial or fingerprint recognition.
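A toy version of motion-based identification, not the paper's pipeline, might summarize each telemetry window with simple statistics and fit an off-the-shelf classifier. The feature set, model choice, and (tiny) scale below are all assumptions.

    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def window_features(window):
        """window: (timesteps, channels) of head/hand pose values."""
        return np.concatenate([window.mean(0), window.std(0),
                               np.abs(np.diff(window, axis=0)).mean(0)])

    rng = np.random.default_rng(0)
    # 21 channels assumed: 3 trackers (head + both hands) x 7 values (xyz + quaternion)
    n_users, n_windows, T, C = 20, 30, 90, 21
    X = np.stack([window_features(rng.normal(u * 0.05, 1.0, (T, C)))
                  for u in range(n_users) for _ in range(n_windows)])
    y = np.repeat(np.arange(n_users), n_windows)

    clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
    print(clf.score(X, y))  # training accuracy on the synthetic data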
arXiv Detail & Related papers (2023-02-17T15:05:18Z)
- Force-Aware Interface via Electromyography for Natural VR/AR Interaction [69.1332992637271]
We design a learning-based neural interface for natural and intuitive force inputs in VR/AR.
We show that our interface can decode finger-wise forces in real-time with 3.3% mean error, and generalize to new users with little calibration.
We envision that our findings will push research forward towards more realistic physicality in future VR/AR.
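As an illustration of the decoding task (not the paper's interface), a small network regressing per-finger forces from windows of multichannel surface EMG could look like this; the channel count, window length, and architecture are assumptions.

    import torch
    import torch.nn as nn

    # Illustrative regressor from surface-EMG windows to per-finger forces.
    # Channel count (8), window length (200 samples), and layers are assumed.
    class EMGForceDecoder(nn.Module):
        def __init__(self, n_channels=8, n_samples=200, n_fingers=5):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv1d(n_channels, 32, kernel_size=9, padding=4),
                nn.ReLU(),
                nn.AdaptiveAvgPool1d(1),  # pool over time
                nn.Flatten(),
                nn.Linear(32, n_fingers),
            )

        def forward(self, emg):   # emg: (batch, channels, samples)
            return self.net(emg)  # (batch, 5) predicted finger forces

    decoder = EMGForceDecoder()
    forces = decoder(torch.randn(4, 8, 200))  # synthetic EMG batch -> (4, 5)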
arXiv Detail & Related papers (2022-10-03T20:51:25Z)
- Towards Scale Consistent Monocular Visual Odometry by Learning from the Virtual World [83.36195426897768]
We propose VRVO, a novel framework for retrieving the absolute scale from virtual data.
We first train a scale-aware disparity network using both monocular real images and stereo virtual data.
The resulting scale-consistent disparities are then integrated with a direct VO system.
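The scale recovery rests on standard stereo geometry: with a known virtual-rig baseline and focal length, depth = focal_length * baseline / disparity. A minimal sketch, with made-up calibration values:

    import numpy as np

    # Standard stereo relation used to recover metric scale from disparity.
    # In a VRVO-style setup the baseline comes from the virtual stereo rig;
    # the numbers below are invented for illustration.
    def disparity_to_depth(disparity, focal_px, baseline_m):
        return focal_px * baseline_m / np.clip(disparity, 1e-6, None)

    disp = np.array([[10.0, 20.0], [40.0, 80.0]])  # predicted disparities (pixels)
    depth = disparity_to_depth(disp, focal_px=720.0, baseline_m=0.54)
    print(depth)  # metric depth map that a direct VO system can consume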
arXiv Detail & Related papers (2022-03-11T01:51:54Z)
- Towards Optimal Strategies for Training Self-Driving Perception Models in Simulation [98.51313127382937]
We focus on the use of labels in the synthetic domain alone.
Our approach introduces both a way to learn neural-invariant representations and a theoretically inspired view on how to sample the data from the simulator.
We showcase our approach on the bird's-eye-view vehicle segmentation task with multi-sensor data.
arXiv Detail & Related papers (2021-11-15T18:37:43Z)
- Attention-based Adversarial Appearance Learning of Augmented Pedestrians [49.25430012369125]
We propose a method to synthesize realistic data for the pedestrian recognition task.
Our approach utilizes an attention mechanism driven by an adversarial loss to learn domain discrepancies.
Our experiments confirm that the proposed adaptation method is robust to such discrepancies and reveals both visual realism and semantic consistency.
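One standard way to drive features toward domain invariance with an adversarial loss is gradient reversal (DANN-style). The sketch below shows that mechanism alone and omits the paper's attention component; all shapes are placeholders.

    import torch
    import torch.nn as nn

    # Gradient reversal: the domain head learns to tell real from synthetic
    # features, while the reversed gradient pushes the feature extractor
    # toward domain-invariant representations.
    class GradReverse(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x, lam):
            ctx.lam = lam
            return x.view_as(x)

        @staticmethod
        def backward(ctx, grad):
            return -ctx.lam * grad, None

    features = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
    domain_head = nn.Linear(64, 1)  # real vs. synthetic logit

    x = torch.randn(16, 128)                      # stand-in image features
    is_real = torch.randint(0, 2, (16, 1)).float()
    z = features(x)
    domain_logit = domain_head(GradReverse.apply(z, 1.0))
    adv_loss = nn.functional.binary_cross_entropy_with_logits(domain_logit, is_real)
    adv_loss.backward()  # feature gradients are reversed -> domain confusion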
arXiv Detail & Related papers (2021-07-06T15:27:00Z)
- Facial Expression Recognition Under Partial Occlusion from Virtual Reality Headsets based on Transfer Learning [0.0]
Convolutional neural network based approaches have become widely adopted due to their proven applicability to the Facial Expression Recognition (FER) task.
However, recognizing facial expression while wearing a head-mounted VR headset is a challenging task due to the upper half of the face being completely occluded.
We propose a geometric model to simulate occlusion resulting from a Samsung Gear VR headset that can be applied to existing FER datasets.
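A toy stand-in for such an occlusion model simply blacks out the region a headset would cover on an aligned face crop. The fixed rectangle below is an assumption, whereas the paper fits the geometry of a Samsung Gear VR.

    import numpy as np

    # Black out the (roughly) upper half of an aligned face image so an
    # existing FER dataset can be converted into an occluded variant.
    def occlude_headset(face, cover_ratio=0.45):
        occluded = face.copy()
        occluded[: int(face.shape[0] * cover_ratio)] = 0
        return occluded

    face = np.random.randint(0, 256, (224, 224, 3), dtype=np.uint8)  # stand-in image
    masked = occlude_headset(face)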
arXiv Detail & Related papers (2020-08-12T20:25:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.