A Brain-Computer Interface Augmented Reality Framework with
Auto-Adaptive SSVEP Recognition
- URL: http://arxiv.org/abs/2308.06401v1
- Date: Fri, 11 Aug 2023 21:56:00 GMT
- Title: A Brain-Computer Interface Augmented Reality Framework with
Auto-Adaptive SSVEP Recognition
- Authors: Yasmine Mustafa, Mohamed Elmahallawy, Tie Luo, Seif Eldawlatly
- Abstract summary: We propose a simple adaptive ensemble classification system that handles the inter-subject variability.
We also present a simple BCI-AR framework that supports the development of a wide range of SSVEP-based BCI-AR applications.
- Score: 1.1674893622721483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Brain-Computer Interface (BCI) initially gained attention for developing
applications that aid physically impaired individuals. Recently, the idea of
integrating BCI with Augmented Reality (AR) emerged, which uses BCI not only to
enhance the quality of life for individuals with disabilities but also to
develop mainstream applications for healthy users. One commonly used BCI signal
pattern is the Steady-state Visually-evoked Potential (SSVEP), which captures
the brain's response to flickering visual stimuli. SSVEP-based BCI-AR
applications enable users to express their needs/wants by simply looking at
corresponding command options. However, brain signals differ across
individuals, which necessitates per-subject SSVEP recognition. Moreover, muscle
movements and eye blinks interfere with brain signals, so subjects are
required to remain still during BCI experiments, which limits AR engagement. In
this paper, we (1) propose a simple adaptive ensemble classification system
that handles the inter-subject variability, (2) present a simple BCI-AR
framework that supports the development of a wide range of SSVEP-based BCI-AR
applications, and (3) evaluate the performance of our ensemble algorithm in an
SSVEP-based BCI-AR application involving head rotations, demonstrating
robustness to movement interference. Our testing on multiple subjects
achieved a mean accuracy of 80% on a PC and 77% using the HoloLens AR
headset, both of which surpass previous studies that incorporate individual
classifiers and head movements. In addition, our visual stimulation time is a
relatively short 5 seconds. The statistically significant results show
that our ensemble classification approach outperforms individual classifiers in
SSVEP-based BCIs.
Related papers
- Modeling User Preferences via Brain-Computer Interfacing [54.3727087164445]
We use Brain-Computer Interfacing technology to infer users' preferences, their attentional correlates towards visual content, and their associations with affective experience.
We link these to relevant applications, such as information retrieval, personalized steering of generative models, and crowdsourcing population estimates of affective experiences.
arXiv Detail & Related papers (2024-05-15T20:41:46Z) - Egocentric RGB+Depth Action Recognition in Industry-Like Settings [50.38638300332429]
Our work focuses on recognizing actions from egocentric RGB and Depth modalities in an industry-like environment.
Our framework is based on the 3D Video SWIN Transformer to encode both RGB and Depth modalities effectively.
Our method also secured first place at the multimodal action recognition challenge at ICIAP 2023.
arXiv Detail & Related papers (2023-09-25T08:56:22Z) - A Human-Machine Joint Learning Framework to Boost Endogenous BCI
Training [20.2015819836196]
Endogenous brain-computer interfaces (BCIs) provide a direct pathway from the brain to external devices.
However, mastering spontaneous BCI control requires users to generate discriminative and stable brain signal patterns through imagery.
Here, we propose a human-machine joint learning framework to boost the learning process in endogenous BCIs.
arXiv Detail & Related papers (2023-08-25T01:24:18Z) - Bayesian Inference on Brain-Computer Interfaces via GLASS [4.04514704204904]
Low signal-to-noise ratio (SNR) and complex spatial/temporal correlations of EEG signals present challenges in modeling and computation.
We introduce a novel Gaussian Latent channel model with Sparse time-varying effects (GLASS) under a fully Bayesian framework.
We demonstrate that GLASS substantially improves BCI performance in participants with amyotrophic lateral sclerosis (ALS).
For broader accessibility, we develop an efficient gradient-based variational inference (GBVI) algorithm for posterior computation.
arXiv Detail & Related papers (2023-04-14T21:29:00Z) - Learning Common Rationale to Improve Self-Supervised Representation for
Fine-Grained Visual Recognition Problems [61.11799513362704]
We propose learning an additional screening mechanism to identify discriminative clues commonly seen across instances and classes.
We show that a common rationale detector can be learned by simply exploiting the GradCAM induced from the SSL objective.
arXiv Detail & Related papers (2023-03-03T02:07:40Z) - Adaptive Local-Component-aware Graph Convolutional Network for One-shot
Skeleton-based Action Recognition [54.23513799338309]
We present an Adaptive Local-Component-aware Graph Convolutional Network for skeleton-based action recognition.
Our method provides a stronger representation than the global embedding and helps our model reach state-of-the-art.
arXiv Detail & Related papers (2022-09-21T02:33:07Z) - Motor-Imagery-Based Brain Computer Interface using Signal Derivation and
Aggregation Functions [23.995027642929756]
We propose a BCI Framework, named Enhanced Fusion Framework, to improve the existing MI-based BCI frameworks.
Firstly, we include an additional pre-processing step: a differentiation of the EEG signal that makes it time-invariant.
Secondly, we add an additional frequency band as a feature for the system and show its effect on the system's performance.
We have tested this new system on a dataset of 20 volunteers performing motor imagery-based brain-computer interface experiments.
arXiv Detail & Related papers (2021-01-18T10:14:01Z) - Toward Real-World BCI: CCSPNet, A Compact Subject-Independent Motor
Imagery Framework [2.0741711594051377]
A conventional brain-computer interface (BCI) requires a complete data gathering, training, and calibration phase for each user before it can be used.
We propose a novel subject-independent BCI framework named CCSPNet that is trained on the motor imagery (MI) paradigm of a large-scale EEG signals database.
The proposed framework applies a wavelet kernel convolutional neural network (WKCNN) and a temporal convolutional neural network (TCNN) in order to represent and extract the diverse spectral features of EEG signals.
arXiv Detail & Related papers (2020-12-25T12:00:47Z) - Performance of Dual-Augmented Lagrangian Method and Common Spatial
Patterns applied in classification of Motor-Imagery BCI [68.8204255655161]
Motor-imagery based brain-computer interfaces (MI-BCI) have the potential to become ground-breaking technologies for neurorehabilitation.
Due to the noisy nature of the used EEG signal, reliable BCI systems require specialized procedures for features optimization and extraction.
arXiv Detail & Related papers (2020-10-13T20:50:13Z) - Symbiotic Adversarial Learning for Attribute-based Person Search [86.7506832053208]
We present a symbiotic adversarial learning framework, called SAL. Two GANs sit at the base of the framework in a symbiotic learning scheme.
Specifically, two different types of generative adversarial networks learn collaboratively throughout the training process.
arXiv Detail & Related papers (2020-07-19T07:24:45Z) - Few-Shot Relation Learning with Attention for EEG-based Motor Imagery
Classification [11.873435088539459]
Brain-Computer Interfaces (BCI) based on Electroencephalography (EEG) signals have received a lot of attention.
Motor imagery (MI) data can be used to aid rehabilitation as well as in autonomous driving scenarios.
Classification of MI signals is vital for EEG-based BCI systems.
arXiv Detail & Related papers (2020-03-03T02:34:44Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.