Beyond Gaze Points: Augmenting Eye Movement with Brainwave Data for Multimodal User Authentication in Extended Reality
- URL: http://arxiv.org/abs/2404.18694v2
- Date: Tue, 21 Jan 2025 11:43:29 GMT
- Title: Beyond Gaze Points: Augmenting Eye Movement with Brainwave Data for Multimodal User Authentication in Extended Reality
- Authors: Matin Fallahi, Patricia Arias-Cabarcos, Thorsten Strufe
- Abstract summary: We introduce a multimodal biometric authentication system that combines eye movements and brainwave patterns.
Our prototype, developed and evaluated with 30 participants, achieves an Equal Error Rate (EER) of 0.29%.
This system enables seamless authentication through visual stimuli without complex interaction.
- Abstract: Extended Reality (XR) technologies are becoming integral to daily life. However, password-based authentication in XR disrupts immersion due to poor usability, as entering credentials with XR controllers is cumbersome and error-prone. This leads users to choose weaker passwords, compromising security. To improve both usability and security, we introduce a multimodal biometric authentication system that combines eye movements and brainwave patterns using consumer-grade sensors that can be integrated into XR devices. Our prototype, developed and evaluated with 30 participants, achieves an Equal Error Rate (EER) of 0.29%, outperforming eye movement (1.82%) and brainwave (4.92%) modalities alone, as well as state-of-the-art biometric alternatives (EERs between 2.5% and 7%). Furthermore, this system enables seamless authentication through visual stimuli without complex interaction.
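The abstract's headline numbers are Equal Error Rates (EER): the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR), so lower is better. As a quick illustration of how such a number is obtained (this is not the paper's evaluation code, and the score distributions below are synthetic), a minimal EER computation over genuine and impostor similarity scores might look like:

```python
import numpy as np

def equal_error_rate(genuine, impostor):
    """Approximate the EER: the threshold where the false acceptance
    rate (FAR) equals the false rejection rate (FRR).
    Convention: higher score = better match."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    best = (1.0, 0.0)
    for t in thresholds:
        far = np.mean(impostor >= t)   # impostors wrongly accepted
        frr = np.mean(genuine < t)     # genuine users wrongly rejected
        if abs(far - frr) < abs(best[0] - best[1]):
            best = (far, frr)
    return (best[0] + best[1]) / 2

# Toy scores: well-separated distributions give a low EER.
rng = np.random.default_rng(0)
genuine = rng.normal(2.0, 0.5, 1000)
impostor = rng.normal(0.0, 0.5, 1000)
print(round(equal_error_rate(genuine, impostor), 3))  # small, roughly 0.02 here
```

In these terms, the paper's fused system (EER 0.29%) rejects a genuine user or accepts an impostor in fewer than 3 of 1000 attempts at the equal-error operating point.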
Related papers
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z) - On the Usability of Next-Generation Authentication: A Study on Eye Movement and Brainwave-based Mechanisms [4.114205202954365]
Next-generation authentication mechanisms based on behavioral biometric factors such as eye movement and brainwave have emerged.
Our findings show good overall usability according to the System Usability Scale for both categories of mechanisms.
We identify three key areas for improvement: privacy, authentication interface design, and verification time.
arXiv Detail & Related papers (2024-02-23T15:34:43Z) - NeuroIDBench: An Open-Source Benchmark Framework for the Standardization of Methodology in Brainwave-based Authentication Research [4.9286860173040825]
Biometric systems based on brain activity have been proposed as an alternative to passwords or to complement current authentication techniques.
NeuroIDBench is a flexible open source tool to benchmark brainwave-based authentication models.
arXiv Detail & Related papers (2024-02-13T18:38:18Z) - IdentiFace : A VGG Based Multimodal Facial Biometric System [0.0]
"IdentiFace" is a multimodal facial biometric system that combines the core of facial recognition with some of the most important soft biometric traits such as gender, face shape, and emotion.
For the recognition problem, we acquired a 99.2% test accuracy for five classes with high intra-class variations using data collected from the FERET database.
We were also able to achieve a testing accuracy of 88.03% in the face-shape problem using the celebrity face-shape dataset.
arXiv Detail & Related papers (2024-01-02T14:36:28Z) - Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition [52.15994166413364]
We employ fusion of several comparators to improve periocular performance when images from different smartphones are compared.
We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios.
Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain.
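The fusion idea described here, combining several comparators' scores with linear logistic regression so that the fused output lives on a log-likelihood-ratio scale, can be sketched with plain numpy. Everything below (the two-comparator setup, score distributions, learning rate) is illustrative, not taken from the paper, and the logit equals the LLR only up to the prior log-odds:

```python
import numpy as np

def fit_fusion(scores, labels, lr=0.5, steps=3000):
    """Linear logistic regression over comparator score vectors.
    The learned logit w.s + b estimates (with balanced priors) the
    log-likelihood ratio of genuine vs. impostor."""
    X = np.column_stack([scores, np.ones(len(scores))])  # append bias column
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-X @ w))                 # sigmoid
        w -= lr * X.T @ (p - labels) / len(labels)       # gradient step
    return w

def fused_llr(w, score_vec):
    """Fused score on the log-likelihood-ratio scale."""
    return float(np.append(score_vec, 1.0) @ w)

# Toy scores from two hypothetical comparators; label 1 = genuine.
rng = np.random.default_rng(1)
gen = rng.normal([1.5, 1.0], 0.4, size=(200, 2))
imp = rng.normal([0.0, 0.0], 0.4, size=(200, 2))
scores = np.vstack([gen, imp])
labels = np.concatenate([np.ones(200), np.zeros(200)])
w = fit_fusion(scores, labels)
```

Because both same-sensor and cross-sensor scores are mapped to this common probabilistic scale, a single decision threshold (e.g. LLR > 0) works across devices.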
arXiv Detail & Related papers (2023-11-02T13:43:44Z) - Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms.
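The summary does not say which three features are extracted from the 500 ms window, so the choices below (mean level, variability, linear trend) are hypothetical stand-ins; only the windowing setup mirrors the description:

```python
import numpy as np

def window_features(signal, fs=100, window_s=0.5):
    """Summarize the most recent 500 ms of one capacitive channel with
    three simple features (hypothetical choices, not the paper's):
    mean level, variability (std), and linear trend (slope)."""
    n = int(fs * window_s)                       # samples in the window
    w = np.asarray(signal[-n:], dtype=float)
    t = np.arange(len(w)) / fs                   # time axis in seconds
    slope = np.polyfit(t, w, 1)[0]               # least-squares linear trend
    return np.array([w.mean(), w.std(), slope])

feats = window_features([1.0] * 100)             # flat signal: mean 1, no spread/trend
```

A gesture classifier would then consume one such three-element vector per finger channel.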
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z) - Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z) - Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms [58.156733807470395]
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign.
The campaign targeted the application of physical access control in a medium-size establishment with some 500 persons.
To the best of our knowledge, this is the first attempt to benchmark quality-based multimodal fusion algorithms.
arXiv Detail & Related papers (2021-11-17T13:39:48Z) - Opportunistic Implicit User Authentication for Health-Tracking IoT Wearables [1.8352113484137629]
We explore the usefulness of blood oxygen saturation (SpO2) values collected from an oximeter device to distinguish a user from others.
From a cohort of 25 subjects, we find that in 92% of cases SpO2 can distinguish pairs of users.
These results show promise in using SpO2 along with other biometrics to develop implicit continuous authentications for wearables.
arXiv Detail & Related papers (2021-09-28T13:18:36Z) - Context-Dependent Implicit Authentication for Wearable Device User [1.827510863075184]
We present a context-dependent soft-biometric-based wearable authentication system utilizing the heart rate, gait, and breathing audio signals.
From our detailed analysis, we find that a binary support vector machine (SVM) with radial basis function (RBF) kernel can achieve an average accuracy of 92.84%.
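The classifier named here is an SVM with an RBF kernel. Its decision rule is a sign over a weighted sum of kernel similarities to the support vectors; the sketch below shows that decision function with hand-picked support vectors and dual coefficients for illustration (a real system would learn them with an SVM solver, and gamma would be tuned):

```python
import numpy as np

def rbf_kernel(a, b, gamma=0.5):
    """K(a, b) = exp(-gamma * ||a - b||^2): similarity decaying with distance."""
    return np.exp(-gamma * np.sum((np.asarray(a) - np.asarray(b)) ** 2))

def svm_decision(x, support_vectors, dual_coefs, bias, gamma=0.5):
    """Trained-SVM decision function: sign of the bias plus the
    coefficient-weighted kernel similarities to each support vector."""
    s = sum(c * rbf_kernel(x, sv, gamma)
            for sv, c in zip(support_vectors, dual_coefs))
    return 1 if s + bias >= 0 else -1

# Hand-picked toy model: one negative and one positive support vector.
svs = [np.array([0.0, 0.0]), np.array([2.0, 2.0])]
coefs = [-1.0, 1.0]
print(svm_decision([1.9, 2.1], svs, coefs, bias=0.0))  # near the positive SV
```

The RBF kernel is what lets the SVM draw non-linear boundaries between users in the heart-rate/gait/audio feature space while the optimization stays linear in the dual coefficients.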
arXiv Detail & Related papers (2020-08-25T04:34:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.