Beyond Gaze Points: Augmenting Eye Movement with Brainwave Data for Multimodal User Authentication in Extended Reality
- URL: http://arxiv.org/abs/2404.18694v1
- Date: Mon, 29 Apr 2024 13:42:55 GMT
- Title: Beyond Gaze Points: Augmenting Eye Movement with Brainwave Data for Multimodal User Authentication in Extended Reality
- Authors: Matin Fallahi, Patricia Arias-Cabarcos, Thorsten Strufe
- Abstract summary: We introduce a multimodal biometric authentication system that combines eye movement and brainwave patterns.
Our system yields an excellent Equal Error Rate (EER) of 0.298%, which means an 83.6% reduction in EER compared to the single eye movement modality.
- Score: 4.114205202954365
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The increasing adoption of Extended Reality (XR) in various applications underscores the need for secure and user-friendly authentication methods. However, existing methods can disrupt the immersive experience in XR settings, or suffer from high false acceptance rates. In this paper, we introduce a multimodal biometric authentication system that combines eye movement and brainwave patterns, as captured by consumer-grade low-fidelity sensors. Our multimodal authentication exploits the non-invasive and hands-free properties of eye movement and brainwaves to provide a seamless XR user experience as well as enhanced security. Using synchronized eye and brainwave data collected from 30 participants through consumer-grade devices, we investigated whether twin neural networks can utilize these biometrics for identity verification. Our multimodal authentication system yields an excellent Equal Error Rate (EER) of 0.298%, which represents an 83.6% reduction in EER compared to the single eye movement modality and a 93.9% reduction compared to the single brainwave modality.
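The Equal Error Rate reported above is the operating point where the false acceptance rate (FAR) equals the false rejection rate (FRR). A minimal sketch of how EER can be computed from verification scores follows; the score values are synthetic illustrations, not data from the study.

```python
# Illustrative EER computation: sweep thresholds over similarity scores
# and report the rate where FAR and FRR are closest. Scores are synthetic.

def eer(genuine, impostor):
    """Return the rate at the threshold where FAR is closest to FRR."""
    thresholds = sorted(set(genuine) | set(impostor))
    best = None
    for t in thresholds:
        far = sum(s >= t for s in impostor) / len(impostor)  # false accepts
        frr = sum(s < t for s in genuine) / len(genuine)     # false rejects
        if best is None or abs(far - frr) < best[0]:
            best = (abs(far - frr), (far + frr) / 2)
    return best[1]

# Synthetic similarity scores: genuine pairs score high, impostor pairs low.
genuine = [0.91, 0.87, 0.95, 0.80, 0.89]
impostor = [0.12, 0.33, 0.25, 0.41, 0.18]
print(eer(genuine, impostor))  # perfectly separable scores -> 0.0
```

A lower EER means genuine and impostor score distributions overlap less; the paper's 0.298% indicates near-complete separation of the fused modalities.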
Related papers
- Bridging the Gap Between End-to-End and Two-Step Text Spotting [88.14552991115207]
Bridging Text Spotting is a novel approach that resolves the error accumulation and suboptimal performance issues in two-step methods.
We demonstrate the effectiveness of the proposed method through extensive experiments.
arXiv Detail & Related papers (2024-04-06T13:14:04Z) - On the Usability of Next-Generation Authentication: A Study on Eye Movement and Brainwave-based Mechanisms [4.114205202954365]
Next-generation authentication mechanisms based on behavioral biometric factors such as eye movement and brainwave have emerged.
Our findings show good overall usability according to the System Usability Scale for both categories of mechanisms.
We identify three key areas for improvement: privacy, authentication interface design, and verification time.
arXiv Detail & Related papers (2024-02-23T15:34:43Z) - NeuroIDBench: An Open-Source Benchmark Framework for the Standardization of Methodology in Brainwave-based Authentication Research [4.9286860173040825]
Biometric systems based on brain activity have been proposed as an alternative to passwords or to complement current authentication techniques.
NeuroIDBench is a flexible open source tool to benchmark brainwave-based authentication models.
arXiv Detail & Related papers (2024-02-13T18:38:18Z) - IdentiFace : A VGG Based Multimodal Facial Biometric System [0.0]
"IdentiFace" is a multimodal facial biometric system that combines the core of facial recognition with some of the most important soft biometric traits such as gender, face shape, and emotion.
For the recognition problem, we acquired a 99.2% test accuracy for five classes with high intra-class variations using data collected from the FERET database.
We were also able to achieve a testing accuracy of 88.03% in the face-shape problem using the celebrity face-shape dataset.
arXiv Detail & Related papers (2024-01-02T14:36:28Z) - Log-Likelihood Score Level Fusion for Improved Cross-Sensor Smartphone Periocular Recognition [52.15994166413364]
We employ fusion of several comparators to improve periocular performance when images from different smartphones are compared.
We use a probabilistic fusion framework based on linear logistic regression, in which fused scores tend to be log-likelihood ratios.
Our framework also provides an elegant and simple solution to handle signals from different devices, since same-sensor and cross-sensor score distributions are aligned and mapped to a common probabilistic domain.
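The general technique this entry describes, score-level fusion via linear logistic regression, can be sketched as a weighted sum of comparator scores plus a bias, trained so that fused scores approximate log-likelihood ratios. The weights below are illustrative assumptions, not the paper's calibrated values.

```python
# Hedged sketch of linear logistic-regression score fusion: the fused
# score is a weighted combination of per-comparator scores, interpreted
# as a log-likelihood ratio (LLR). Weights and scores are illustrative.
import math

def fuse(scores, weights, bias):
    # Fused score ~ LLR under the calibration model.
    return sum(w * s for w, s in zip(weights, scores)) + bias

def posterior(llr, prior=0.5):
    # Convert an LLR to a posterior probability of a genuine match.
    prior_log_odds = math.log(prior / (1 - prior))
    return 1 / (1 + math.exp(-(llr + prior_log_odds)))

llr = fuse([0.9, 0.7], weights=[2.0, 1.5], bias=-1.8)  # two comparators
print(round(posterior(llr), 3))  # -> 0.741
```

Mapping fused scores to LLRs is what lets same-sensor and cross-sensor score distributions be aligned in a common probabilistic domain, as the abstract notes.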
arXiv Detail & Related papers (2023-11-02T13:43:44Z) - Agile gesture recognition for capacitive sensing devices: adapting on-the-job [55.40855017016652]
We demonstrate a hand gesture recognition system that uses signals from capacitive sensors embedded into the etee hand controller.
The controller generates real-time signals from each of the wearer's five fingers.
We use a machine learning technique to analyse the time series signals and identify three features that can represent the five fingers within 500 ms.
arXiv Detail & Related papers (2023-05-12T17:24:02Z) - Mobile Behavioral Biometrics for Passive Authentication [65.94403066225384]
This work carries out a comparative analysis of unimodal and multimodal behavioral biometric traits.
Experiments are performed over HuMIdb, one of the largest and most comprehensive freely available mobile user interaction databases.
In our experiments, the most discriminative background sensor is the magnetometer, whereas among touch tasks the best results are achieved with keystroke.
arXiv Detail & Related papers (2022-03-14T17:05:59Z) - Bayesian Imitation Learning for End-to-End Mobile Manipulation [80.47771322489422]
Augmenting policies with additional sensor inputs, such as RGB + depth cameras, is a straightforward approach to improving robot perception capabilities.
We show that using the Variational Information Bottleneck to regularize convolutional neural networks improves generalization to held-out domains.
We demonstrate that our method is able to help close the sim-to-real gap and successfully fuse RGB and depth modalities.
arXiv Detail & Related papers (2022-02-15T17:38:30Z) - Benchmarking Quality-Dependent and Cost-Sensitive Score-Level Multimodal Biometric Fusion Algorithms [58.156733807470395]
This paper reports a benchmarking study carried out within the framework of the BioSecure DS2 (Access Control) evaluation campaign.
The campaign targeted the application of physical access control in a medium-size establishment with some 500 persons.
To the best of our knowledge, this is the first attempt to benchmark quality-based multimodal fusion algorithms.
arXiv Detail & Related papers (2021-11-17T13:39:48Z) - Opportunistic Implicit User Authentication for Health-Tracking IoT
Wearables [1.8352113484137629]
We explore the usefulness of blood oxygen saturation (SpO2) values collected from an oximeter device to distinguish a user from others.
From a cohort of 25 subjects, we find that SpO2 can distinguish pairs of users in 92% of cases.
These results show promise in using SpO2 along with other biometrics to develop implicit continuous authentication for wearables.
arXiv Detail & Related papers (2021-09-28T13:18:36Z) - Context-Dependent Implicit Authentication for Wearable Device User [1.827510863075184]
We present a context-dependent soft-biometric-based wearable authentication system utilizing the heart rate, gait, and breathing audio signals.
From our detailed analysis, we find that a binary support vector machine (SVM) with radial basis function (RBF) kernel can achieve an average accuracy of 92.84%.
arXiv Detail & Related papers (2020-08-25T04:34:19Z)
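The SVM with an RBF kernel reported in the last entry above relies on a similarity measure between feature vectors. A minimal sketch of the RBF kernel itself follows; the feature vectors and `gamma` value are illustrative assumptions, not parameters from the study.

```python
# Sketch of the RBF (Gaussian) kernel underlying an RBF-kernel SVM:
# K(x, y) = exp(-gamma * ||x - y||^2). Inputs here are made-up
# stand-ins for heart-rate/gait feature vectors.
import math

def rbf_kernel(x, y, gamma=0.5):
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-gamma * sq_dist)

# Similar feature vectors yield kernel values near 1;
# dissimilar ones decay toward 0.
print(round(rbf_kernel([1.0, 2.0], [1.0, 2.0]), 3))  # identical -> 1.0
print(round(rbf_kernel([1.0, 2.0], [4.0, 6.0]), 3))  # distant -> 0.0
```

Because the kernel decays smoothly with distance, an RBF-kernel SVM can separate users whose biometric feature distributions are not linearly separable, which is consistent with the high accuracy the paper reports.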
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.