Evaluating the long-term viability of eye-tracking for continuous authentication in virtual reality
- URL: http://arxiv.org/abs/2502.20359v1
- Date: Thu, 27 Feb 2025 18:32:13 GMT
- Title: Evaluating the long-term viability of eye-tracking for continuous authentication in virtual reality
- Authors: Sai Ganesh Grandhi, Saeed Samet
- Abstract summary: This study investigates the long-term feasibility of eye-tracking as a behavioral biometric for continuous authentication in virtual reality (VR) environments. Our approach evaluates three architectures, Transformer, DenseNet, and XGBoost, on short- and long-term data to determine their efficacy in user identification tasks. Initial results indicate that both Transformer and DenseNet models achieve high accuracy rates of up to 97% in short-term settings. When tested on data collected 26 months later, model accuracy declined significantly, with rates as low as 1.78% for some tasks.
- Score: 0.8192907805418583
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Traditional authentication methods, such as passwords and biometrics, verify a user's identity only at the start of a session, leaving systems vulnerable to session hijacking. Continuous authentication, however, ensures ongoing verification by monitoring user behavior. This study investigates the long-term feasibility of eye-tracking as a behavioral biometric for continuous authentication in virtual reality (VR) environments, using data from the GazebaseVR dataset. Our approach evaluates three architectures, Transformer Encoder, DenseNet, and XGBoost, on short- and long-term data to determine their efficacy in user identification tasks. Initial results indicate that both Transformer Encoder and DenseNet models achieve high accuracy rates of up to 97% in short-term settings, effectively capturing unique gaze patterns. However, when tested on data collected 26 months later, model accuracy declined significantly, with rates as low as 1.78% for some tasks. To address this, we propose periodic model updates incorporating recent data, restoring accuracy to over 95%. These findings highlight the adaptability required for gaze-based continuous authentication systems and underscore the need for model retraining to manage evolving user behavior. Our study provides insights into the efficacy and limitations of eye-tracking as a biometric for VR authentication, paving the way for adaptive, secure VR user experiences.
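The abstract's core finding, that identification accuracy collapses on data collected long after enrollment and recovers once the model is updated with recent data, can be illustrated with a minimal sketch. This is not the paper's pipeline (which uses Transformer Encoder, DenseNet, and XGBoost on GazebaseVR); it is a toy nearest-template identifier over synthetic "gaze feature" vectors, with template aging simulated as a shift in each user's feature distribution:

```python
import numpy as np

# Hypothetical sketch (not the paper's code): nearest-template identification
# over synthetic gaze-like features, showing accuracy collapse under behavioral
# drift and recovery after re-enrollment with recent data.
rng = np.random.default_rng(0)
N_USERS, N_FEAT, N_SAMPLES = 15, 4, 50

def sample_session(means, noise=0.25):
    """Draw N_SAMPLES noisy feature vectors per user for one recording session."""
    return means[:, None, :] + rng.normal(0.0, noise, size=(N_USERS, N_SAMPLES, N_FEAT))

def enroll(session):
    """Build one template per user: the mean of that user's session samples."""
    return session.mean(axis=1)

def identification_accuracy(templates, session):
    """Fraction of samples whose nearest template belongs to the true user."""
    hits = 0
    for u in range(N_USERS):
        d = np.linalg.norm(session[u][:, None, :] - templates[None, :, :], axis=2)
        hits += int((d.argmin(axis=1) == u).sum())
    return hits / (N_USERS * N_SAMPLES)

user_means = rng.normal(0.0, 1.0, size=(N_USERS, N_FEAT))  # enrollment-time behavior
templates = enroll(sample_session(user_means))

# Short-term: test data drawn from the same behavior as enrollment.
short_acc = identification_accuracy(templates, sample_session(user_means))

# Long-term: behavior has drifted, so the stale templates mislead the matcher.
drifted_means = user_means + rng.normal(0.0, 2.5, size=(N_USERS, N_FEAT))
long_acc = identification_accuracy(templates, sample_session(drifted_means))

# Periodic update: re-enroll from a recent session, then test again.
updated_templates = enroll(sample_session(drifted_means))
updated_acc = identification_accuracy(updated_templates, sample_session(drifted_means))

print(f"short-term: {short_acc:.2f}  long-term: {long_acc:.2f}  after update: {updated_acc:.2f}")
```

The drift magnitude here is an arbitrary assumption chosen to make the effect visible; the paper's 26-month accuracy drop reflects real behavioral change, not a synthetic shift.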
Related papers
- KDPrint: Passive Authentication using Keystroke Dynamics-to-Image Encoding via Standardization [7.251941112707364]
This paper proposes a passive authentication system that utilizes keystroke data, a byproduct of primary authentication methods, for background user authentication.
We introduce a novel image encoding technique to capture the temporal dynamics of keystroke data, overcoming the performance limitations of deep learning models.
Experimental results demonstrate that the proposed imaging approach surpasses existing methods in terms of information capacity.
arXiv Detail & Related papers (2024-05-02T08:18:37Z)
- Privacy-Preserving Gaze Data Streaming in Immersive Interactive Virtual Reality: Robustness and User Experience [11.130411904676095]
Eye tracking data, if exposed, can be used for re-identification attacks.
We develop a methodology to evaluate real-time privacy mechanisms for interactive VR applications.
arXiv Detail & Related papers (2024-02-12T14:53:12Z)
- Using Motion Forecasting for Behavior-Based Virtual Reality (VR) Authentication [8.552737863305213]
We present the first approach that predicts future user behavior with Transformer-based forecasting and uses the forecasted trajectory to perform user authentication.
Our approach reduces the authentication equal error rate (EER) by an average of 23.85% and a maximum reduction of 36.14%.
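The equal error rate (EER) cited above is the standard operating point for verification systems: the threshold at which the false-accept rate equals the false-reject rate. A minimal sketch of how EER is computed from match scores, using synthetic genuine and impostor score distributions rather than any dataset from these papers:

```python
import numpy as np

# Hypothetical score distributions: genuine pairs score higher than impostors.
rng = np.random.default_rng(1)
genuine = rng.normal(2.0, 1.0, 1000)
impostor = rng.normal(0.0, 1.0, 1000)

def equal_error_rate(genuine, impostor):
    """Sweep candidate thresholds; the EER is where the false-accept rate
    (impostor scores above threshold) meets the false-reject rate
    (genuine scores below threshold)."""
    thresholds = np.sort(np.concatenate([genuine, impostor]))
    far = np.array([(impostor >= t).mean() for t in thresholds])
    frr = np.array([(genuine < t).mean() for t in thresholds])
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0

eer = equal_error_rate(genuine, impostor)
print(f"EER: {eer:.3f}")
```

For these two unit-variance Gaussians one standard deviation apart from the midpoint, the EER sits near 0.16; a lower EER means the score distributions overlap less.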
arXiv Detail & Related papers (2024-01-30T00:43:41Z)
- Cluster-level pseudo-labelling for source-free cross-domain facial expression recognition [94.56304526014875]
We propose the first Source-Free Unsupervised Domain Adaptation (SFUDA) method for Facial Expression Recognition (FER).
Our method exploits self-supervised pretraining to learn good feature representations from the target data.
We validate the effectiveness of our method in four adaptation setups, proving that it consistently outperforms existing SFUDA methods when applied to FER.
arXiv Detail & Related papers (2022-10-11T08:24:50Z)
- Evaluation of a User Authentication Schema Using Behavioral Biometrics and Machine Learning [0.0]
This study contributes to the research being done on behavioral biometrics by creating and evaluating a user authentication scheme using behavioral biometrics.
The behavioral biometrics used in this study include touch dynamics and phone movement.
We evaluate the performance of different single-modal and multi-modal combinations of the two biometrics.
arXiv Detail & Related papers (2022-05-07T05:16:34Z)
- RealGait: Gait Recognition for Person Re-Identification [79.67088297584762]
We construct a new gait dataset by extracting silhouettes from an existing video person re-identification challenge, which consists of 1,404 persons walking in an unconstrained manner.
Our results suggest that recognizing people by their gait in real surveillance scenarios is feasible, and the underlying gait pattern is probably the true reason why video person re-identification works in practice.
arXiv Detail & Related papers (2022-01-13T06:30:56Z)
- Lifelong Unsupervised Domain Adaptive Person Re-identification with Coordinated Anti-forgetting and Adaptation [127.6168183074427]
We propose a new task, Lifelong Unsupervised Domain Adaptive (LUDA) person ReID.
This is challenging because it requires the model to continuously adapt to unlabeled data of the target environments.
We design an effective scheme for this task, dubbed CLUDA-ReID, where the anti-forgetting is harmoniously coordinated with the adaptation.
arXiv Detail & Related papers (2021-12-13T13:19:45Z)
- Data-driven behavioural biometrics for continuous and adaptive user verification using Smartphone and Smartwatch [0.0]
We propose an algorithm to blend behavioural biometrics with multi-factor authentication (MFA).
The proposed two-step user verification algorithm verifies the user's identity using motion-based biometrics.
arXiv Detail & Related papers (2021-10-07T02:46:21Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open-source for future work to compare against.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that the fusion of information from visual stimuli as well as eye images can lead towards achieving performance similar to literature-reported figures.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)
- Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
To reduce the cost of training on this enlarged dataset, we propose to apply a dataset distillation strategy that compresses it into several informative class-wise images.
arXiv Detail & Related papers (2020-05-18T09:36:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.