Ocular Authentication: Fusion of Gaze and Periocular Modalities
- URL: http://arxiv.org/abs/2505.17343v2
- Date: Mon, 26 May 2025 06:34:31 GMT
- Title: Ocular Authentication: Fusion of Gaze and Periocular Modalities
- Authors: Dillon Lohr, Michael J. Proulx, Mehedi Hasan Raju, Oleg V. Komogortsev
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: This paper investigates the feasibility of fusing two eye-centric authentication modalities, eye movements and periocular images, within a calibration-free authentication system. While each modality has independently shown promise for user authentication, their combination within a unified gaze-estimation pipeline has not been thoroughly explored at scale. In this report, we propose a multimodal authentication system and evaluate it using a large-scale in-house dataset comprising 9202 subjects with an eye tracking (ET) signal quality equivalent to that of a consumer-facing virtual reality (VR) device. Our results show that the multimodal approach consistently outperforms both unimodal systems across all scenarios, surpassing the FIDO benchmark. The integration of a state-of-the-art machine learning architecture contributed significantly to the overall authentication performance at scale, driven by the model's ability to capture authentication representations and by the complementary discriminative characteristics of the fused modalities.
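As a rough illustration of what fusing two match scores can look like, here is a minimal score-level fusion sketch. The similarity function, fusion weight, and threshold are all placeholders for illustration, not the paper's actual pipeline or parameters.

```python
# Hypothetical sketch: score-level fusion of two authentication modalities
# (e.g., a gaze embedding and a periocular embedding). All values are toy.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def fused_score(gaze_probe, gaze_enroll, peri_probe, peri_enroll, w=0.5):
    """Weighted sum of per-modality match scores (score-level fusion)."""
    s_gaze = cosine_similarity(gaze_probe, gaze_enroll)
    s_peri = cosine_similarity(peri_probe, peri_enroll)
    return w * s_gaze + (1.0 - w) * s_peri

def authenticate(score: float, threshold: float = 0.8) -> bool:
    """Accept the claimed identity if the fused score clears the threshold."""
    return score >= threshold
```

In practice the weight and threshold would be tuned on a development set to hit a target operating point (e.g., a fixed false accept rate).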
Related papers
- Person Recognition at Altitude and Range: Fusion of Face, Body Shape and Gait [70.00430652562012]
FarSight is an end-to-end system for person recognition that integrates biometric cues across face, gait, and body shape modalities. FarSight incorporates novel algorithms across four core modules: multi-subject detection and tracking, recognition-aware video restoration, modality-specific biometric feature encoding, and quality-guided multi-modal fusion.
arXiv Detail & Related papers (2025-05-07T17:58:25Z) - ECG Identity Authentication in Open-set with Multi-model Pretraining and Self-constraint Center & Irrelevant Sample Repulsion Learning [6.106335826823355]
We propose a robust ECG identity authentication system that maintains high performance even in open-set settings. Our method achieves 99.83% authentication accuracy and maintains a False Accept Rate as low as 5.39% in the presence of open-set samples.
arXiv Detail & Related papers (2025-04-25T12:18:51Z) - Multi-modal biometric authentication: Leveraging shared layer architectures for enhanced security [0.0]
We introduce a novel multi-modal biometric authentication system that integrates facial, vocal, and signature data to enhance security measures.
Our model architecture incorporates dual shared layers alongside modality-specific enhancements for comprehensive feature extraction.
Our approach demonstrates significant improvements in authentication accuracy and robustness, paving the way for advanced secure identity verification solutions.
arXiv Detail & Related papers (2024-11-04T14:27:10Z) - Straight Through Gumbel Softmax Estimator based Bimodal Neural Architecture Search for Audio-Visual Deepfake Detection [6.367999777464464]
Multimodal deepfake detectors often rely on conventional fusion methods, such as majority rule and ensemble voting.
In this paper, we introduce the Straight-through Gumbel-Softmax framework, offering a comprehensive approach to search multimodal fusion model architectures.
Experiments on the FakeAVCeleb and SWAN-DF datasets demonstrated an AUC of 94.4%, achieved with minimal model parameters.
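The straight-through Gumbel-Softmax trick referenced above makes a discrete choice (e.g., picking one fusion operator) usable inside gradient-based search. A minimal NumPy sketch of the mechanism follows; the candidate set and temperature are illustrative, not the paper's search space.

```python
# Illustrative sketch of the straight-through Gumbel-Softmax trick.
import numpy as np

def gumbel_softmax(logits, tau=1.0, rng=None):
    """Sample a relaxed (soft) one-hot vector from categorical logits."""
    if rng is None:
        rng = np.random.default_rng()
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0,1) noise
    y = np.exp((logits + g) / tau)
    return y / y.sum()

def straight_through(logits, tau=1.0, rng=None):
    """Return a hard one-hot selection plus the soft sample.

    In an autodiff framework the forward pass would use the hard vector
    while gradients flow through the soft one, e.g. in PyTorch:
    y = y_hard - y_soft.detach() + y_soft
    """
    y_soft = gumbel_softmax(np.asarray(logits, float), tau, rng)
    y_hard = np.zeros_like(y_soft)
    y_hard[np.argmax(y_soft)] = 1.0
    return y_hard, y_soft
```

Lowering `tau` sharpens the soft sample toward one-hot; architecture-search methods typically anneal it during training.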
arXiv Detail & Related papers (2024-06-19T09:26:22Z) - Enhancing person re-identification via Uncertainty Feature Fusion Method and Auto-weighted Measure Combination [1.183049138259841]
Person re-identification (Re-ID) is a challenging task that involves identifying the same person across different camera views in surveillance systems. In this paper, a new approach is introduced that enhances the capability of Re-ID models through the Uncertainty Feature Fusion Method (UFFM) and Auto-weighted Measure Combination (AMC). Our method significantly improves Rank@1 accuracy and Mean Average Precision (mAP) when evaluated on person re-identification datasets.
arXiv Detail & Related papers (2024-05-02T09:09:48Z) - Joint Multimodal Transformer for Emotion Recognition in the Wild [49.735299182004404]
Multimodal emotion recognition (MMER) systems typically outperform unimodal systems.
This paper proposes an MMER method that relies on a joint multimodal transformer (JMT) for fusion with key-based cross-attention.
arXiv Detail & Related papers (2024-03-15T17:23:38Z) - ViT Unified: Joint Fingerprint Recognition and Presentation Attack Detection [36.05807963935458]
We leverage a vision transformer architecture for joint spoof detection and matching.
We report competitive results with state-of-the-art (SOTA) models for both a sequential system and a unified architecture.
We demonstrate the capability of our unified model to achieve an average integrated matching (IM) accuracy of 98.87% across LivDet 2013 and 2015 CrossMatch sensors.
arXiv Detail & Related papers (2023-05-12T16:51:14Z) - Compact multi-scale periocular recognition using SAFE features [63.48764893706088]
We present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor.
We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties of the eye that concentrate around this point.
arXiv Detail & Related papers (2022-10-18T11:46:38Z) - Multi-Modal Human Authentication Using Silhouettes, Gait and RGB [59.46083527510924]
Whole-body-based human authentication is a promising approach for remote biometrics scenarios.
We propose Dual-Modal Ensemble (DME), which combines both RGB and silhouette data to achieve more robust performances for indoor and outdoor whole-body based recognition.
Within DME, we propose GaitPattern, which is inspired by the double helical gait pattern used in traditional gait analysis.
arXiv Detail & Related papers (2022-10-08T15:17:32Z) - Trusted Multi-View Classification with Dynamic Evidential Fusion [73.35990456162745]
We propose a novel multi-view classification algorithm, termed trusted multi-view classification (TMC).
TMC provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
Both theoretical and experimental results validate the effectiveness of the proposed model in accuracy, robustness and trustworthiness.
arXiv Detail & Related papers (2022-04-25T03:48:49Z) - Trusted Multi-View Classification [76.73585034192894]
We propose a novel multi-view classification method, termed trusted multi-view classification.
It provides a new paradigm for multi-view learning by dynamically integrating different views at an evidence level.
The proposed algorithm jointly utilizes multiple views to promote both classification reliability and robustness.
arXiv Detail & Related papers (2021-02-03T13:30:26Z) - On Benchmarking Iris Recognition within a Head-mounted Display for AR/VR Application [16.382021536377437]
We evaluate a set of iris recognition algorithms suitable for Head-Mounted Displays (HMD).
We employ and adapt a recently developed miniature segmentation model (EyeMMS) for segmenting the iris.
Motivated by the performance of iris recognition, we also propose the continuous authentication of users in a non-collaborative capture setting in HMD.
arXiv Detail & Related papers (2020-10-20T17:05:11Z)
This list is automatically generated from the titles and abstracts of the papers on this site.