Eye Know You: Metric Learning for End-to-end Biometric Authentication
Using Eye Movements from a Longitudinal Dataset
- URL: http://arxiv.org/abs/2104.10489v1
- Date: Wed, 21 Apr 2021 12:21:28 GMT
- Title: Eye Know You: Metric Learning for End-to-end Biometric Authentication
Using Eye Movements from a Longitudinal Dataset
- Authors: Dillon Lohr, Henry Griffith, and Oleg V Komogortsev
- Abstract summary: This paper presents a convolutional neural network for authenticating users using their eye movements.
The network is trained with an established metric learning loss function, multi-similarity loss.
We find that eye movements are quite resilient against template aging after 3 years.
- Score: 4.511561231517167
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: While numerous studies have explored eye movement biometrics since the
modality's inception in 2004, the permanence of eye movements remains largely
unexplored as most studies utilize datasets collected within a short time
frame. This paper presents a convolutional neural network for authenticating
users using their eye movements. The network is trained with an established
metric learning loss function, multi-similarity loss, which seeks to form a
well-clustered embedding space and directly enables the enrollment and
authentication of out-of-sample users. Performance measures are computed on
GazeBase, a task-diverse and publicly-available dataset collected over a
37-month period. This study includes an exhaustive analysis of the effects of
training on various tasks and downsampling from 1000 Hz to several lower
sampling rates. Our results reveal that reasonable authentication accuracy may
be achieved even during a low-cognitive-load task or at low sampling rates.
Moreover, we find that eye movements are quite resilient against template aging
after 3 years.
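The core mechanism described in the abstract can be sketched in plain NumPy: a multi-similarity loss (Wang et al., 2019) pulls same-user embeddings together and pushes different-user embeddings apart, so out-of-sample users can later be enrolled by averaging their embeddings and authenticated by cosine similarity. This is a minimal illustration, not the paper's implementation: the hyperparameter values, the `enroll`/`authenticate` helper names, and the 0.7 threshold are assumptions, and the pair-mining step of the original loss is omitted.

```python
import numpy as np

def multi_similarity_loss(embeddings, labels, alpha=2.0, beta=50.0, lam=0.5):
    """Multi-similarity loss over a batch of embeddings.
    alpha/beta/lam are illustrative defaults; pair mining is omitted."""
    emb = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = emb @ emb.T                              # cosine similarity matrix
    n = len(labels)
    losses = []
    for i in range(n):
        pos = (labels == labels[i]) & (np.arange(n) != i)   # same user, not self
        neg = labels != labels[i]                           # different users
        if not pos.any() or not neg.any():
            continue
        # Penalise positives far below lam and negatives above lam.
        pos_term = np.log1p(np.sum(np.exp(-alpha * (sim[i, pos] - lam)))) / alpha
        neg_term = np.log1p(np.sum(np.exp(beta * (sim[i, neg] - lam)))) / beta
        losses.append(pos_term + neg_term)
    return float(np.mean(losses))

def enroll(embeddings):
    """Enroll an out-of-sample user as the normalised mean of their embeddings."""
    template = embeddings.mean(axis=0)
    return template / np.linalg.norm(template)

def authenticate(template, probe, threshold=0.7):
    """Accept the probe if its cosine similarity to the template clears a
    (hypothetical) operating threshold."""
    probe = probe / np.linalg.norm(probe)
    return float(template @ probe) >= threshold
```

A well-clustered embedding space yields a lower loss than a scrambled one, which is exactly the property the network is trained to produce before enrollment and authentication are run on held-out users.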
Related papers
- EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing [2.9795443606634917]
EyeTrAES is a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement.
We show that EyeTrAES boosts pupil tracking fidelity by over 6%, achieving IoU = 92%, while incurring at least 3x lower latency than competing pure event-based eye-tracking alternatives.
For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics.
arXiv Detail & Related papers (2024-09-27T15:06:05Z) - Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z) - When Measures are Unreliable: Imperceptible Adversarial Perturbations
toward Top-$k$ Multi-Label Learning [83.8758881342346]
A novel loss function is devised to generate adversarial perturbations that could achieve both visual and measure imperceptibility.
Experiments on large-scale benchmark datasets demonstrate the superiority of our proposed method in attacking the top-$k$ multi-label systems.
arXiv Detail & Related papers (2023-07-27T13:18:47Z) - NEVIS'22: A Stream of 100 Tasks Sampled from 30 Years of Computer Vision
Research [96.53307645791179]
We introduce the Never-Ending VIsual-classification Stream (NEVIS'22), a benchmark consisting of a stream of over 100 visual classification tasks.
Despite being limited to classification, the resulting stream has a rich diversity of tasks from OCR, to texture analysis, scene recognition, and so forth.
Overall, NEVIS'22 poses an unprecedented challenge for current sequential learning approaches due to the scale and diversity of tasks.
arXiv Detail & Related papers (2022-11-15T18:57:46Z) - Measuring Human Perception to Improve Open Set Recognition [4.124573231232705]
Human ability to recognize when an object belongs or does not belong to a particular vision task outperforms all open set recognition algorithms.
Measured reaction time from human subjects can offer insight into whether a class sample is prone to be confused with a different class.
A new psychophysical loss function enforces consistency with human behavior in deep networks, which exhibit variable reaction times for different images.
arXiv Detail & Related papers (2022-09-08T01:19:36Z) - LifeLonger: A Benchmark for Continual Disease Classification [59.13735398630546]
We introduce LifeLonger, a benchmark for continual disease classification on the MedMNIST collection.
Task and class incremental learning of diseases address the issue of classifying new samples without re-training the models from scratch.
Cross-domain incremental learning addresses the issue of dealing with datasets originating from different institutions while retaining the previously obtained knowledge.
arXiv Detail & Related papers (2022-04-12T12:25:05Z) - CLRGaze: Contrastive Learning of Representations for Eye Movement
Signals [0.0]
We learn feature vectors of eye movements in a self-supervised manner.
We adopt a contrastive learning approach and propose a set of data transformations that encourage a deep neural network to discern salient and granular gaze patterns.
arXiv Detail & Related papers (2020-10-25T06:12:06Z) - Object Tracking through Residual and Dense LSTMs [67.98948222599849]
Deep learning-based trackers based on LSTMs (Long Short-Term Memory) recurrent neural networks have emerged as a powerful alternative.
DenseLSTMs outperform residual and regular LSTMs, and offer higher resilience to nuisances.
Our case study supports the adoption of residual-based RNNs for enhancing the robustness of other trackers.
arXiv Detail & Related papers (2020-06-22T08:20:17Z) - Omni-supervised Facial Expression Recognition via Distilled Data [120.11782405714234]
We propose omni-supervised learning to exploit reliable samples in a large amount of unlabeled data for network training.
To make training on the enlarged dataset tractable, we propose a dataset distillation strategy that compresses it into several informative class-wise images.
We experimentally verify that the new dataset can significantly improve the ability of the learned FER model.
arXiv Detail & Related papers (2020-05-18T09:36:51Z) - 3D Human Pose Estimation using Spatio-Temporal Networks with Explicit
Occlusion Training [40.933783830017035]
Estimating 3D poses from monocular video remains a challenging task, despite the significant progress made in recent years.
We introduce a spatio-temporal video network for robust 3D human pose estimation.
We apply multi-scale spatial features for 2D joint or keypoint prediction in each individual frame, and multi-stride temporal convolutional networks (TCNs) to estimate 3D joints or keypoints.
arXiv Detail & Related papers (2020-04-07T09:12:12Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences arising from its use.