Evaluating Sensitivity Parameters in Smartphone-Based Gaze Estimation: A Comparative Study of Appearance-Based and Infrared Eye Trackers
- URL: http://arxiv.org/abs/2506.11932v3
- Date: Sat, 21 Jun 2025 16:46:23 GMT
- Title: Evaluating Sensitivity Parameters in Smartphone-Based Gaze Estimation: A Comparative Study of Appearance-Based and Infrared Eye Trackers
- Authors: Nishan Gunawardena, Gough Yumu Lui, Bahman Javadi, Jeewani Anupama Ginige
- Abstract summary: This study evaluates a smartphone-based, deep-learning eye-tracking algorithm by comparing its performance against a commercial infrared-based eye tracker. The aim is to investigate the feasibility of appearance-based gaze estimation under realistic mobile usage conditions.
- Score: 2.9123921488295768
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: This study evaluates a smartphone-based, deep-learning eye-tracking algorithm by comparing its performance against a commercial infrared-based eye tracker, the Tobii Pro Nano. The aim is to investigate the feasibility of appearance-based gaze estimation under realistic mobile usage conditions. Key sensitivity factors, including age, gender, vision correction, lighting conditions, device type, and head position, were systematically analysed. The appearance-based algorithm integrates a lightweight convolutional neural network (MobileNet-V3) with a recurrent structure (Long Short-Term Memory) to predict gaze coordinates from grayscale facial images. Gaze data were collected from 51 participants using dynamic visual stimuli, and accuracy was measured using Euclidean distance. The deep learning model produced a mean error of 17.76 mm, compared to 16.53 mm for the Tobii Pro Nano. While overall accuracy differences were small, the deep learning-based method was more sensitive to factors such as lighting, vision correction, and age, with higher failure rates observed under low-light conditions among participants using glasses and in older age groups. Device-specific and positional factors also influenced tracking performance. These results highlight the potential of appearance-based approaches for mobile eye tracking and offer a reference framework for evaluating gaze estimation systems across varied usage conditions.
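The abstract reports accuracy as the mean Euclidean distance between predicted and true on-screen gaze points (17.76 mm for the deep learning model vs. 16.53 mm for the Tobii Pro Nano). A minimal sketch of that metric, assuming gaze points are given as (x, y) screen coordinates in millimetres (the paper's exact evaluation pipeline may differ):

```python
import math

def mean_gaze_error_mm(predicted, ground_truth):
    """Mean Euclidean distance (mm) between paired predicted and
    ground-truth gaze points, each a sequence of (x, y) tuples in mm.
    Illustrative sketch, not the authors' implementation."""
    errors = [math.dist(p, g) for p, g in zip(predicted, ground_truth)]
    return sum(errors) / len(errors)

# Toy example: two samples, each a 3-4-5 offset from the true point
pred = [(103.0, 54.0), (200.0, 120.0)]
true = [(100.0, 50.0), (203.0, 116.0)]
print(mean_gaze_error_mm(pred, true))  # 5.0
```

Reporting the error in millimetres rather than degrees of visual angle is screen-distance dependent, which is why the paper's head-position factor matters for comparisons.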
Related papers
- Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z) - Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality [2.2639735235640015]
This work provides an objective assessment of the impact of several contemporary machine learning (ML)-based methods for eye feature tracking.
Metrics include the accuracy and precision of the gaze estimate, as well as drop-out rate.
arXiv Detail & Related papers (2024-03-28T18:43:25Z) - PhyOT: Physics-informed object tracking in surveillance cameras [0.2633434651741688]
We consider the case of object tracking, and evaluate a hybrid model (PhyOT) that conceptualizes deep neural networks as "sensors".
Our experiments combine three neural networks, performing position, indirect velocity and acceleration estimation, respectively, and evaluate such a formulation on two benchmark datasets.
Results suggest that our PhyOT can track objects in extreme conditions where state-of-the-art deep neural networks fail.
arXiv Detail & Related papers (2023-12-14T04:15:55Z) - Remote Bio-Sensing: Open Source Benchmark Framework for Fair Evaluation of rPPG [2.82697733014759]
rPPG (remote photoplethysmography) is a technology that measures and analyzes BVP (Blood Volume Pulse) by using the light absorption characteristics of hemoglobin captured through a camera.
This study provides a framework to benchmark various rPPG techniques across a wide range of datasets for fair evaluation and comparison.
arXiv Detail & Related papers (2023-07-24T09:35:47Z) - Multimodal Adaptive Fusion of Face and Gait Features using Keyless attention based Deep Neural Networks for Human Identification [67.64124512185087]
Soft biometrics such as gait are widely used with face in surveillance tasks like person recognition and re-identification.
We propose a novel adaptive multi-biometric fusion strategy for the dynamic incorporation of gait and face biometric cues by leveraging keyless attention deep neural networks.
arXiv Detail & Related papers (2023-03-24T05:28:35Z) - ColorSense: A Study on Color Vision in Machine Visual Recognition [57.916512479603064]
We collect 110,000 non-trivial human annotations of foreground and background color labels from visual recognition benchmarks. We validate the use of our datasets by demonstrating that the level of color discrimination has a dominating effect on the performance of machine perception models. Our findings suggest that object recognition tasks such as classification and localization are susceptible to color vision bias.
arXiv Detail & Related papers (2022-12-16T18:51:41Z) - Near-infrared and visible-light periocular recognition with Gabor features using frequency-adaptive automatic eye detection [69.35569554213679]
Periocular recognition has gained attention recently due to demands for increased robustness of face or iris recognition in less controlled scenarios.
We present a new system for eye detection based on complex symmetry filters, which has the advantage of not needing training.
This system is used as input to a periocular algorithm based on retinotopic sampling grids and Gabor spectrum decomposition.
arXiv Detail & Related papers (2022-11-10T13:04:03Z) - An Efficient Point of Gaze Estimator for Low-Resolution Imaging Systems Using Extracted Ocular Features Based Neural Architecture [2.8728982844941187]
This paper introduces a neural network-based architecture to predict users' gaze at 9 positions displayed within an 11.31° visual range on the screen.
The eye-tracking system can be used by physically disabled individuals, and is best suited for those for whom eye movement is the only available means of communication.
arXiv Detail & Related papers (2021-06-09T14:35:55Z) - Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that the fusion of information from visual stimuli as well as eye images can lead towards achieving performance similar to literature-reported figures.
arXiv Detail & Related papers (2020-07-26T12:39:15Z) - MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems [0.0]
In this study, gaze error patterns produced by a commercial eye tracking device were studied with the help of machine learning algorithms.
It was seen that while the impact of the different error sources on gaze data characteristics was nearly impossible to distinguish by visual inspection or from data statistics, machine learning models successfully identified the impact of each error source and predicted the variability in gaze error levels due to these conditions.
arXiv Detail & Related papers (2020-05-07T23:07:02Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.