Eye Gaze Estimation Model Analysis
- URL: http://arxiv.org/abs/2207.14373v1
- Date: Thu, 28 Jul 2022 20:40:03 GMT
- Title: Eye Gaze Estimation Model Analysis
- Authors: Aveena Kottwani, Ayush Kumar
- Abstract summary: We discuss various model types for eye gaze estimation and present the results from predicting gaze direction using eye landmarks in unconstrained settings.
In unconstrained real-world settings, feature-based and model-based methods are outperformed by recent appearance-based methods due to factors like illumination changes and other visual artifacts.
- Score: 2.4366811507669124
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We explore techniques for eye gaze estimation using machine learning. Eye gaze estimation is a common problem in behavior analysis and human-computer interaction. The purpose of this work is to discuss various model types for eye gaze estimation and to present results from predicting gaze direction using eye landmarks in unconstrained settings. In unconstrained real-world settings, feature-based and model-based methods are outperformed by recent appearance-based methods due to factors such as illumination changes and other visual artifacts. We discuss a learning-based method for eye region landmark localization trained exclusively on synthetic data, show how the detected landmarks can serve as input to iterative model-fitting and lightweight learning-based gaze estimation methods, and describe how the model can be used for both person-independent and personalized gaze estimation.
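As a concrete illustration of the landmark-to-gaze stage described above, the following minimal sketch maps detected eye-region landmarks to gaze angles with a lightweight regressor, with a separate fit standing in for per-user calibration. The landmark count, feature normalization, SVR choice, and all data here are assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch of a lightweight learning-based gaze estimator driven by
# eye-region landmarks. Landmark count, normalization, and the SVR choice
# are assumptions for illustration; data below are random placeholders.
import numpy as np
from sklearn.multioutput import MultiOutputRegressor
from sklearn.svm import SVR

N_LANDMARKS = 18  # hypothetical: eyelid + iris landmarks per eye region

def landmarks_to_features(landmarks: np.ndarray) -> np.ndarray:
    """Turn (N_LANDMARKS, 2) pixel coordinates into a flat feature vector
    that is invariant to eye-region translation and scale."""
    center = landmarks.mean(axis=0)
    scale = np.linalg.norm(landmarks - center, axis=1).mean() + 1e-8
    return ((landmarks - center) / scale).ravel()

rng = np.random.default_rng(0)
X = np.stack([landmarks_to_features(rng.normal(size=(N_LANDMARKS, 2)))
              for _ in range(200)])          # placeholder landmark sets
y = rng.uniform(-0.5, 0.5, size=(200, 2))    # placeholder (pitch, yaw) labels

# Person-independent model: trained once on pooled multi-subject data.
generic = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X, y)

# Personalized model: refit on a handful of calibration samples from the
# target user (e.g., a 9-point calibration grid), as the abstract suggests.
personal = MultiOutputRegressor(SVR(kernel="rbf", C=1.0)).fit(X[:9], y[:9])

print(generic.predict(X[:1]), personal.predict(X[:1]))
```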
Related papers
- Zero-Shot Object-Centric Representation Learning [72.43369950684057]
We study current object-centric methods through the lens of zero-shot generalization.
We introduce a benchmark comprising eight different synthetic and real-world datasets.
We find that training on diverse real-world images improves transferability to unseen scenarios.
arXiv Detail & Related papers (2024-08-17T10:37:07Z)
- Automatic Discovery of Visual Circuits [66.99553804855931]
We explore scalable methods for extracting the subgraph of a vision model's computational graph that underlies recognition of a specific visual concept.
We find that our approach extracts circuits that causally affect model output, and that editing these circuits can defend large pretrained models from adversarial attacks.
arXiv Detail & Related papers (2024-04-22T17:00:57Z)
- Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality [2.2639735235640015]
This work provides an objective assessment of the impact of several contemporary machine learning (ML)-based methods for eye feature tracking.
Metrics include the accuracy and precision of the gaze estimate, as well as drop-out rate.
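For reference, a minimal sketch of how these three metrics are commonly defined in the eye-tracking literature (the paper's exact formulations may differ): accuracy as mean angular offset from the target, precision as RMS sample-to-sample angular spread, and drop-out as the fraction of invalid frames.

```python
# Common definitions of the metrics above; the paper's exact formulations
# may differ. Gaze samples are assumed to be unit 3D direction vectors.
import numpy as np

def angular_error_deg(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Angle in degrees between corresponding unit vectors, shape (N, 3)."""
    cos = np.clip(np.sum(a * b, axis=1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(est: np.ndarray, truth: np.ndarray) -> float:
    """Accuracy: mean angular offset between estimate and ground truth."""
    return float(np.mean(angular_error_deg(est, truth)))

def precision_deg(est: np.ndarray) -> float:
    """Precision: RMS of successive sample-to-sample angular differences
    within a fixation (spread, independent of any constant offset)."""
    diffs = angular_error_deg(est[:-1], est[1:])
    return float(np.sqrt(np.mean(diffs ** 2)))

def dropout_rate(valid: np.ndarray) -> float:
    """Drop-out rate: fraction of frames with no usable gaze estimate."""
    return float(1.0 - np.mean(valid))
```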
arXiv Detail & Related papers (2024-03-28T18:43:25Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
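The summary does not specify the overlap measure; one common way to realize the idea is to project both image sets through a shared PCA and compare the resulting distributions, e.g. with a Fréchet (Gaussian 2-Wasserstein) distance, as in this hedged sketch.

```python
# Illustrative (not the paper's) overlap measure between synthetic and
# real eye images: shared PCA projection + Frechet distance between
# Gaussian fits of the two projected sets. Lower distance = more overlap.
import numpy as np
from scipy.linalg import sqrtm
from sklearn.decomposition import PCA

def frechet_distance(a: np.ndarray, b: np.ndarray) -> float:
    mu_a, mu_b = a.mean(axis=0), b.mean(axis=0)
    cov_a, cov_b = np.cov(a, rowvar=False), np.cov(b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # sqrtm can return tiny imaginary parts
        covmean = covmean.real
    return float(np.sum((mu_a - mu_b) ** 2)
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

def sim2real_overlap(synthetic: np.ndarray, real: np.ndarray, dims: int = 32) -> float:
    """Inputs are flattened eye images, shape (N, H*W)."""
    pca = PCA(n_components=dims).fit(np.vstack([synthetic, real]))
    return frechet_distance(pca.transform(synthetic), pca.transform(real))
```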
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- TempSAL -- Uncovering Temporal Information for Deep Saliency Prediction [64.63645677568384]
We introduce a novel saliency prediction model that learns to output saliency maps in sequential time intervals.
Our approach locally modulates the saliency predictions by combining the learned temporal maps.
Our code will be publicly available on GitHub.
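A minimal sketch of the combination idea (weight learning and the network itself omitted; the paper's actual module differs): per-pixel softmax weights mix the T temporal maps, and the mix modulates an image-level map.

```python
# Sketch only: locally combine T temporal saliency maps with learned
# per-pixel weights, then blend with a global (image-level) map. The
# logits would come from a trained network, which is omitted here.
import numpy as np

def softmax(x: np.ndarray, axis: int = 0) -> np.ndarray:
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def combine_temporal_maps(temporal: np.ndarray, logits: np.ndarray,
                          global_map: np.ndarray, alpha: float = 0.5) -> np.ndarray:
    """temporal, logits: (T, H, W); global_map: (H, W)."""
    weights = softmax(logits, axis=0)                 # per-pixel weights over time
    temporal_mix = (weights * temporal).sum(axis=0)   # (H, W) modulated map
    return (1.0 - alpha) * global_map + alpha * temporal_mix
```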
arXiv Detail & Related papers (2023-01-05T22:10:16Z)
- NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation [37.977032771941715]
We propose a novel Head-Eye redirection parametric model based on Neural Radiance Field.
Our model can decouple the face and eyes for separate neural rendering.
This enables separate control over facial attributes such as identity, illumination, and eye gaze direction.
arXiv Detail & Related papers (2022-12-30T13:52:28Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- Bayesian Eye Tracking [63.21413628808946]
Model-based eye tracking is susceptible to eye feature detection errors.
We propose a Bayesian framework for model-based eye tracking.
Compared to state-of-the-art model-based and learning-based methods, the proposed framework demonstrates significant improvement in generalization capability.
arXiv Detail & Related papers (2021-06-25T02:08:03Z)
- Appearance-based Gaze Estimation With Deep Learning: A Review and Benchmark [14.306488668615883]
We present a systematic review of the appearance-based gaze estimation methods using deep learning.
We summarize the data pre-processing and post-processing methods, including face/eye detection, data rectification, 2D/3D gaze conversion and gaze origin conversion.
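Of these steps, the 2D/3D gaze conversion has a simple closed form; a sketch under one common convention follows (axis and sign conventions vary across datasets).

```python
# 2D/3D gaze conversion under one common convention (e.g. MPIIGaze-style);
# sign and axis conventions vary across datasets, so treat this as a sketch.
import numpy as np

def vector_to_pitchyaw(g: np.ndarray) -> np.ndarray:
    """3D gaze direction (x, y, z) -> (pitch, yaw) in radians."""
    g = g / np.linalg.norm(g)
    pitch = np.arcsin(-g[1])        # positive pitch: looking up
    yaw = np.arctan2(-g[0], -g[2])  # positive yaw: looking left
    return np.array([pitch, yaw])

def pitchyaw_to_vector(pitch: float, yaw: float) -> np.ndarray:
    """(pitch, yaw) in radians -> unit 3D gaze direction (x, y, z)."""
    return np.array([-np.cos(pitch) * np.sin(yaw),
                     -np.sin(pitch),
                     -np.cos(pitch) * np.cos(yaw)])
```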
arXiv Detail & Related papers (2021-04-26T15:53:03Z)
- MLGaze: Machine Learning-Based Analysis of Gaze Error Patterns in Consumer Eye Tracking Systems [0.0]
Gaze error patterns produced by a commercial eye tracking device were analyzed with the help of machine learning algorithms.
While the impact of the different error sources on gaze data characteristics was nearly impossible to distinguish by visual inspection or from data statistics, machine learning models successfully identified the impact of each error source and predicted the variability in gaze error levels under these conditions.
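The setup, as described, amounts to supervised learning on gaze-error features; a hedged sketch with hypothetical features and placeholder data follows (real features would carry the per-condition error signatures).

```python
# Hedged sketch of the described setup: classify which error-inducing
# condition produced a segment of gaze data from summary error features.
# Feature names, labels, and data are hypothetical placeholders, so the
# score below is chance-level; real features carry condition signatures.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
# One row per recording segment, e.g.
# [mean_error_deg, std_error_deg, x_bias_deg, y_bias_deg]
X = rng.normal(size=(300, 4))
y = rng.integers(0, 3, size=300)  # 0=head pose, 1=viewing distance, 2=platform

clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("condition-ID accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```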
arXiv Detail & Related papers (2020-05-07T23:07:02Z)
This list is automatically generated from the titles and abstracts of the papers on this site.