Towards Precision in Appearance-based Gaze Estimation in the Wild
- URL: http://arxiv.org/abs/2302.02353v1
- Date: Sun, 5 Feb 2023 10:09:35 GMT
- Title: Towards Precision in Appearance-based Gaze Estimation in the Wild
- Authors: Murthy L.R.D., Abhishek Mukhopadhyay, Shambhavi Aggarwal, Ketan Anand,
Pradipta Biswas
- Abstract summary: We present a large gaze estimation dataset, PARKS-Gaze, with wider head pose and illumination variation.
The proposed dataset is more challenging and enables models to generalize to unseen participants better than the existing in-the-wild datasets.
- Score: 3.4253416336476246
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Appearance-based gaze estimation systems have shown great progress recently,
yet the performance of these techniques depends on the datasets used for
training. Most of the existing gaze estimation datasets set up in interactive
settings were recorded under laboratory conditions, and those recorded in the
wild exhibit limited head pose and illumination variation. Further, precision
evaluation of existing gaze estimation approaches has received little attention
so far. In this work, we present a large gaze estimation dataset, PARKS-Gaze,
with wider head pose and illumination variation and with multiple samples for a
single Point of Gaze (PoG). The dataset contains 974 minutes of data from 28
participants with a head pose range of 60 degrees in both yaw and pitch
directions. Our within-dataset, cross-dataset, and precision evaluations
indicate that the proposed dataset is more challenging and enables models to
generalize to unseen participants better than existing in-the-wild datasets.
The project page can be accessed here:
https://github.com/lrdmurthy/PARKS-Gaze
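The abstract's distinction between accuracy and precision evaluation follows the usual eye-tracking convention: accuracy is the mean angular error against ground truth, while precision is the spread of repeated estimates for the same Point of Gaze. The sketch below is a minimal illustration of these two metrics, assuming gaze is represented as 3D unit direction vectors and that predictions can be grouped by PoG; the function names and the RMS-spread definition of precision are illustrative choices, not code from the paper.

```python
# Minimal sketch (not the authors' code): accuracy vs. precision for gaze
# estimation when several samples share one Point of Gaze (PoG).
# Assumes gaze is given as 3D direction vectors; names are illustrative.
import numpy as np

def angular_error_deg(pred, gt):
    """Angle in degrees between predicted and ground-truth gaze directions."""
    pred = np.asarray(pred, dtype=float)
    gt = np.asarray(gt, dtype=float)
    pred = pred / np.linalg.norm(pred, axis=-1, keepdims=True)
    gt = gt / np.linalg.norm(gt, axis=-1, keepdims=True)
    cos = np.clip(np.sum(pred * gt, axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos))

def accuracy_deg(preds, gts):
    """Accuracy: mean angular error over all samples."""
    return float(np.mean(angular_error_deg(preds, gts)))

def precision_deg(preds_per_pog):
    """Precision: spread of repeated estimates for the same PoG, taken here
    as the RMS angular deviation from the mean predicted direction."""
    spreads = []
    for preds in preds_per_pog:                  # preds: (N, 3) for one PoG
        preds = np.asarray(preds, dtype=float)
        preds = preds / np.linalg.norm(preds, axis=-1, keepdims=True)
        mean_dir = preds.mean(axis=0)
        mean_dir = mean_dir / np.linalg.norm(mean_dir)
        dev = angular_error_deg(preds, mean_dir)
        spreads.append(np.sqrt(np.mean(dev ** 2)))
    return float(np.mean(spreads))
```

With a dataset like PARKS-Gaze that provides multiple samples per PoG, `precision_deg` would receive a model's predictions grouped by target point, while `accuracy_deg` compares all predictions against the recorded ground truth.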
Related papers
- 3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views [67.00931529296788]
We propose to train general gaze estimation models which can be directly employed in novel environments without adaptation.
We create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene.
We test our method on the task of gaze generalization, where we demonstrate an improvement of up to 30% over the state of the art when no ground-truth data are available.
arXiv Detail & Related papers (2022-12-06T14:15:17Z)
- A Multi-purpose Real Haze Benchmark with Quantifiable Haze Levels and Ground Truth [61.90504318229845]
This paper introduces the first paired real image benchmark dataset with hazy and haze-free images, and in-situ haze density measurements.
This dataset was produced in a controlled environment with professional smoke generating machines that covered the entire scene.
A subset of this dataset has been used for the Object Detection in Haze Track of CVPR UG2 2022 challenge.
arXiv Detail & Related papers (2022-06-13T19:14:06Z)
- 360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales [26.36068336169795]
We develop a model that mimics humans' ability to estimate gaze by aggregating information from focused looks.
The model avoids the need to extract clear eye patches.
We extend the model to handle the challenging task of 360-degree gaze estimation.
arXiv Detail & Related papers (2020-09-15T08:45:12Z)
- ETH-XGaze: A Large Scale Dataset for Gaze Estimation under Extreme Head Pose and Gaze Variation [52.5465548207648]
ETH-XGaze is a new gaze estimation dataset consisting of over one million high-resolution images of varying gaze under extreme head poses.
We show that our dataset can significantly improve the robustness of gaze estimation methods across different head poses and gaze angles.
arXiv Detail & Related papers (2020-07-31T04:15:53Z)
- Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that the fusion of information from visual stimuli as well as eye images can lead towards achieving performance similar to literature-reported figures.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)
- Speak2Label: Using Domain Knowledge for Creating a Large Scale Driver Gaze Zone Estimation Dataset [55.391532084304494]
The Driver Gaze in the Wild dataset contains 586 recordings, captured at different times of the day including evenings.
It covers 338 subjects with an age range of 18-63 years.
arXiv Detail & Related papers (2020-04-13T14:47:34Z)
- Learning to Detect Head Movement in Unconstrained Remote Gaze Estimation in the Wild [19.829721663742124]
We propose end-to-end appearance-based gaze estimation methods that more robustly incorporate different levels of head-pose representation into gaze estimation.
Our method can generalize to real-world scenarios with low image quality, varying lighting, and settings where direct head-pose information is not available.
arXiv Detail & Related papers (2020-04-07T22:38:49Z)
- Towards High Performance Low Complexity Calibration in Appearance Based Gaze Estimation [7.857571508499849]
Appearance-based gaze estimation from RGB images provides relatively unconstrained gaze tracking.
We analyze the effect of the number of gaze targets, the number of images used per gaze target and the number of head positions in calibration data.
Using only a single gaze target and single head position is sufficient to achieve high quality calibration, outperforming state-of-the-art methods by more than 6.3%.
arXiv Detail & Related papers (2020-01-25T09:30:06Z)
- GraspNet: A Large-Scale Clustered and Densely Annotated Dataset for Object Grasping [49.777649953381676]
We contribute a large-scale grasp pose detection dataset with a unified evaluation system.
Our dataset contains 87,040 RGBD images with over 370 million grasp poses.
arXiv Detail & Related papers (2019-12-31T18:15:11Z)