EFE: End-to-end Frame-to-Gaze Estimation
- URL: http://arxiv.org/abs/2305.05526v1
- Date: Tue, 9 May 2023 15:25:45 GMT
- Authors: Haldun Balim, Seonwook Park, Xi Wang, Xucong Zhang, Otmar Hilliges
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite the recent development of learning-based gaze estimation
methods, most require one or more eye or face region crops as input and
produce a gaze direction vector as output. Cropping yields a higher effective
resolution in the eye regions and removes confounding factors (such as
clothing and hair), which is believed to benefit the final model performance.
However, this eye/face patch cropping process is expensive, error-prone, and
implementation-specific across methods. In this paper, we propose a
frame-to-gaze network that directly predicts both the 3D gaze origin and the
3D gaze direction from the raw camera frame, without any face or eye cropping.
Our method demonstrates that direct gaze regression from the raw frame,
downscaled from FHD/HD to VGA/HVGA resolution, is possible despite the
challenge of having very few pixels in the eye region. The proposed method
achieves results comparable to state-of-the-art methods in Point-of-Gaze (PoG)
estimation on three public gaze datasets: GazeCapture, MPIIFaceGaze, and EVE,
and generalizes well to extreme camera view changes.
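
Since the abstract frames the output as a 3D gaze origin plus a 3D gaze
direction evaluated as Point-of-Gaze, a short geometric sketch may help. The
following is a minimal illustration, not the authors' implementation: the PoG
is recovered by intersecting the predicted gaze ray with the screen plane, and
the function name, plane parameterization, and example numbers are assumptions.

```python
import numpy as np

def point_of_gaze(origin, direction, plane_point, plane_normal):
    """Intersect a gaze ray with the screen plane.

    origin, direction: predicted 3D gaze origin and gaze direction in the
    camera coordinate system (as a frame-to-gaze model would output).
    plane_point, plane_normal: a point on the screen plane and its normal,
    assumed known from a camera-to-screen calibration.
    Returns the 3D Point-of-Gaze on the screen plane.
    """
    direction = direction / np.linalg.norm(direction)
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-8:
        raise ValueError("gaze ray is (near-)parallel to the screen plane")
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction

# Hypothetical example: screen plane at z = 0 in camera coordinates.
pog = point_of_gaze(
    origin=np.array([0.0, -0.05, 0.45]),    # e.g., a point between the eyes (m)
    direction=np.array([0.05, 0.1, -1.0]),  # predicted gaze direction
    plane_point=np.zeros(3),
    plane_normal=np.array([0.0, 0.0, 1.0]),
)
```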
Related papers
- Model-aware 3D Eye Gaze from Weak and Few-shot Supervisions
We propose to predict 3D eye gaze from weak supervision of eye semantic segmentation masks and direct supervision of a few 3D gaze vectors.
Our experiments in diverse settings illustrate the significant benefits of the proposed method, achieving about 5 degrees lower angular gaze error than the baseline.
arXiv Detail & Related papers (2023-11-20T20:22:55Z)
- Semi-Synthetic Dataset Augmentation for Application-Specific Gaze Estimation
We show how to generate a three-dimensional mesh of the face and render the training images from a virtual camera at a position and orientation specific to the target application.
This leads to an average 47% decrease in gaze estimation angular error.
arXiv Detail & Related papers (2023-10-27T20:27:22Z)
- Accurate Gaze Estimation using an Active-gaze Morphable Model
Rather than regressing gaze direction directly from images, we show that adding a 3D shape model can improve gaze estimation accuracy.
We equip this with a geometric vergence model of gaze to give an 'active-gaze 3DMM' (see the sketch after this entry).
Our method can learn with only the ground truth gaze target point and the camera parameters, without access to the ground truth gaze origin points.
arXiv Detail & Related papers (2023-01-30T18:51:14Z)
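
The geometric vergence idea in the entry above admits a toy illustration: if
both eyes fixate the same 3D target, each eye's gaze direction is fully
determined by its own origin and that target. A minimal sketch under that
assumption follows; it is not the paper's 3DMM implementation, and all names
and numbers are illustrative.

```python
import numpy as np

def vergence_directions(left_eye, right_eye, target):
    """Toy vergence model: both eyes' gaze rays converge at one 3D target,
    so each direction is (target - eye origin), normalized."""
    unit = lambda v: v / np.linalg.norm(v)
    return unit(target - left_eye), unit(target - right_eye)

# Hypothetical example: eyes ~6.3 cm apart, fixating a point 50 cm away.
left = np.array([-0.0315, 0.0, 0.0])
right = np.array([0.0315, 0.0, 0.0])
g_left, g_right = vergence_directions(left, right, np.array([0.1, -0.05, 0.5]))
```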
- NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation
We propose a novel Head-Eye redirection parametric model based on Neural Radiance Fields.
Our model can decouple the face and eyes for separate neural rendering.
This enables separate control over face identity, illumination, and eye gaze direction.
arXiv Detail & Related papers (2022-12-30T13:52:28Z)
- GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z)
- 3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views
We propose to train general gaze estimation models which can be directly employed in novel environments without adaptation.
We create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene.
We test our method on the task of gaze generalization, where we demonstrate an improvement of up to 30% over the state of the art when no ground-truth data are available.
arXiv Detail & Related papers (2022-12-06T14:15:17Z)
- Active Gaze Control for Foveal Scene Exploration
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- GazeOnce: Real-Time Multi-Person Gaze Estimation
Appearance-based gaze estimation aims to predict the 3D eye gaze direction from a single image.
Recent deep learning-based approaches have demonstrated excellent performance, but cannot output multi-person gaze in real time.
We propose GazeOnce, which is capable of simultaneously predicting gaze directions for multiple faces in an image.
arXiv Detail & Related papers (2022-04-20T14:21:47Z)
- 360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales
We develop a model that mimics the human ability to estimate gaze by aggregating information from multiple focused looks.
The model avoids the need to extract clear eye patches.
We extend the model to handle the challenging task of 360-degree gaze estimation.
arXiv Detail & Related papers (2020-09-15T08:45:12Z)
- It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation
We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied to the feature maps (see the sketch after this entry).
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
arXiv Detail & Related papers (2016-11-27T15:00:10Z)
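
The spatial-weights mechanism in the entry above can be sketched compactly.
Below is a minimal PyTorch illustration in which a single 1x1 convolution
predicts one weight per spatial location and rescales the feature map; this
simplified layer and its sizes are assumptions, not the paper's exact
architecture.

```python
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Learn a per-location weight map and apply it to CNN feature maps,
    letting the network emphasize informative face regions."""

    def __init__(self, channels: int):
        super().__init__()
        # 1x1 conv collapses channels into a single spatial weight map.
        self.weight_conv = nn.Conv2d(channels, 1, kernel_size=1)

    def forward(self, features: torch.Tensor) -> torch.Tensor:
        # features: (batch, channels, height, width)
        weights = torch.relu(self.weight_conv(features))  # (batch, 1, H, W)
        return features * weights  # weights broadcast across channels

# Usage on a dummy feature map from a face-encoding backbone:
feats = torch.randn(2, 256, 13, 13)
weighted = SpatialWeights(256)(feats)
```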
This list is automatically generated from the titles and abstracts of the papers on this site.