NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation
- URL: http://arxiv.org/abs/2212.14710v1
- Date: Fri, 30 Dec 2022 13:52:28 GMT
- Title: NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation
- Authors: Pengwei Yin, Jiawu Dai, Jingjing Wang, Di Xie and Shiliang Pu
- Abstract summary: We propose a novel Head-Eye redirection parametric model based on Neural Radiance Field.
Our model can decouple the face and eyes for separate neural rendering.
It enables separate control over facial identity, illumination, and eye gaze direction.
- Score: 37.977032771941715
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gaze estimation is the fundamental basis for many visual tasks. Yet, the high
cost of acquiring gaze datasets with 3D annotations hinders the optimization
and application of gaze estimation models. In this work, we propose a novel
Head-Eye redirection parametric model based on Neural Radiance Field, which
allows dense gaze data generation with view consistency and accurate gaze
direction. Moreover, our head-eye redirection parametric model decouples the
face and eyes for separate neural rendering, enabling independent control over
facial identity, illumination, and eye gaze direction. Diverse 3D-aware gaze
datasets can thus be obtained by manipulating the latent codes corresponding to
different face attributes in an unsupervised manner. Extensive experiments on
several benchmarks demonstrate
the effectiveness of our method in domain generalization and domain adaptation
for gaze estimation tasks.
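As a rough illustration of how such a decoupled, latent-conditioned radiance field can be organized, here is a toy PyTorch sketch (no positional encoding, view directions, or volume rendering; the class name, latent sizes, and the `eye_mask` compositing are assumptions for exposition, not the paper's architecture):

```python
import torch
import torch.nn as nn

class ConditionalNeRF(nn.Module):
    """Toy conditional radiance field: two MLP branches (face, eyes),
    each conditioned on its own latent codes, composited by a mask."""
    def __init__(self, d_id=32, d_illum=16, d_gaze=4):
        super().__init__()
        self.face = nn.Sequential(
            nn.Linear(3 + d_id + d_illum, 128), nn.ReLU(),
            nn.Linear(128, 4),   # RGB + density
        )
        self.eyes = nn.Sequential(
            nn.Linear(3 + d_id + d_gaze, 128), nn.ReLU(),
            nn.Linear(128, 4),
        )

    def forward(self, x, z_id, z_illum, z_gaze, eye_mask):
        # x: (N, 3) sample points; latent codes broadcast to every point
        n = x.shape[0]
        face_out = self.face(torch.cat([x, z_id.expand(n, -1),
                                        z_illum.expand(n, -1)], -1))
        eye_out = self.eyes(torch.cat([x, z_id.expand(n, -1),
                                       z_gaze.expand(n, -1)], -1))
        # eye_mask selects which branch renders each sample point
        return torch.where(eye_mask.unsqueeze(-1), eye_out, face_out)

model = ConditionalNeRF()
pts = torch.rand(1024, 3)
mask = torch.zeros(1024, dtype=torch.bool)
z_id, z_illum = torch.randn(1, 32), torch.randn(1, 16)
for pitch in torch.linspace(-0.5, 0.5, 5):     # sweep the gaze code only
    z_gaze = torch.tensor([[pitch.item(), 0.0, 0.0, 0.0]])
    rgb_sigma = model(pts, z_id, z_illum, z_gaze, mask)
```

Sweeping only the gaze code while the identity and illumination codes stay frozen is what yields dense, view-consistent gaze samples for a single subject.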
Related papers
- Accurate Gaze Estimation using an Active-gaze Morphable Model [9.192482716410511]
Rather than regressing gaze direction directly from images, we show that adding a 3D shape model can improve gaze estimation accuracy.
We equip this with a geometric vergence model of gaze to give an 'active-gaze 3DMM'.
Our method can learn with only the ground truth gaze target point and the camera parameters, without access to the ground truth gaze origin points.
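A minimal numpy sketch of the vergence idea, assuming the two eyeball rotation centers come from the fitted 3D shape model and that both visual axes converge on a known 3D target (the function and coordinate choices are illustrative):

```python
import numpy as np

def vergence_gaze(left_center, right_center, target):
    """Both visual axes must converge on the fixation target, so each
    per-eye gaze direction is the unit vector toward it; the eye centers
    (gaze origins) come from the fitted shape model, which is why no
    ground-truth gaze origin is needed."""
    g_l = (target - left_center) / np.linalg.norm(target - left_center)
    g_r = (target - right_center) / np.linalg.norm(target - right_center)
    mid = 0.5 * (left_center + right_center)            # cyclopean origin
    g_c = (target - mid) / np.linalg.norm(target - mid)
    return g_l, g_r, g_c

# Eyes ~6 cm apart fixating a screen point half a metre away
g_l, g_r, g_c = vergence_gaze(np.array([-0.03, 0.0, 0.0]),
                              np.array([ 0.03, 0.0, 0.0]),
                              np.array([ 0.10, 0.05, 0.50]))
```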
arXiv Detail & Related papers (2023-01-30T18:51:14Z)
- 3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views [67.00931529296788]
We propose to train general gaze estimation models which can be directly employed in novel environments without adaptation.
We create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene.
We test our method in the task of gaze generalization, in which we demonstrate improvement of up to 30% compared to state-of-the-art when no ground truth data are available.
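One plausible way to derive such geometry-based pseudo-annotations (a hedged sketch; the paper's actual pipeline may differ) is to take the optical axis of a reconstructed 3D eyeball, running from the eyeball center through the iris center:

```python
import numpy as np

def gaze_pseudo_label(eye_center, iris_center):
    """Pseudo gaze direction from reconstructed 3D eye geometry: the
    optical axis runs from the eyeball center through the iris center.
    Angle conventions assume a camera frame with y down, z forward."""
    g = iris_center - eye_center
    g /= np.linalg.norm(g)
    pitch = np.arcsin(-g[1])
    yaw = np.arctan2(-g[0], -g[2])
    return g, np.array([pitch, yaw])

vec, angles = gaze_pseudo_label(np.array([0.0, 0.0, 0.60]),
                                np.array([0.002, -0.004, 0.588]))
```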
arXiv Detail & Related papers (2022-12-06T14:15:17Z)
- Free-HeadGAN: Neural Talking Head Synthesis with Explicit Gaze Control [54.079327030892244]
Free-HeadGAN is a person-generic neural talking head synthesis system.
We show that modeling faces with sparse 3D facial landmarks is sufficient for achieving state-of-the-art generative performance.
arXiv Detail & Related papers (2022-08-03T16:46:08Z)
- Eye Gaze Estimation Model Analysis [2.4366811507669124]
We discuss various model types for eye gaze estimation and present the results from predicting gaze direction using eye landmarks in unconstrained settings.
In unconstrained real-world settings, feature-based and model-based methods are outperformed by recent appearance-based methods due to factors like illumination changes and other visual artifacts.
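For contrast, a minimal sketch of such a feature-based regressor (landmark coordinates in, gaze angles out); the architecture and sizes are illustrative:

```python
import torch
import torch.nn as nn

class LandmarkGazeMLP(nn.Module):
    """Toy feature-based regressor: 2D eye-landmark coordinates -> gaze."""
    def __init__(self, n_landmarks=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_landmarks * 2, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 2),   # (pitch, yaw) in radians
        )

    def forward(self, landmarks):            # (B, n_landmarks, 2)
        x = landmarks.flatten(1)
        x = x - x.mean(dim=1, keepdim=True)  # crude translation normalization
        return self.net(x)

model = LandmarkGazeMLP()
gaze = model(torch.rand(8, 12, 2))           # -> (8, 2)
```

Appearance-based methods instead feed the raw eye or face image into a CNN, which lets them absorb the illumination changes and visual artifacts that corrupt landmark detection.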
arXiv Detail & Related papers (2022-07-28T20:40:03Z)
- Learning-by-Novel-View-Synthesis for Full-Face Appearance-based 3D Gaze Estimation [8.929311633814411]
This work examines a novel approach for synthesizing gaze estimation training data based on monocular 3D face reconstruction.
Unlike prior works using multi-view reconstruction, photo-realistic CG models, or generative neural networks, our approach can manipulate and extend the head pose range of existing training data.
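A sketch of the label bookkeeping this kind of synthesis requires, in numpy with an assumed vertex array from the monocular reconstruction: whatever rotation re-poses the face mesh must also be applied to the gaze vector so the label stays valid.

```python
import numpy as np

def rotate_sample(vertices, gaze, yaw_deg):
    """Re-pose a reconstructed face by a yaw rotation and keep the gaze
    label consistent: gaze is a 3D vector in the same frame as the mesh,
    so it transforms with the identical rotation matrix."""
    t = np.deg2rad(yaw_deg)
    R = np.array([[ np.cos(t), 0.0, np.sin(t)],
                  [ 0.0,       1.0, 0.0      ],
                  [-np.sin(t), 0.0, np.cos(t)]])
    return vertices @ R.T, R @ gaze

verts = np.random.rand(5023, 3)   # e.g. a FLAME-sized vertex set
new_verts, new_gaze = rotate_sample(verts, np.array([0.0, 0.0, -1.0]), 15.0)
```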
arXiv Detail & Related papers (2022-01-20T00:29:45Z)
- Learning Generative Vision Transformer with Energy-Based Latent Space for Saliency Prediction [51.80191416661064]
We propose a novel vision transformer with latent variables following an informative energy-based prior for salient object detection.
Both the vision transformer network and the energy-based prior model are jointly trained via Markov chain Monte Carlo-based maximum likelihood estimation.
With the generative vision transformer, we can easily obtain a pixel-wise uncertainty map from an image, which indicates the model confidence in predicting saliency from the image.
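A hedged sketch of the sampling step such training typically relies on: short-run Langevin dynamics on the latent energy (the quadratic toy energy is illustrative):

```python
import torch

def langevin_sample(energy_fn, z, n_steps=20, step_size=0.1):
    """Short-run Langevin dynamics: draw samples from an energy-based
    prior p(z) proportional to exp(-E(z)) by noisy gradient descent on E."""
    z = z.clone().requires_grad_(True)
    for _ in range(n_steps):
        e = energy_fn(z).sum()
        grad, = torch.autograd.grad(e, z)
        with torch.no_grad():
            z = z - 0.5 * step_size ** 2 * grad \
                  + step_size * torch.randn_like(z)
        z.requires_grad_(True)
    return z.detach()

# Toy quadratic energy: samples should concentrate near the origin
z0 = torch.randn(16, 8) * 3.0
z = langevin_sample(lambda z: 0.5 * (z ** 2).sum(dim=1), z0)
```

Decoding several latent samples per image and measuring the variance of the resulting saliency maps is one way a pixel-wise uncertainty map of the kind described here can be produced.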
arXiv Detail & Related papers (2021-12-27T06:04:33Z)
- Bayesian Eye Tracking [63.21413628808946]
Model-based eye tracking is susceptible to eye feature detection errors.
We propose a Bayesian framework for model-based eye tracking.
Compared to state-of-the-art model-based and learning-based methods, the proposed framework demonstrates significant improvement in generalization capability.
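A toy sketch of the general Bayesian idea (not the paper's exact formulation): fuse several noisy pupil-center candidates under a prior from the eye model, rather than trusting the single best detection.

```python
import numpy as np

def posterior_pupil(candidates, scores, prior_mean, prior_var, obs_var=4.0):
    """Weight every candidate by detector score times a Gaussian prior
    from the eye model, then take the posterior-mean estimate; a single
    bad detection no longer dominates the result."""
    d2 = ((candidates - prior_mean) ** 2).sum(axis=1)
    w = scores * np.exp(-0.5 * d2 / (prior_var + obs_var))
    w = w / w.sum()
    return (w[:, None] * candidates).sum(axis=0)

cands = np.array([[101.0, 52.0], [98.0, 50.0], [140.0, 80.0]])  # last = outlier
center = posterior_pupil(cands, np.array([0.6, 0.9, 0.8]),
                         prior_mean=np.array([100.0, 51.0]), prior_var=9.0)
```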
arXiv Detail & Related papers (2021-06-25T02:08:03Z)
- Self-Learning Transformations for Improving Gaze and Head Redirection [49.61091281780071]
We propose a novel generative model for images of faces, that is capable of producing high-quality images under fine-grained control over eye gaze and head orientation angles.
This requires disentangling many appearance-related factors, including not only gaze and head orientation but also lighting, hue, etc.
We show that explicitly disentangling task-irrelevant factors results in more accurate modelling of gaze and head orientation.
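A toy PyTorch sketch of redirection by latent sub-code swapping; the encoder, decoder, latent split, and sizes are placeholders, not the paper's model:

```python
import torch
import torch.nn as nn

# Toy encoder/decoder; latent = [gaze(2) | head(2) | appearance(28)].
enc = nn.Sequential(nn.Flatten(), nn.Linear(64 * 64 * 3, 32))
dec = nn.Sequential(nn.Linear(32, 64 * 64 * 3))

def redirect(img_src, img_tgt):
    """Swap only the gaze/head sub-codes from the target into the source
    latent, keeping appearance factors (lighting, hue, ...) untouched --
    the explicit disentanglement the paper argues for."""
    z_src, z_tgt = enc(img_src), enc(img_tgt)
    z_mix = torch.cat([z_tgt[:, :4], z_src[:, 4:]], dim=1)
    return dec(z_mix).view(-1, 3, 64, 64)

out = redirect(torch.rand(2, 3, 64, 64), torch.rand(2, 3, 64, 64))
```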
arXiv Detail & Related papers (2020-10-23T11:18:37Z)
- 360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales [26.36068336169795]
We develop a model that mimics the human ability to estimate gaze by aggregating information from multiple focused looks.
The model avoids the need to extract clear eye patches.
We extend the model to handle the challenging task of 360-degree gaze estimation.
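A rough PyTorch sketch of the multi-zoom aggregation idea; the backbone, crop scales, and the (sin, cos) yaw parameterization for the 360-degree range are illustrative choices:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiZoomGaze(nn.Module):
    """Toy multi-scale aggregation: run a shared backbone over crops at
    several zoom levels around the head and fuse the features. Predicting
    (sin, cos) of yaw keeps the output continuous over the full 360 deg."""
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(16 * 3, 2)   # -> (sin yaw, cos yaw)

    def forward(self, img):                # img: (B, 3, 128, 128)
        feats = []
        for zoom in (1.0, 0.75, 0.5):      # progressively tighter crops
            s = int(128 * zoom)
            crop = img[:, :, 64 - s // 2: 64 + s // 2,
                             64 - s // 2: 64 + s // 2]
            feats.append(self.backbone(F.interpolate(crop, size=128)))
        sc = self.head(torch.cat(feats, dim=1))
        return F.normalize(sc, dim=1)      # unit (sin, cos) pair

yaw_sincos = MultiZoomGaze()(torch.rand(4, 3, 128, 128))
```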
arXiv Detail & Related papers (2020-09-15T08:45:12Z)