It's Written All Over Your Face: Full-Face Appearance-Based Gaze
Estimation
- URL: http://arxiv.org/abs/1611.08860v4
- Date: Tue, 16 May 2023 10:00:49 GMT
- Title: It's Written All Over Your Face: Full-Face Appearance-Based Gaze
Estimation
- Authors: Xucong Zhang, Yusuke Sugano, Mario Fritz, Andreas Bulling
- Abstract summary: We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps.
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
- Score: 82.16380486281108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye gaze is an important non-verbal cue for human affect analysis. Recent
gaze estimation work indicated that information from the full face region can
benefit performance. Pushing this idea further, we propose an appearance-based
method that, in contrast to a long-standing line of work in computer vision,
only takes the full face image as input. Our method encodes the face image
using a convolutional neural network with spatial weights applied on the
feature maps to flexibly suppress or enhance information in different facial
regions. Through extensive evaluation, we show that our full-face method
significantly outperforms the state of the art for both 2D and 3D gaze
estimation, achieving improvements of up to 14.3% on MPIIGaze and 27.7% on
EYEDIAP for person-independent 3D gaze estimation. We further show that this
improvement is consistent across different illumination conditions and gaze
directions and particularly pronounced for the most challenging extreme head
poses.
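The spatial-weighting idea in the abstract can be made concrete with a short sketch: a small convolutional branch predicts a single-channel weight map from the face feature maps, and that map rescales every channel so informative facial regions are enhanced and uninformative ones suppressed. The layer sizes, the 1x1-convolution branch, and the final gaze regressor below are illustrative assumptions, not the authors' exact architecture.
```python
import torch
import torch.nn as nn

class SpatialWeights(nn.Module):
    """Predict an (H, W) weight map and apply it to all feature channels.

    Sketch of the spatial-weighting idea only; the 1x1-conv branch and
    channel sizes are assumptions, not the paper's exact architecture.
    """
    def __init__(self, in_channels: int, hidden: int = 256):
        super().__init__()
        self.branch = nn.Sequential(
            nn.Conv2d(in_channels, hidden, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, hidden, kernel_size=1), nn.ReLU(inplace=True),
            nn.Conv2d(hidden, 1, kernel_size=1), nn.ReLU(inplace=True),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        weights = self.branch(feats)   # (N, 1, H, W), non-negative
        return feats * weights         # broadcast over all channels

# Toy usage: weight the final conv features of a face encoder, then
# regress a 3D gaze direction from the pooled, re-weighted features.
feats = torch.randn(8, 512, 13, 13)                   # face-crop conv features
weighted = SpatialWeights(512)(feats)
gaze = nn.Linear(512, 3)(weighted.mean(dim=(2, 3)))   # (8, 3) gaze vectors
```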
Related papers
- Orientation-conditioned Facial Texture Mapping for Video-based Facial Remote Photoplethysmography Estimation [23.199005573530194]
We leverage the 3D facial surface to construct a novel orientation-conditioned video representation.
Our method achieves a significant 18.2% performance improvement in cross-dataset testing on MMPD, and improvements of up to 29.6% across all tested motion scenarios.
arXiv Detail & Related papers (2024-04-14T23:30:35Z)
- 3D Face Alignment Through Fusion of Head Pose Information and Features [0.6526824510982799]
We propose a novel method that employs head pose information to improve face alignment performance.
The proposed network structure performs robust face alignment through a dual-dimensional network.
We experimentally assess the correlation between the predicted facial landmarks and head pose information, as well as variations in facial landmark accuracy.
arXiv Detail & Related papers (2023-08-25T12:01:24Z)
- Introducing Explicit Gaze Constraints to Face Swapping [1.9386396954290932]
Face swapping combines one face's identity with another face's non-appearance attributes to generate a synthetic face.
Image-based loss metrics that consider the full face do not effectively capture the perceptually important, yet spatially small, eye regions.
We propose a novel loss function that leverages gaze prediction to inform the face swap model during training and compare it against existing methods (an illustrative sketch of such a gaze-consistency loss appears after this list).
arXiv Detail & Related papers (2023-05-25T15:12:08Z)
- 3DGazeNet: Generalizing Gaze Estimation with Weak-Supervision from Synthetic Views [67.00931529296788]
We propose to train general gaze estimation models which can be directly employed in novel environments without adaptation.
We create a large-scale dataset of diverse faces with gaze pseudo-annotations, which we extract based on the 3D geometry of the scene.
We test our method in the task of gaze generalization, in which we demonstrate improvements of up to 30% compared to the state of the art when no ground truth data are available.
arXiv Detail & Related papers (2022-12-06T14:15:17Z)
- Facial Expressions as a Vulnerability in Face Recognition [73.85525896663371]
This work explores facial expression bias as a security vulnerability of face recognition systems.
We present a comprehensive analysis of how facial expression bias impacts the performance of face recognition technologies.
arXiv Detail & Related papers (2020-11-17T18:12:41Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors that capture sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve face super-resolution results superior to the state of the art.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- DeepFaceFlow: In-the-wild Dense 3D Facial Motion Estimation [56.56575063461169]
DeepFaceFlow is a robust, fast, and highly accurate framework for the estimation of 3D non-rigid facial flow.
Our framework was trained and tested on two very large-scale facial video datasets.
Given registered pairs of images, our framework generates 3D flow maps at 60 fps.
arXiv Detail & Related papers (2020-05-14T23:56:48Z)
- Real-time Facial Expression Recognition "In The Wild" by Disentangling 3D Expression from Identity [6.974241731162878]
This paper proposes a novel method for human emotion recognition from a single RGB image.
We construct a large-scale dataset of facial videos, rich in facial dynamics, identities, expressions, appearance and 3D pose variations.
Our proposed framework runs at 50 frames per second and is capable of robustly estimating parameters of 3D expression variation.
arXiv Detail & Related papers (2020-05-12T01:32:55Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies (an illustrative sketch of a generic self-attention block appears after this list).
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
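Two of the mechanisms summarized above are concrete enough to sketch. First, the explicit gaze constraint for face swapping can be read as an auxiliary loss that compares gaze predicted on the swapped face against gaze predicted on the face whose attributes it should preserve. The frozen `gaze_net`, the cosine penalty, and the loss weighting below are assumptions for illustration, not that paper's exact formulation.
```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def gaze_consistency_loss(gaze_net: nn.Module,
                          swapped: torch.Tensor,
                          attribute_face: torch.Tensor) -> torch.Tensor:
    """Penalize gaze mismatch between the swapped face and the face whose
    non-identity attributes (including gaze) it should preserve.

    `gaze_net` is a hypothetical frozen, pretrained gaze estimator mapping
    a batch of face images to gaze-direction vectors; its weights should be
    frozen so gradients only update the face-swap model.
    """
    with torch.no_grad():
        gaze_ref = gaze_net(attribute_face)   # reference gaze, no gradients
    gaze_pred = gaze_net(swapped)             # gradients flow to swap model
    cos = F.cosine_similarity(gaze_pred, gaze_ref, dim=-1)
    return (1.0 - cos).mean()                 # 0 when gaze directions agree

# Assumed use inside a training step, weighted against the usual swap losses:
# total = recon_loss + identity_loss + lambda_gaze * gaze_consistency_loss(...)
```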
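Second, DA-GAN's self-attention-based generator suggests a standard feature-map self-attention block in which every spatial location attends to all others; the SAGAN-style formulation below is a common design and an assumption about that paper's exact block.
```python
import torch
import torch.nn as nn

class SelfAttention2d(nn.Module):
    """SAGAN-style self-attention over feature maps: each spatial location
    attends to all others, augmenting local features with long-range
    dependencies. Using this exact block is an assumption about DA-GAN."""
    def __init__(self, channels: int):
        super().__init__()
        self.query = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.key = nn.Conv2d(channels, channels // 8, kernel_size=1)
        self.value = nn.Conv2d(channels, channels, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend, starts local

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        n, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (N, HW, C//8)
        k = self.key(x).flatten(2)                     # (N, C//8, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (N, HW, HW), row-wise
        v = self.value(x).flatten(2)                   # (N, C, HW)
        out = (v @ attn.transpose(1, 2)).view(n, c, h, w)
        return self.gamma * out + x                    # residual connection
```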