UVAGaze: Unsupervised 1-to-2 Views Adaptation for Gaze Estimation
- URL: http://arxiv.org/abs/2312.15644v1
- Date: Mon, 25 Dec 2023 08:13:28 GMT
- Title: UVAGaze: Unsupervised 1-to-2 Views Adaptation for Gaze Estimation
- Authors: Ruicong Liu, Feng Lu
- Abstract summary: We propose a novel 1-view-to-2-views (1-to-2 views) adaptation solution for gaze estimation.
Our method adapts a traditional single-view gaze estimator for flexibly placed dual cameras.
Experiments show that a single-view estimator, when adapted for dual views, can achieve much higher accuracy, especially in cross-dataset settings.
- Score: 10.412375913640224
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Gaze estimation has become a subject of growing interest in recent research.
Most of the current methods rely on single-view facial images as input. Yet, it
is hard for these approaches to handle large head angles, leading to potential
inaccuracies in the estimation. To address this issue, adding a second-view
camera can help better capture eye appearance. However, existing multi-view
methods have two limitations. 1) They require multi-view annotations for
training, which are expensive. 2) More importantly, during testing, the exact
positions of the multiple cameras must be known and match those used in
training, which limits the application scenario. To address these challenges,
we propose a novel 1-view-to-2-views (1-to-2 views) adaptation solution in this
paper, the Unsupervised 1-to-2 Views Adaptation framework for Gaze estimation
(UVAGaze). Our method adapts a traditional single-view gaze estimator for
flexibly placed dual cameras. Here, "flexibly" means the dual cameras can be
placed anywhere, regardless of the camera positions used in training, and
without knowing their extrinsic parameters. Specifically, UVAGaze builds a dual-view mutual
supervision adaptation strategy, which takes advantage of the intrinsic
consistency of gaze directions between both views. In this way, our method can
not only benefit from common single-view pre-training, but also achieve more
advanced dual-view gaze estimation. The experimental results show that a
single-view estimator, when adapted for dual views, can achieve much higher
accuracy, especially in cross-dataset settings, with a substantial improvement
of 47.0%. Project page: https://github.com/MickeyLLG/UVAGaze.
Related papers
- Which Viewpoint Shows it Best? Language for Weakly Supervising View Selection in Multi-view Videos [66.1935609072708]
The key hypothesis is that the more accurately an individual view can predict a view-agnostic text summary, the more informative that view is.
We propose a framework that uses the relative accuracy of view-dependent caption predictions as a proxy for best view pseudo-labels.
During inference, our model takes as input only a multi-view video -- no language or camera poses -- and returns the best viewpoint to watch at each timestep.
arXiv Detail & Related papers (2024-11-13T16:31:08Z)
- Merging Multiple Datasets for Improved Appearance-Based Gaze Estimation [10.682719521609743]
The Two-stage Transformer-based Gaze-feature Fusion (TTGF) method uses transformers to merge information from each eye and the face separately, then merges across the two eyes.
Our proposed Gaze Adaptation Module (GAM) handles annotation inconsistency by applying a per-dataset adaptation module to correct gaze estimates from a single shared estimator.
arXiv Detail & Related papers (2024-09-02T02:51:40Z)
- POV: Prompt-Oriented View-Agnostic Learning for Egocentric Hand-Object Interaction in the Multi-View World [59.545114016224254]
Humans are good at translating third-person observations of hand-object interactions into an egocentric view.
We propose a Prompt-Oriented View-agnostic learning framework, which enables this view adaptation with only a few egocentric videos.
arXiv Detail & Related papers (2024-03-09T09:54:44Z)
- Single-to-Dual-View Adaptation for Egocentric 3D Hand Pose Estimation [16.95807780754898]
We propose a novel Single-to-Dual-view adaptation (S2DHand) solution that adapts a pre-trained single-view estimator to dual views.
S2DHand achieves significant improvements on arbitrary camera pairs under both in-dataset and cross-dataset settings.
arXiv Detail & Related papers (2024-03-07T10:14:23Z)
- DVGaze: Dual-View Gaze Estimation [13.3539097295729]
We propose DV-Gaze, a dual-view gaze estimation network.
DV-Gaze achieves state-of-the-art performance on ETH-XGaze and EVE datasets.
arXiv Detail & Related papers (2023-08-20T16:14:22Z)
- Two-level Data Augmentation for Calibrated Multi-view Detection [51.5746691103591]
We introduce a new multi-view data augmentation pipeline that preserves alignment among views.
We also propose a second level of augmentation applied directly at the scene level.
When combined with our simple multi-view detection model, our two-level augmentation pipeline outperforms all existing baselines.
arXiv Detail & Related papers (2022-10-19T17:55:13Z)
- Unsupervised High-Resolution Portrait Gaze Correction and Animation [81.19271523855554]
This paper proposes a gaze correction and animation method for high-resolution, unconstrained portrait images.
We first create two new portrait datasets: CelebGaze and high-resolution CelebHQGaze.
We formulate the gaze correction task as an image inpainting problem, addressed using a Gaze Correction Module and a Gaze Animation Module.
arXiv Detail & Related papers (2022-07-01T08:14:42Z)
- GazeOnce: Real-Time Multi-Person Gaze Estimation [18.16091280655655]
Appearance-based gaze estimation aims to predict the 3D eye gaze direction from a single image.
Recent deep learning-based approaches have demonstrated excellent performance, but cannot output multi-person gaze in real time.
We propose GazeOnce, which is capable of simultaneously predicting gaze directions for multiple faces in an image.
arXiv Detail & Related papers (2022-04-20T14:21:47Z)
- 360-Degree Gaze Estimation in the Wild Using Multiple Zoom Scales [26.36068336169795]
We develop a model that mimics humans' ability to estimate gaze by aggregating information from focused looks.
The model avoids the need to extract clear eye patches.
We extend the model to handle the challenging task of 360-degree gaze estimation.
arXiv Detail & Related papers (2020-09-15T08:45:12Z)
- Dual In-painting Model for Unsupervised Gaze Correction and Animation in the Wild [82.42401132933462]
We present a solution that works without the need for precise annotations of the gaze angle and the head pose.
Our method consists of three novel modules: the Gaze Correction module (GCM), the Gaze Animation module (GAM), and the Pretrained Autoencoder module (PAM).
arXiv Detail & Related papers (2020-08-09T23:14:16Z)
- Coarse-to-Fine Gaze Redirection with Numerical and Pictorial Guidance [74.27389895574422]
We propose a novel gaze redirection framework which exploits both a numerical and a pictorial direction guidance.
The proposed method outperforms the state-of-the-art approaches in terms of both image quality and redirection precision.
arXiv Detail & Related papers (2020-04-07T01:17:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.