LNSMM: Eye Gaze Estimation With Local Network Share Multiview Multitask
- URL: http://arxiv.org/abs/2101.07116v1
- Date: Mon, 18 Jan 2021 15:14:24 GMT
- Title: LNSMM: Eye Gaze Estimation With Local Network Share Multiview Multitask
- Authors: Yong Huang, Ben Chen, Daiming Qu
- Abstract summary: We propose a novel methodology to estimate eye gaze points and eye gaze directions simultaneously.
Experiments show that our method outperforms the current mainstream methods on the two indicators of gaze points and gaze directions.
- Score: 7.065909514483728
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye gaze estimation has become increasingly significant in computer vision. In this paper, we systematically study the mainstream eye gaze estimation methods and propose a novel methodology to estimate eye gaze points and eye gaze directions simultaneously. First, we construct a local sharing network for feature extraction in gaze point and gaze direction estimation, which reduces network parameters and converges quickly. Second, we propose a Multiview Multitask Learning (MTL) framework: for gaze directions, a coplanar constraint is proposed for the left and right eyes; for gaze points, three-view input indirectly introduces eye position information and a cross-view pooling module is designed; and a joint loss handles both gaze point and gaze direction estimation. Finally, we collect a gaze point dataset with three views, which existing public datasets lack. Experiments show that our method outperforms the current mainstream methods on the two indicators of gaze points and gaze directions.
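The abstract names a coplanar constraint for the left and right eyes, a cross-view pooling module, and a joint loss over gaze points and gaze directions, but gives no formulas. The PyTorch sketch below is only one plausible reading of those ideas: the scalar-triple-product form of the coplanarity penalty, the max-pooling fusion over views, and the loss weights are assumptions, not the paper's definitions.

```python
# Illustrative sketch only (PyTorch). The paper does not publish these exact
# formulas; the coplanarity penalty, cross-view pooling, and loss weights
# below are assumptions chosen to match the abstract's description.
import torch
import torch.nn as nn
import torch.nn.functional as F


def coplanarity_loss(left_dir, right_dir, eye_offset):
    """Penalize non-coplanar left/right gaze rays.

    If both gaze direction vectors and the left-to-right inter-ocular
    vector lie in one plane, their scalar triple product is zero.
    left_dir, right_dir, eye_offset: (B, 3) tensors.
    """
    l = F.normalize(left_dir, dim=-1)
    r = F.normalize(right_dir, dim=-1)
    o = F.normalize(eye_offset, dim=-1)
    triple = (torch.cross(l, r, dim=-1) * o).sum(dim=-1)
    return triple.abs().mean()


class CrossViewPooling(nn.Module):
    """Fuse per-view feature maps by element-wise max over the view axis."""

    def forward(self, view_feats):
        # view_feats: (B, V, C, H, W) with V = 3 views.
        return view_feats.max(dim=1).values


def joint_loss(pred_point, gt_point, pred_dirs, gt_dirs, eye_offset,
               w_point=1.0, w_dir=1.0, w_plane=0.1):
    """Weighted sum of gaze-point, gaze-direction, and coplanarity terms."""
    point_term = F.l1_loss(pred_point, gt_point)
    dir_term = (1.0 - F.cosine_similarity(pred_dirs["left"], gt_dirs["left"]).mean()
                + 1.0 - F.cosine_similarity(pred_dirs["right"], gt_dirs["right"]).mean())
    plane_term = coplanarity_loss(pred_dirs["left"], pred_dirs["right"], eye_offset)
    return w_point * point_term + w_dir * dir_term + w_plane * plane_term
```

A natural design choice in such a sketch is to keep the coplanarity weight small, since it acts as a geometric regularizer rather than a primary supervision signal; the paper does not report its actual weighting.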
Related papers
- Freeview Sketching: View-Aware Fine-Grained Sketch-Based Image Retrieval [85.73149096516543]
We address the choice of viewpoint during sketch creation in Fine-Grained Sketch-Based Image Retrieval (FG-SBIR).
A pilot study highlights the system's struggle when query-sketches differ in viewpoint from target instances.
To reconcile this, we advocate for a view-aware system, seamlessly accommodating both view-agnostic and view-specific tasks.
arXiv Detail & Related papers (2024-07-01T21:20:44Z)
- NeRF-Gaze: A Head-Eye Redirection Parametric Model for Gaze Estimation [37.977032771941715]
We propose a novel Head-Eye redirection parametric model based on Neural Radiance Field.
Our model can decouple the face and eyes for separate neural rendering.
It can separately control facial attributes such as identity, illumination, and eye gaze direction.
arXiv Detail & Related papers (2022-12-30T13:52:28Z)
- Active Gaze Control for Foveal Scene Exploration [124.11737060344052]
We propose a methodology to emulate how humans and robots with foveal cameras would explore a scene.
The proposed method achieves an increase in detection F1-score of 2-3 percentage points for the same number of gaze shifts.
arXiv Detail & Related papers (2022-08-24T14:59:28Z)
- RAZE: Region Guided Self-Supervised Gaze Representation Learning [5.919214040221055]
RAZE is a Region guided self-supervised gAZE representation learning framework that leverages non-annotated facial image data.
Ize-Net is a capsule-layer-based CNN architecture that can efficiently capture rich eye representations.
arXiv Detail & Related papers (2022-08-04T06:23:49Z)
- GFNet: Geometric Flow Network for 3D Point Cloud Semantic Segmentation [91.15865862160088]
We introduce a geometric flow network (GFNet) to explore the geometric correspondence between different views in an align-before-fuse manner.
Specifically, we devise a novel geometric flow module (GFM) to bidirectionally align and propagate the complementary information across different views.
arXiv Detail & Related papers (2022-07-06T11:48:08Z)
- Peripheral Vision Transformer [52.55309200601883]
We take a biologically inspired approach and explore modeling peripheral vision in deep neural networks for visual recognition.
We propose to incorporate peripheral position encoding into the multi-head self-attention layers to let the network learn to partition the visual field into diverse peripheral regions given training data.
We evaluate the proposed network, dubbed PerViT, on the large-scale ImageNet dataset and systematically investigate the inner workings of the model for machine perception.
arXiv Detail & Related papers (2022-06-14T12:47:47Z)
- Vis2Mesh: Efficient Mesh Reconstruction from Unstructured Point Clouds of Large Scenes with Learned Virtual View Visibility [17.929307870456416]
We present a novel framework for mesh reconstruction from unstructured point clouds.
We take advantage of the learned visibility of the 3D points in the virtual views and traditional graph-cut based mesh generation.
arXiv Detail & Related papers (2021-08-18T20:28:16Z)
- Bayesian Eye Tracking [63.21413628808946]
Model-based eye tracking is susceptible to eye feature detection errors.
We propose a Bayesian framework for model-based eye tracking.
Compared to state-of-the-art model-based and learning-based methods, the proposed framework demonstrates significant improvement in generalization capability.
arXiv Detail & Related papers (2021-06-25T02:08:03Z)
- Towards End-to-end Video-based Eye-Tracking [50.0630362419371]
Estimating eye-gaze from images alone is a challenging task due to unobservable person-specific factors.
We propose a novel dataset and accompanying method which aims to explicitly learn these semantic and temporal relationships.
We demonstrate that fusing information from visual stimuli and eye images can achieve performance similar to figures reported in the literature.
arXiv Detail & Related papers (2020-07-26T12:39:15Z)
- A Coarse-to-Fine Adaptive Network for Appearance-Based Gaze Estimation [24.8796573846653]
We propose a coarse-to-fine strategy that estimates a basic gaze direction from the face image and refines it with a residual predicted from the eye images (a minimal sketch of this idea follows the list).
We construct a coarse-to-fine adaptive network named CA-Net and achieve state-of-the-art performance on MPIIGaze and EyeDiap.
arXiv Detail & Related papers (2020-01-01T10:39:03Z)
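The CA-Net entry above states the coarse-to-fine idea only at a high level. The following hypothetical sketch illustrates that decomposition (a basic direction from the face, a residual from the eyes); the encoders, dimensions, and module names are placeholders, not the actual CA-Net architecture.

```python
# Hypothetical sketch of a coarse-to-fine gaze decomposition (not CA-Net itself):
# a face branch predicts a basic gaze direction and an eye branch predicts a
# residual correction that is added to it.
import torch
import torch.nn as nn


class CoarseToFineGaze(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        # Placeholder encoders; any CNN backbone could stand in here.
        self.face_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.eye_encoder = nn.Sequential(nn.Flatten(), nn.LazyLinear(feat_dim), nn.ReLU())
        self.coarse_head = nn.Linear(feat_dim, 2)       # basic gaze (yaw, pitch)
        self.refine_head = nn.Linear(feat_dim * 2, 2)   # residual correction

    def forward(self, face_img, eye_img):
        f_face = self.face_encoder(face_img)
        f_eye = self.eye_encoder(eye_img)
        basic = self.coarse_head(f_face)
        residual = self.refine_head(torch.cat([f_face, f_eye], dim=-1))
        return basic + residual  # refined gaze direction
```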