Accurate Eye Tracking from Dense 3D Surface Reconstructions using Single-Shot Deflectometry
- URL: http://arxiv.org/abs/2308.07298v3
- Date: Wed, 20 Nov 2024 01:20:02 GMT
- Title: Accurate Eye Tracking from Dense 3D Surface Reconstructions using Single-Shot Deflectometry
- Authors: Jiazhang Wang, Tianfu Wang, Bingjie Xu, Oliver Cossairt, Florian Willomitzer
- Abstract summary: We propose a novel method for accurate and fast evaluation of the gaze direction that exploits teachings from single-shot phase-measuring deflectometry (PMD).
Our method acquires dense 3D surface information of both the cornea and the sclera within a single camera frame (single-shot).
We show the feasibility of our approach with experimentally evaluated gaze errors below $0.12^\circ$ on a realistic model eye.
- Score: 13.297188931807586
- Abstract: Eye-tracking plays a crucial role in the development of virtual reality devices, neuroscience research, and psychology. Despite its significance in numerous applications, achieving an accurate, robust, and fast eye-tracking solution remains a considerable challenge for current state-of-the-art methods. While existing reflection-based techniques (e.g., "glint tracking") are considered to be very accurate, their performance is limited by their reliance on sparse 3D surface data acquired solely from the cornea surface. In this paper, we rethink how specular reflections can be used for eye tracking: We propose a novel method for accurate and fast evaluation of the gaze direction that exploits teachings from single-shot phase-measuring deflectometry (PMD). In contrast to state-of-the-art reflection-based methods, our method acquires dense 3D surface information of both the cornea and the sclera within a single camera frame (single-shot). For a typical measurement, we acquire $>3000 \times$ more surface reflection points ("glints") than conventional methods. We show the feasibility of our approach with experimentally evaluated gaze errors below $0.12^\circ$ on a realistic model eye. Moreover, we demonstrate quantitative measurements on real human eyes in vivo, reaching accuracies between $0.46^\circ$ and $0.97^\circ$.
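As a rough illustration of the geometry underlying such reflection-based ("glint") tracking, and not the authors' implementation: by the law of reflection, each observed glint yields a surface normal as the bisector of the directions toward the camera and toward the reflected screen point, and gaze accuracy is reported as the angle between estimated and ground-truth gaze vectors. A minimal stdlib-only sketch (function names are hypothetical):

```python
import math

def normalize(v):
    """Return v scaled to unit length."""
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def surface_normal(to_camera, to_source):
    """Law of reflection: the surface normal bisects the directions
    toward the camera and toward the reflected light source."""
    a, b = normalize(to_camera), normalize(to_source)
    return normalize(tuple(x + y for x, y in zip(a, b)))

def angular_error_deg(gaze_est, gaze_true):
    """Angle between two gaze direction vectors, in degrees."""
    d = sum(x * y for x, y in zip(normalize(gaze_est), normalize(gaze_true)))
    return math.degrees(math.acos(max(-1.0, min(1.0, d))))
```

Dense deflectometric measurements provide thousands of such normals per frame instead of a handful, which is what enables fitting the eye surface (and hence the optical axis) far more tightly.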
Related papers
- 3D Imaging of Complex Specular Surfaces by Fusing Polarimetric and Deflectometric Information [5.729076985389067]
We introduce a measurement principle that utilizes a novel technique to encode and decode the information contained in a light field reflected off a specular surface.
Our approach removes the unrealistic orthographic imaging assumption for SfP, which significantly improves the respective results.
We showcase our new technique by demonstrating single-shot and multi-shot measurements on complex-shaped specular surfaces.
arXiv Detail & Related papers (2024-06-04T06:24:07Z) - Low-cost Geometry-based Eye Gaze Detection using Facial Landmarks Generated through Deep Learning [0.0937465283958018]
We leverage novel face landmark detection neural networks to generate accurate and stable 3D landmarks of the face and iris.
Our approach demonstrates the ability to predict gaze with an angular error of less than 1.9 degrees, rivaling state-of-the-art systems.
arXiv Detail & Related papers (2023-12-31T05:45:22Z) - DeepMetricEye: Metric Depth Estimation in Periocular VR Imagery [4.940128337433944]
We propose a lightweight framework derived from the U-Net 3+ deep learning backbone to estimate measurable periocular depth maps.
Our method reconstructs three-dimensional periocular regions, providing a metric basis for related light stimulus calculation protocols and medical guidelines.
Evaluated on a sample of 36 participants, our method exhibited notable efficacy in the periocular global precision evaluation experiment and in pupil diameter measurement.
arXiv Detail & Related papers (2023-11-13T10:55:05Z) - Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z) - Neural Point-based Volumetric Avatar: Surface-guided Neural Points for Efficient and Photorealistic Volumetric Head Avatar [62.87222308616711]
We propose Neural Point-based Volumetric Avatar, a method that adopts the neural point representation and the neural volume rendering process.
Specifically, the neural points are strategically constrained around the surface of the target expression via a high-resolution UV displacement map.
By design, our method is better equipped to handle topologically changing regions and thin structures while also ensuring accurate expression control when animating avatars.
arXiv Detail & Related papers (2023-07-11T03:40:10Z) - Optimization-Based Eye Tracking using Deflectometric Information [14.010352335803873]
State-of-the-art eye tracking methods are either reflection-based, tracking reflections of sparse point light sources, or image-based, exploiting 2D features of the acquired eye image.
We develop a differentiable pipeline based on PyTorch3D that simulates a virtual eye under screen illumination.
In general, our method does not require a specific pattern rendering and can work with ordinary video frames of the main VR/AR/MR screen itself.
arXiv Detail & Related papers (2023-03-09T02:41:13Z) - MonoDistill: Learning Spatial Features for Monocular 3D Object Detection [80.74622486604886]
We propose a simple and effective scheme to introduce the spatial information from LiDAR signals to the monocular 3D detectors.
We use the resulting data to train a 3D detector with the same architecture as the baseline model.
Experimental results show that the proposed method can significantly boost the performance of the baseline model.
arXiv Detail & Related papers (2022-01-26T09:21:41Z) - Probabilistic and Geometric Depth: Detecting Objects in Perspective [78.00922683083776]
3D object detection is an important capability needed in various practical applications such as driver assistance systems.
Monocular 3D detection, as an economical solution compared to conventional settings relying on binocular vision or LiDAR, has drawn increasing attention recently but still yields unsatisfactory results.
This paper first presents a systematic study on this problem and observes that the current monocular 3D detection problem can be simplified as an instance depth estimation problem.
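The "instance depth estimation" view has a simple geometric core: once a per-object depth is predicted, a 2D detection can be lifted to a 3D center by pinhole back-projection. A minimal sketch under a standard pinhole camera model (this is illustrative, not PGD's actual network; names are hypothetical):

```python
def backproject(u, v, depth, fx, fy, cx, cy):
    """Pinhole back-projection: lift pixel (u, v) with predicted
    depth z to a 3D point in the camera coordinate frame.
    fx, fy are focal lengths; (cx, cy) is the principal point."""
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return (x, y, depth)
```

Because x and y scale linearly with the predicted depth, depth error dominates the 3D localization error, which is why the paper focuses on getting instance depth right.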
arXiv Detail & Related papers (2021-07-29T16:30:33Z) - Single View Metrology in the Wild [94.7005246862618]
We present a novel approach to single view metrology that can recover the absolute scale of a scene represented by 3D heights of objects or camera height above the ground.
Our method relies on data-driven priors learned by a deep network specifically designed to imbibe weakly supervised constraints from the interplay of the unknown camera with 3D entities such as object heights.
We demonstrate state-of-the-art qualitative and quantitative results on several datasets as well as applications including virtual object insertion.
arXiv Detail & Related papers (2020-07-18T22:31:33Z) - D3VO: Deep Depth, Deep Pose and Deep Uncertainty for Monocular Visual Odometry [57.5549733585324]
D3VO is a novel framework for monocular visual odometry that exploits deep networks on three levels -- deep depth, pose and uncertainty estimation.
We first propose a novel self-supervised monocular depth estimation network trained on stereo videos without any external supervision.
We model the photometric uncertainties of pixels on the input images, which improves the depth estimation accuracy.
arXiv Detail & Related papers (2020-03-02T17:47:13Z)
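One common way to model per-pixel photometric uncertainty in self-supervised depth estimation is a Laplacian negative log-likelihood, where a learned scale b down-weights unreliable pixels at the cost of a log penalty. The sketch below illustrates that generic idea and is not claimed to match D3VO's exact loss formulation:

```python
import math

def laplacian_photometric_loss(residual, b):
    """Negative log-likelihood of a Laplacian with scale b:
    a large predicted uncertainty b down-weights the photometric
    residual, while the log(2b) term penalizes inflating b everywhere."""
    return abs(residual) / b + math.log(2.0 * b)
```

For a pixel with a large residual (e.g. a moving object violating the photometric assumption), predicting a larger b lowers the loss, so the network learns to flag exactly the pixels where brightness constancy fails.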
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.