$\pi_t$ - Enhancing the Precision of Eye Tracking using Iris Feature Motion Vectors
- URL: http://arxiv.org/abs/2009.09348v1
- Date: Sun, 20 Sep 2020 04:57:12 GMT
- Title: $\pi_t$ - Enhancing the Precision of Eye Tracking using Iris Feature Motion Vectors
- Authors: Aayush K. Chaudhary, Jeff B. Pelz
- Abstract summary: A new high-precision eye-tracking method has been demonstrated recently by tracking the motion of iris features.
It suffers from temporal drift, an inability to track across blinks, and loss of texture matches in the presence of motion blur.
We present a new methodology $\pi_t$ to address these issues by optimally combining the information from both iris textures and pupil edges.
- Score: 2.5889737226898437
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A new high-precision eye-tracking method has been demonstrated recently by
tracking the motion of iris features rather than by exploiting pupil edges.
While the method provides high precision, it suffers from temporal drift, an
inability to track across blinks, and loss of texture matches in the presence
of motion blur. In this work, we present a new methodology $\pi_t$ to address
these issues by optimally combining the information from both iris textures and
pupil edges. With this method, we show an improvement in precision (S2S-RMS &
STD) of at least 48% and 10%, respectively, while fixating a series of small
targets and following a smoothly moving target. Further, we demonstrate the
capability to identify microsaccades between targets separated by
0.2 degrees.
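The abstract does not spell out the combination rule. As a minimal sketch, assuming each tracker yields an unbiased gaze estimate with a known noise variance, inverse-variance weighting is the textbook optimal fusion of two independent estimates; the actual $\pi_t$ method is more elaborate (it must also handle drift, blinks, and motion blur):

```python
# Minimal sketch of fusing a pupil-edge estimate with an iris-texture
# estimate by inverse-variance weighting -- the standard optimal
# combination of two unbiased, independent estimates. The actual pi_t
# algorithm differs in detail.
import numpy as np

def fuse(gaze_pupil, var_pupil, gaze_iris, var_iris):
    """Each gaze estimate is an (x, y) array; variances are scalars."""
    w_pupil = 1.0 / var_pupil
    w_iris = 1.0 / var_iris
    fused = (w_pupil * np.asarray(gaze_pupil) +
             w_iris * np.asarray(gaze_iris)) / (w_pupil + w_iris)
    fused_var = 1.0 / (w_pupil + w_iris)  # always <= min of the two inputs
    return fused, fused_var
```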
Related papers
- EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing [2.9795443606634917]
EyeTrAES is a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement.
We show that EyeTrAES boosts pupil tracking fidelity by over 6%, achieving IoU = 92%, while incurring at least 3x lower latency than competing purely event-based eye-tracking alternatives.
For robust user authentication, we train a lightweight per-user Random Forest classifier on a novel feature vector of short-term pupillary kinematics (see the sketch below).
arXiv Detail & Related papers (2024-09-27T15:06:05Z)
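The paper describes this classifier only at a high level; the following is a minimal sketch assuming scikit-learn, with hypothetical kinematic features (velocity and acceleration statistics over short windows) standing in for EyeTrAES's actual feature vector:

```python
# Minimal sketch of a per-user Random Forest authenticator, assuming
# scikit-learn. The kinematic features below are illustrative stand-ins
# for the paper's feature vector.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def kinematic_features(xy, dt):
    """Summarize short-term pupillary kinematics for one window.

    xy: (N, 2) array of pupil-center positions; dt: sample interval (s).
    """
    vel = np.diff(xy, axis=0) / dt   # per-sample velocity
    acc = np.diff(vel, axis=0) / dt  # per-sample acceleration
    speed = np.linalg.norm(vel, axis=1)
    return np.array([
        speed.mean(), speed.std(), speed.max(),
        np.linalg.norm(acc, axis=1).mean(),
    ])

def train_authenticator(windows, labels, dt=1 / 200):
    """windows: list of (N, 2) position arrays; labels: 1 = target user."""
    X = np.stack([kinematic_features(w, dt) for w in windows])
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X, np.asarray(labels))
    return clf
```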
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features (a sketch of such an offset head follows below).
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
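The paper's architecture is not given in this summary; below is a minimal sketch assuming PyTorch, where a small convolutional head regresses a 2D sub-pixel offset from the patch around each integer detection (layer sizes and names are illustrative, not the paper's):

```python
# Minimal sketch of a sub-pixel refinement head, assuming PyTorch.
# Refined location = integer detection + predicted offset:
#   kp_refined = kp_int + head(patch_around(kp_int))
import torch
import torch.nn as nn

class OffsetHead(nn.Module):
    def __init__(self, patch=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * patch * patch, 2),
            nn.Tanh(),  # offsets constrained to (-1, 1) pixels
        )

    def forward(self, patches):  # patches: (B, 1, patch, patch)
        return self.net(patches)
```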
- Using Deep Learning to Increase Eye-Tracking Robustness, Accuracy, and Precision in Virtual Reality [2.2639735235640015]
This work provides an objective assessment of the impact of several contemporary machine learning (ML)-based methods for eye feature tracking.
Metrics include the accuracy and precision of the gaze estimate, as well as the drop-out rate (the standard precision metrics are sketched below).
arXiv Detail & Related papers (2024-03-28T18:43:25Z)
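For reference, here is a minimal sketch of how gaze accuracy and the two precision measures named in the main paper (S2S-RMS and STD) are conventionally computed; this follows common eye-tracking definitions rather than either paper's exact code:

```python
# Minimal sketch of standard gaze accuracy/precision metrics, assuming
# gaze is given as angular positions in degrees.
import numpy as np

def gaze_metrics(gaze_deg, target_deg):
    """gaze_deg: (N, 2) gaze angles; target_deg: (2,) true target angle."""
    err = gaze_deg - np.asarray(target_deg)
    accuracy = np.linalg.norm(err.mean(axis=0))           # mean offset (deg)
    s2s = np.linalg.norm(np.diff(gaze_deg, axis=0), axis=1)
    s2s_rms = np.sqrt(np.mean(s2s ** 2))                  # sample-to-sample RMS
    std = np.linalg.norm(gaze_deg.std(axis=0))            # spatial dispersion
    return accuracy, s2s_rms, std
```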
- Object-centric Cross-modal Feature Distillation for Event-based Object Detection [87.50272918262361]
RGB detectors still outperform event-based detectors due to the sparsity of event data and missing visual details.
We develop a novel knowledge distillation approach to shrink the performance gap between these two modalities.
We show that object-centric distillation significantly improves the performance of the event-based student object detector (a generic masked distillation loss is sketched below).
arXiv Detail & Related papers (2023-11-09T16:33:08Z)
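The exact loss is not given in this summary; a minimal sketch assuming PyTorch, where an RGB teacher's intermediate features supervise the event-based student and an object mask focuses the loss on object regions (a simplification of the paper's object-centric design):

```python
# Minimal sketch of cross-modal feature distillation, assuming PyTorch.
import torch

def distill_loss(student_feat, teacher_feat, obj_mask):
    """student_feat, teacher_feat: (B, C, H, W); obj_mask: (B, 1, H, W)
    with 1 inside object regions, so the loss focuses on objects."""
    diff = (student_feat - teacher_feat.detach()) ** 2
    masked = diff * obj_mask
    return masked.sum() / obj_mask.sum().clamp(min=1)
```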
- EasyHeC: Accurate and Automatic Hand-eye Calibration via Differentiable Rendering and Space Exploration [49.90228618894857]
We introduce a new approach to hand-eye calibration called EasyHeC, which is markerless, white-box, and delivers superior accuracy and robustness.
We propose to use two key technologies: differentiable rendering-based camera pose optimization and consistency-based joint space exploration (the pose-optimization loop is sketched below).
Our evaluation demonstrates superior performance on synthetic and real-world datasets.
arXiv Detail & Related papers (2023-05-02T03:49:54Z)
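A minimal sketch of the differentiable-rendering idea, assuming PyTorch and a hypothetical differentiable render(pose) function that returns a robot-arm mask; EasyHeC's actual pipeline and loss differ in detail:

```python
# Minimal sketch of differentiable-rendering pose optimization, assuming
# PyTorch. render(pose) is a hypothetical differentiable renderer.
import torch

def optimize_pose(render, observed_mask, pose_init, steps=200, lr=1e-2):
    """render: pose (6,) -> predicted mask; observed_mask: target image."""
    pose = pose_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([pose], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = ((render(pose) - observed_mask) ** 2).mean()
        loss.backward()  # gradients flow through the renderer
        opt.step()
    return pose.detach()
```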
- Tactile-Filter: Interactive Tactile Perception for Part Mating [54.46221808805662]
Humans rely on touch and tactile sensing for many dexterous manipulation tasks.
Vision-based tactile sensors are widely used for various robotic perception and control tasks.
We present a method for interactive perception using vision-based tactile sensors for a part mating task.
arXiv Detail & Related papers (2023-03-10T16:27:37Z)
- Depth Monocular Estimation with Attention-based Encoder-Decoder Network from Single Image [7.753378095194288]
Vision-based approaches have recently received much attention and can overcome these drawbacks.
In this work, we explore an extreme scenario in vision-based settings: estimate a depth map from one monocular image severely plagued by grid artifacts and blurry edges.
Our novel approach finds the focus of the current image with minimal overhead and avoids loss of depth features.
arXiv Detail & Related papers (2022-10-24T23:01:25Z)
- Compact multi-scale periocular recognition using SAFE features [63.48764893706088]
We present a new approach for periocular recognition based on the Symmetry Assessment by Feature Expansion (SAFE) descriptor.
We use the sclera center as a single key point for feature extraction, highlighting the object-like identity properties that concentrate at this unique point of the eye.
arXiv Detail & Related papers (2022-10-18T11:46:38Z)
- An efficient real-time target tracking algorithm using adaptive feature fusion [5.629708188348423]
We propose an efficient real-time target tracking method based on low-dimensional adaptive feature fusion.
The proposed algorithm achieves a higher success rate and accuracy, improving by 0.023 and 0.019, respectively.
The proposed method points to a promising direction for real-time target tracking in complex environments.
arXiv Detail & Related papers (2022-04-05T08:40:52Z)
- An automatic differentiation system for the age of differential privacy [65.35244647521989]
We introduce Tritium, an automatic differentiation-based sensitivity analysis framework for differentially private (DP) machine learning (ML); an illustrative autodiff-based sensitivity computation is sketched below.
arXiv Detail & Related papers (2021-09-22T08:07:42Z)
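Tritium tracks sensitivity through the computation graph; as a loose analogue only (this is not Tritium's API), here is a minimal sketch assuming PyTorch that computes per-example gradient norms, the quantity DP-SGD bounds by clipping:

```python
# Minimal sketch of autodiff-based sensitivity estimation, assuming
# PyTorch. Per-example gradient norms are what DP-SGD clips; Tritium's
# static analysis works differently.
import torch

def per_sample_grad_norms(model, loss_fn, xs, ys):
    """Return the gradient L2 norm of each example's loss w.r.t. the
    model parameters."""
    norms = []
    for x, y in zip(xs, ys):
        loss = loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0))
        grads = torch.autograd.grad(loss, model.parameters())
        norms.append(torch.sqrt(sum(g.pow(2).sum() for g in grads)))
    return torch.stack(norms)
```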
- Occlusion-robust Visual Markerless Bone Tracking for Computer-Assisted Orthopaedic Surgery [41.681134859412246]
We propose an RGB-D sensing-based markerless tracking method that is robust against occlusion.
Using a high-quality commercial RGB-D camera, our proposed visual tracking method achieves an accuracy of 1-2 degrees and 2-4 mm on a model knee.
arXiv Detail & Related papers (2021-08-24T09:49:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.