Pistol: Pupil Invisible Supportive Tool to extract Pupil, Iris, Eye
Opening, Eye Movements, Pupil and Iris Gaze Vector, and 2D as well as 3D Gaze
- URL: http://arxiv.org/abs/2201.06799v1
- Date: Tue, 18 Jan 2022 07:54:55 GMT
- Title: Pistol: Pupil Invisible Supportive Tool to extract Pupil, Iris, Eye
Opening, Eye Movements, Pupil and Iris Gaze Vector, and 2D as well as 3D Gaze
- Authors: Wolfgang Fuhl, Daniel Weber, Enkelejda Kasneci
- Abstract summary: In offline mode, our software extracts multiple features from the eye, including the pupil and iris ellipses, eye aperture, pupil vector, iris vector, eye movement types from pupil and iris velocities, marker detection, marker distance, and 2D gaze estimation for the pupil center, iris center, pupil vector, and iris vector.
The gaze signal is computed in 2D for each eye and each feature separately, and in 3D for both eyes, again for each feature separately.
- Score: 12.314175125417098
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper describes a feature extraction and gaze estimation software, named
Pistol, that can be used with Pupil Invisible projects and other eye trackers in
the future. In offline mode, our software extracts multiple features from the
eye, including the pupil and iris ellipses, eye aperture, pupil vector, iris
vector, eye movement types from pupil and iris velocities, marker detection,
marker distance, and 2D gaze estimation for the pupil center, iris center, pupil
vector, and iris vector using Levenberg-Marquardt fitting and neural networks.
The gaze signal is computed in 2D for each eye and each feature separately, and
in 3D for both eyes, again for each feature separately. We hope this software
helps other researchers extract state-of-the-art features for their research
from their recordings.
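The abstract names Levenberg-Marquardt fitting for the pupil and iris ellipses. The following is a minimal sketch of that idea, not Pistol's actual implementation: the boundary points, initialization, and parameterization (center, semi-axes, rotation) are all assumptions, using `scipy.optimize.least_squares` with `method="lm"`.

```python
import numpy as np
from scipy.optimize import least_squares

def ellipse_residuals(params, pts):
    # params: center (cx, cy), semi-axes (a, b), rotation theta
    cx, cy, a, b, theta = params
    c, s = np.cos(theta), np.sin(theta)
    # Rotate points into the ellipse-aligned frame
    x = c * (pts[:, 0] - cx) + s * (pts[:, 1] - cy)
    y = -s * (pts[:, 0] - cx) + c * (pts[:, 1] - cy)
    # Residual is zero when a point lies exactly on the ellipse
    return np.sqrt((x / a) ** 2 + (y / b) ** 2) - 1.0

def fit_ellipse_lm(pts, init):
    """Levenberg-Marquardt ellipse fit; `pts` is an (N, 2) array of
    candidate pupil/iris boundary pixels, `init` a rough initial guess."""
    sol = least_squares(ellipse_residuals, init, args=(pts,), method="lm")
    return sol.x  # cx, cy, a, b, theta

# Synthetic example: noisy points sampled from a known ellipse
rng = np.random.default_rng(0)
t = np.linspace(0, 2 * np.pi, 60)
gt = (100.0, 80.0, 30.0, 18.0, 0.4)  # ground-truth parameters
c, s = np.cos(gt[4]), np.sin(gt[4])
px = gt[0] + gt[2] * np.cos(t) * c - gt[3] * np.sin(t) * s
py = gt[1] + gt[2] * np.cos(t) * s + gt[3] * np.sin(t) * c
pts = np.column_stack([px, py]) + rng.normal(0, 0.2, (60, 2))

est = fit_ellipse_lm(pts, init=(95.0, 75.0, 25.0, 15.0, 0.0))
```

In an eye-tracking pipeline, `pts` would come from pupil/iris edge detection and `init` from a coarse detector; LM then refines the five ellipse parameters geometrically.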
Related papers
- GALA: Guided Attention with Language Alignment for Open Vocabulary Gaussian Splatting [74.56128224977279]
We present GALA, a novel framework for open-vocabulary 3D scene understanding with 3D Gaussian Splatting (3DGS).
GALA distills a scene-specific 3D instance feature field via self-supervised contrastive learning.
It supports seamless 2D and 3D open-vocabulary queries and reduces memory consumption by avoiding per-Gaussian high-dimensional feature learning.
arXiv Detail & Related papers (2025-08-19T21:26:49Z) - Neuro-3D: Towards 3D Visual Decoding from EEG Signals [49.502364730056044]
We introduce a new neuroscience task: decoding 3D visual perception from EEG signals.
We first present EEG-3D, a dataset featuring multimodal analysis data and EEG recordings from 12 subjects viewing 72 categories of 3D objects rendered in both videos and images.
We propose Neuro-3D, a 3D visual decoding framework based on EEG signals.
arXiv Detail & Related papers (2024-11-19T05:52:17Z) - EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing [2.9795443606634917]
EyeTrAES is a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement.
We show that EyeTrAES boosts pupil tracking fidelity by 6+%, achieving IoU=92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives.
For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics.
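The EyeTrAES summary describes a per-user Random Forest classifier over short-term pupillary kinematics. A hedged sketch of that pattern follows; the feature vector (velocity/acceleration statistics) and the synthetic traces are illustrative assumptions, not the paper's actual features.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)

def kinematics_features(pupil_xy):
    """Hypothetical short-term pupillary-kinematics features from an
    (N, 2) trace of pupil centers: velocity and acceleration statistics."""
    vel = np.diff(pupil_xy, axis=0)
    acc = np.diff(vel, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    return np.array([speed.mean(), speed.std(), speed.max(),
                     np.linalg.norm(acc, axis=1).mean()])

# Synthetic traces for two "users" with different movement statistics
def make_trace(scale):
    return np.cumsum(rng.normal(0, scale, (50, 2)), axis=0)

X = np.array([kinematics_features(make_trace(s))
              for s in [0.5] * 20 + [2.0] * 20])
y = np.array([0] * 20 + [1] * 20)  # user identity labels

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
```

A real deployment would extract such features from sliding windows of tracked pupil positions and evaluate on held-out sessions rather than training data.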
arXiv Detail & Related papers (2024-09-27T15:06:05Z) - CondSeg: Ellipse Estimation of Pupil and Iris via Conditioned Segmentation [9.680930476240674]
Parsing eye components (i.e. pupil, iris and sclera) is fundamental for eye tracking and gaze estimation for AR/VR products.
In this paper, we consider two priors: the projected full pupil/iris circle can be modelled with an ellipse (ellipse prior), and the visibility of the pupil/iris is controlled by the openness of the eye region.
We propose CondSeg to estimate elliptical parameters of pupil/iris directly from segmentation labels, without explicitly annotating full ellipses.
arXiv Detail & Related papers (2024-08-30T12:17:49Z) - PupilSense: A Novel Application for Webcam-Based Pupil Diameter Estimation [6.298516754485939]
This paper presents a novel application that enables pupil diameter estimation using standard webcams.
Our app estimates pupil diameters from videos and offers detailed analysis, including class activation maps, graphs of predicted left and right pupil diameters, and eye aspect ratios during blinks.
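The PupilSense summary mentions eye aspect ratios during blinks. The commonly used eye-aspect-ratio (EAR) formula from six eyelid landmarks is sketched below as an assumption; it is not necessarily PupilSense's exact computation, and the landmark coordinates are hypothetical.

```python
import math

def eye_aspect_ratio(landmarks):
    """Eye aspect ratio (EAR) from six eye-contour landmarks p1..p6,
    ordered as in the common 68-point face model.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|); it drops toward 0 on a blink."""
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    p1, p2, p3, p4, p5, p6 = landmarks
    return (dist(p2, p6) + dist(p3, p5)) / (2.0 * dist(p1, p4))

# Hypothetical landmark sets for an open and a nearly closed eye
open_eye = [(0, 0), (2, -2), (4, -2), (6, 0), (4, 2), (2, 2)]
closed_eye = [(0, 0), (2, -0.3), (4, -0.3), (6, 0), (4, 0.3), (2, 0.3)]
```

For the coordinates above, the open eye yields an EAR of about 0.67 and the nearly closed eye about 0.10, so thresholding the EAR over time is a simple blink detector.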
arXiv Detail & Related papers (2024-07-15T19:39:28Z) - Using artificial intelligence methods for the studied visual analyzer [0.0]
The paper describes how various artificial-intelligence techniques are applied to the study of the human eye.
The first dataset was collected using computerized perimetry to investigate the visualization of the human visual field and the diagnosis of glaucoma.
The second dataset was obtained as part of a Russian-Swiss experiment to collect and analyze eye-movement data using the Tobii Pro Glasses 3 device on VR video.
arXiv Detail & Related papers (2024-04-25T20:12:51Z) - Model-aware 3D Eye Gaze from Weak and Few-shot Supervisions [60.360919642038]
We propose to predict 3D eye gaze from weak supervision of eye semantic segmentation masks and direct supervision of a few 3D gaze vectors.
Our experiments in diverse settings illustrate the significant benefits of the proposed method, achieving about 5 degrees lower angular gaze error over the baseline.
arXiv Detail & Related papers (2023-11-20T20:22:55Z) - Periocular biometrics: databases, algorithms and directions [69.35569554213679]
Periocular biometrics has been established as an independent modality due to concerns on the performance of iris or face systems in uncontrolled conditions.
This paper presents a review of the state of the art in periocular biometric research.
arXiv Detail & Related papers (2023-07-26T11:14:36Z) - GazeNeRF: 3D-Aware Gaze Redirection with Neural Radiance Fields [100.53114092627577]
Existing gaze redirection methods operate on 2D images and struggle to generate 3D consistent results.
We build on the intuition that the face region and eyeballs are separate 3D structures that move in a coordinated yet independent fashion.
arXiv Detail & Related papers (2022-12-08T13:19:11Z) - Neural Feature Fusion Fields: 3D Distillation of Self-Supervised 2D
Image Representations [92.88108411154255]
We present a method that improves dense 2D image feature extractors when the latter are applied to the analysis of multiple images reconstructible as a 3D scene.
We show that our method not only enables semantic understanding in the context of scene-specific neural fields without the use of manual labels, but also consistently improves over the self-supervised 2D baselines.
arXiv Detail & Related papers (2022-09-07T23:24:09Z) - P2-Net: Joint Description and Detection of Local Features for Pixel and
Point Matching [78.18641868402901]
This work takes the initiative to establish fine-grained correspondences between 2D images and 3D point clouds.
An ultra-wide reception mechanism in combination with a novel loss function are designed to mitigate the intrinsic information variations between pixel and point local regions.
arXiv Detail & Related papers (2021-03-01T14:59:40Z) - TEyeD: Over 20 million real-world eye images with Pupil, Eyelid, and
Iris 2D and 3D Segmentations, 2D and 3D Landmarks, 3D Eyeball, Gaze Vector,
and Eye Movement Types [18.53571873938032]
TEyeD is the world's largest unified public data set of eye images taken with head-mounted devices.
The data set includes 2D and 3D landmarks, semantic segmentation, 3D eyeball annotation and the gaze vector and eye movement types for all images.
arXiv Detail & Related papers (2021-02-03T15:48:22Z) - EllSeg: An Ellipse Segmentation Framework for Robust Gaze Tracking [3.0448872422956432]
Ellipse fitting is an essential component in pupil or iris tracking based video oculography.
We propose training a convolutional neural network to directly segment entire elliptical structures.
arXiv Detail & Related papers (2020-07-19T06:13:01Z)
This list is automatically generated from the titles and abstracts of the papers in this site.