A Stochastic Nonlinear Dynamical System for Smoothing Noisy Eye Gaze Data
- URL: http://arxiv.org/abs/2504.13278v1
- Date: Thu, 17 Apr 2025 18:42:03 GMT
- Title: A Stochastic Nonlinear Dynamical System for Smoothing Noisy Eye Gaze Data
- Authors: Thoa Thieu, Roderick Melnik
- Abstract summary: We propose the use of an extended Kalman filter (EKF) to smooth the gaze data collected during eye-tracking experiments. Our results demonstrate that the EKF significantly reduces noise, leading to a marked improvement in tracking accuracy.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this study, we address the challenges associated with accurately determining gaze location on a screen, which is often compromised by noise from factors such as eye tracker limitations, calibration drift, ambient lighting changes, and eye blinks. We propose the use of an extended Kalman filter (EKF) to smooth the gaze data collected during eye-tracking experiments, and systematically explore the interaction of different system parameters. Our results demonstrate that the EKF significantly reduces noise, leading to a marked improvement in tracking accuracy. Furthermore, we show that our proposed stochastic nonlinear dynamical model aligns well with real experimental data and holds promise for applications in related fields.
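The paper's concrete stochastic nonlinear model is not reproduced in the abstract, so the following is only a generic sketch of how an EKF can smooth 2-D gaze samples. The state vector `[x, y, vx, vy]`, the tanh velocity saturation (used here simply to make the transition nonlinear so the Jacobian linearization matters), and the noise parameters `q` and `r` are all illustrative assumptions, not the authors' model.

```python
import numpy as np

def ekf_smooth(measurements, dt=1.0 / 60, q=1.0, r=5.0):
    """Smooth noisy 2-D gaze samples with a simple extended Kalman filter.

    State: [x, y, vx, vy]. The transition saturates the velocity
    components with tanh, a placeholder nonlinearity; only position
    is observed.
    """
    x = np.array([measurements[0][0], measurements[0][1], 0.0, 0.0])
    P = np.eye(4) * 10.0
    Q = np.eye(4) * q                       # process-noise covariance
    R = np.eye(2) * r                       # measurement-noise covariance
    H = np.array([[1.0, 0, 0, 0],
                  [0, 1.0, 0, 0]])          # observe position only
    smoothed = []
    for z in measurements:
        # predict: nonlinear transition f(x) and its Jacobian F
        tvx, tvy = np.tanh(x[2]), np.tanh(x[3])
        f = np.array([x[0] + dt * x[2], x[1] + dt * x[3], tvx, tvy])
        F = np.array([[1, 0, dt, 0],
                      [0, 1, 0, dt],
                      [0, 0, 1 - tvx**2, 0],
                      [0, 0, 0, 1 - tvy**2]])
        x, P = f, F @ P @ F.T + Q
        # update: standard Kalman correction with gaze sample z
        y = np.asarray(z, float) - H @ x    # innovation
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)      # Kalman gain
        x = x + K @ y
        P = (np.eye(4) - K @ H) @ P
        smoothed.append(x[:2].copy())
    return np.array(smoothed)
```

On a noisy linear gaze trajectory this filter trades a small tracking lag for a substantial reduction in measurement noise, which is the qualitative behavior the abstract reports.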
Related papers
- WildGS-SLAM: Monocular Gaussian Splatting SLAM in Dynamic Environments [48.51530726697405]
We present WildGS-SLAM, a robust and efficient monocular RGB SLAM system designed to handle dynamic environments. We introduce an uncertainty map, predicted by a shallow multi-layer perceptron and DINOv2 features, to guide dynamic object removal during both tracking and mapping. Results showcase WildGS-SLAM's superior performance in dynamic environments compared to state-of-the-art methods.
arXiv Detail & Related papers (2025-04-04T19:19:40Z)
- GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural Network [0.0]
This work introduces GazeSCRNN, a novel spiking convolutional recurrent neural network designed for event-based near-eye gaze tracking.
The model processes event streams from DVS cameras using Adaptive Leaky-Integrate-and-Fire (ALIF) neurons and a hybrid architecture for spatio-temporal data.
The most accurate model achieved a Mean Angle Error (MAE) of 6.034° and a Mean Pupil Error (MPE) of 2.094 mm.
arXiv Detail & Related papers (2025-03-20T10:32:15Z)
- Attraction-Repulsion Swarming: A Generalized Framework of t-SNE via Force Normalization and Tunable Interactions [2.3020018305241337]
Attraction-Repulsion Swarming (ARS) is a framework based on viewing the t-distributed stochastic neighbor embedding (t-SNE) visualization technique as a swarm of interacting agents driven by attraction and repulsion forces.
ARS also includes the ability to separately tune the attraction and repulsion kernels, which gives the user control over the tightness within clusters and the spacing between them in the visualization.
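The update described above can be sketched as a single swarming step. The function name `ars_step`, the specific attraction and repulsion kernels, and the per-agent force normalization below are illustrative assumptions; the paper's exact kernels and normalization are not reproduced here.

```python
import numpy as np

def ars_step(Y, P, lr=0.1, attract_scale=1.0, repel_scale=1.0):
    """One attraction-repulsion swarming update on an embedding Y (n, 2).

    Attraction pulls each agent toward its neighbors weighted by the
    high-dimensional affinity matrix P; repulsion pushes agents apart
    through a Cauchy (t-SNE-style) kernel. Forces are normalized to
    unit length per agent before the position update, and the two
    kernels are scaled independently, mirroring the tunable control
    over cluster tightness and spacing.
    """
    diff = Y[:, None, :] - Y[None, :, :]                 # pairwise Y_i - Y_j
    dist2 = (diff ** 2).sum(-1)                          # squared distances
    attract = -(P[:, :, None] * diff).sum(axis=1)        # pull toward affine neighbors
    repel = (diff / (1.0 + dist2)[:, :, None]).sum(axis=1)  # Cauchy-kernel push apart
    force = attract_scale * attract + repel_scale * repel
    force /= np.linalg.norm(force, axis=1, keepdims=True) + 1e-12  # force normalization
    return Y + lr * force
```

Setting `attract_scale=0` makes agents drift apart under pure repulsion, while `repel_scale=0` collapses them toward affine neighbors, which is the tunable behavior the summary describes.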
arXiv Detail & Related papers (2024-11-15T22:42:11Z)
- Generalizable Non-Line-of-Sight Imaging with Learnable Physical Priors [52.195637608631955]
Non-line-of-sight (NLOS) imaging has attracted increasing attention due to its potential applications.
Existing NLOS reconstruction approaches are constrained by the reliance on empirical physical priors.
We introduce a novel learning-based solution comprising two key designs: Learnable Path Compensation (LPC) and Adaptive Phasor Field (APF).
arXiv Detail & Related papers (2024-09-21T04:39:45Z)
- KFD-NeRF: Rethinking Dynamic NeRF with Kalman Filter [49.85369344101118]
We introduce KFD-NeRF, a novel dynamic neural radiance field integrated with an efficient and high-quality motion reconstruction framework based on Kalman filtering.
Our key idea is to model the dynamic radiance field as a dynamic system whose temporally varying states are estimated based on two sources of knowledge: observations and predictions.
Our KFD-NeRF demonstrates similar or even superior performance within comparable computational time and state-of-the-art view synthesis performance with thorough training.
arXiv Detail & Related papers (2024-07-18T05:48:24Z)
- Change-Point Detection in Industrial Data Streams based on Online Dynamic Mode Decomposition with Control [5.293458740536858]
We propose a novel change-point detection method based on online Dynamic Mode Decomposition with control (ODMDwC).
Our results demonstrate that this method yields intuitive and improved detection results compared to the Singular-Value-Decomposition-based method.
arXiv Detail & Related papers (2024-07-08T14:18:33Z)
- LEAP-VO: Long-term Effective Any Point Tracking for Visual Odometry [52.131996528655094]
We present the Long-term Effective Any Point Tracking (LEAP) module.
LEAP innovatively combines visual, inter-track, and temporal cues with mindfully selected anchors for dynamic track estimation.
Based on these traits, we develop LEAP-VO, a robust visual odometry system adept at handling occlusions and dynamic scenes.
arXiv Detail & Related papers (2024-01-03T18:57:27Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- Object recognition in atmospheric turbulence scenes [2.657505380055164]
We propose a novel framework that learns distorted features to detect and classify object types in turbulent environments.
Specifically, we utilise deformable convolutions to handle spatial displacement.
We show that the proposed framework outperforms the benchmark with a mean Average Precision (mAP) score exceeding 30%.
arXiv Detail & Related papers (2022-10-25T20:21:25Z)
- DynImp: Dynamic Imputation for Wearable Sensing Data Through Sensory and Temporal Relatedness [78.98998551326812]
We argue that traditional methods have rarely made use of both the time-series dynamics of the data and the relatedness of features from different sensors.
We propose a model, termed DynImp, to handle missingness at different time points using nearest neighbors along the feature axis.
We show that the method can exploit the multi-modality features from related sensors and also learn from history time-series dynamics to reconstruct the data under extreme missingness.
arXiv Detail & Related papers (2022-09-26T21:59:14Z)
- A Look at Improving Robustness in Visual-inertial SLAM by Moment Matching [17.995121900076615]
This paper takes a critical look at the practical implications and limitations posed by the extended Kalman filter (EKF).
We employ a moment matching (unscented Kalman filtering) approach to both visual-inertial odometry and visual SLAM.
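The moment-matching idea behind unscented Kalman filtering can be sketched with the unscented transform: instead of linearizing a nonlinear map (as the EKF does), deterministically chosen sigma points are pushed through it and the output mean and covariance are recovered from weighted samples. The function below is a generic sketch using the common Van der Merwe scaling parameters, not the paper's implementation.

```python
import numpy as np

def unscented_transform(mean, cov, f, alpha=0.1, beta=2.0, kappa=0.0):
    """Propagate a Gaussian (mean, cov) through a nonlinear map f
    by re-estimating the first two moments from transformed sigma points."""
    n = len(mean)
    lam = alpha**2 * (n + kappa) - n
    S = np.linalg.cholesky((n + lam) * cov)          # scaled matrix square root
    pts = np.vstack([mean, mean + S.T, mean - S.T])  # (2n+1, n) sigma points
    wm = np.full(2 * n + 1, 0.5 / (n + lam))         # mean weights
    wc = wm.copy()                                   # covariance weights
    wm[0] = lam / (n + lam)
    wc[0] = wm[0] + (1.0 - alpha**2 + beta)
    Y = np.array([f(p) for p in pts])                # transformed sigma points
    new_mean = wm @ Y
    d = Y - new_mean
    new_cov = (wc[:, None] * d).T @ d
    return new_mean, new_cov
```

For a linear map the transform is exact, reproducing the usual Kalman prediction; its advantage over EKF linearization appears when `f` is strongly nonlinear.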
arXiv Detail & Related papers (2022-05-27T08:22:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.