Identification of fixations and saccades in eye-tracking data using adaptive threshold-based method
- URL: http://arxiv.org/abs/2512.23926v2
- Date: Mon, 05 Jan 2026 00:42:28 GMT
- Title: Identification of fixations and saccades in eye-tracking data using adaptive threshold-based method
- Authors: Charles Oriioma, Josef Krivan, Rujeena Mathema, Pedro Lencastre, Pedro G. Lind, Alexander Szorkovszky, Shailendra Bhandari
- Abstract summary: We introduce and evaluate an adaptive method based on a Markovian approximation of eye-gaze dynamics. We find that a velocity threshold achieves the highest baseline accuracy (90-93%) across both free-viewing and visual search tasks. Adaptive dispersion thresholds demonstrate superior noise robustness, maintaining accuracy above 81% even at extreme noise levels.
- Score: 32.938529146937675
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Properties of ocular fixations and saccades are highly stochastic during many experimental tasks, and their statistics are often used as proxies for various aspects of cognition. Although distinguishing saccades from fixations is not trivial, experimentalists generally use common ad-hoc thresholds in detection algorithms. This neglects inter-task and inter-individual variability in oculomotor dynamics, and potentially biases the resulting statistics. In this article, we introduce and evaluate an adaptive method based on a Markovian approximation of eye-gaze dynamics, using saccades and fixations as states such that the optimal threshold minimizes state transitions. Applying this to three common threshold-based algorithms (velocity, angular velocity, and dispersion), we evaluate the overall accuracy against a multi-threshold benchmark as well as robustness to noise. We find that a velocity threshold achieves the highest baseline accuracy (90-93\%) across both free-viewing and visual search tasks. However, velocity-based methods degrade rapidly under noise when thresholds remain fixed, with accuracy falling below 20% at high noise levels. Adaptive threshold optimization via K-ratio minimization substantially improves performance under noisy conditions for all algorithms. Adaptive dispersion thresholds demonstrate superior noise robustness, maintaining accuracy above 81% even at extreme noise levels (σ = 50 px), though a precision-recall trade-off emerges that favors fixation detection at the expense of saccade identification. In addition to demonstrating our parsimonious adaptive thresholding method, these findings provide practical guidance for selecting and tuning classification algorithms based on data quality and analytical priorities.
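The transition-minimizing idea from the abstract can be sketched in a few lines: classify each gaze sample as fixation or saccade with a candidate velocity threshold, count fixation↔saccade state transitions, and keep the threshold that minimizes them. This is a minimal illustration under stated assumptions, not the authors' exact K-ratio criterion; the function names, candidate grid, and toy gaze trace below are all assumptions.

```python
import numpy as np

def classify(vel, thresh):
    """Label each velocity sample: 1 = saccade, 0 = fixation."""
    return (vel > thresh).astype(int)

def count_transitions(labels):
    """Number of fixation<->saccade state changes in the label sequence."""
    return int(np.sum(labels[1:] != labels[:-1]))

def adaptive_velocity_threshold(x, y, dt, candidates):
    """Scan candidate thresholds and keep the one whose labeling yields
    the fewest state transitions, skipping degenerate thresholds that
    put every sample into a single state."""
    vel = np.hypot(np.diff(x), np.diff(y)) / dt  # point-to-point speed, px/s
    best_t, best_k = None, np.inf
    for t in candidates:
        labels = classify(vel, t)
        if labels.min() == labels.max():  # all-fixation or all-saccade
            continue
        k = count_transitions(labels)
        if k < best_k:
            best_k, best_t = k, t
    return best_t

# Toy gaze trace: two noisy fixations joined by one fast horizontal saccade.
rng = np.random.default_rng(0)
fix1 = 100.0 + rng.normal(0, 1, 50)
sacc = np.linspace(100, 300, 10)
fix2 = 300.0 + rng.normal(0, 1, 50)
x = np.concatenate([fix1, sacc, fix2])
y = np.zeros_like(x)
thr = adaptive_velocity_threshold(x, y, dt=1 / 500,
                                  candidates=np.linspace(100, 10000, 50))
```

On this toy trace the selected threshold separates the single saccade cleanly, leaving exactly two state transitions. The degeneracy guard matters: without it, an extreme threshold labels every sample identically and trivially attains zero transitions.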
Related papers
- Differentiable Maximum Likelihood Noise Estimation for Quantum Error Correction [3.1257175823346905]
We introduce a differentiable Maximum Likelihood Estimation framework that enables exact, efficient, and fully differentiable computation of syndrome log-likelihoods. Our approach yields provably optimal, decoder-independent error priors by directly maximizing the syndrome likelihood.
arXiv Detail & Related papers (2026-02-23T11:20:23Z) - Autonomous Concept Drift Threshold Determination [29.617054108315546]
Existing drift detection methods focus on designing sensitive test statistics. We observe that model performance is highly sensitive to the detection threshold. In this paper, we prove that a threshold that adapts over time can outperform any single fixed threshold.
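A time-adapting detection threshold of the kind this summary describes could be sketched generically as a rolling-quantile boundary that tracks slow changes in the test statistic's distribution. This is purely illustrative, not the construction analyzed in the paper; the function name, window size, and synthetic statistics are assumptions.

```python
import numpy as np

def adaptive_threshold(stats, window=50, q=0.95):
    """Threshold at step i is a high quantile of the preceding `window`
    test statistics, so the decision boundary follows drift in the
    statistic's scale rather than staying fixed."""
    stats = np.asarray(stats, dtype=float)
    out = np.full(len(stats), np.inf)  # no alarm before any history exists
    for i in range(1, len(stats)):
        hist = stats[max(0, i - window):i]
        out[i] = np.quantile(hist, q)
    return out

# Synthetic test statistics whose scale doubles halfway through the stream.
rng = np.random.default_rng(1)
stats = np.concatenate([rng.normal(0, 1, 200), rng.normal(0, 2, 200)])
thr = adaptive_threshold(stats, window=50, q=0.95)
```

After the scale change, the rolling threshold rises to match the new regime, whereas any fixed threshold must trade false alarms in one regime against missed detections in the other.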
arXiv Detail & Related papers (2025-11-13T04:31:39Z) - Adaptive Estimation of Drifting Noise in Quantum Error Correction [1.1998722332188005]
We present a framework to capture time-dependent Pauli noise, by exploiting the syndrome statistics of quantum error correction experiments. We prove the noise-filtering behavior of sliding windows, linking window size to spectral cutoff frequencies, and provide an iterative algorithm that captures multiple drift frequencies. Our window-based estimation methods and adaptive decoding offer new insights into noise spectroscopy and decoder optimization under drift.
arXiv Detail & Related papers (2025-11-12T17:03:56Z) - Asymptotically Optimal Linear Best Feasible Arm Identification with Fixed Budget [55.938644481736446]
We introduce a novel algorithm for best feasible arm identification that guarantees an exponential decay in the error probability. We validate our algorithm through comprehensive empirical evaluations across various problem instances with different levels of complexity.
arXiv Detail & Related papers (2025-06-03T02:56:26Z) - Gradient Normalization Provably Benefits Nonconvex SGD under Heavy-Tailed Noise [60.92029979853314]
We investigate the roles of gradient normalization and clipping in ensuring the convergence of Stochastic Gradient Descent (SGD) under heavy-tailed noise.
Our work provides the first theoretical evidence demonstrating the benefits of gradient normalization in SGD under heavy-tailed noise.
We introduce an accelerated SGD variant incorporating gradient normalization and clipping, further enhancing convergence rates under heavy-tailed noise.
arXiv Detail & Related papers (2024-10-21T22:40:42Z) - SoftPatch: Unsupervised Anomaly Detection with Noisy Data [67.38948127630644]
This paper considers label-level noise in image sensory anomaly detection for the first time.
We propose a memory-based unsupervised AD method, SoftPatch, which efficiently denoises the data at the patch level.
Compared with existing methods, SoftPatch maintains a strong modeling ability of normal data and alleviates the overconfidence problem in coreset.
arXiv Detail & Related papers (2024-03-21T08:49:34Z) - Adaptive Strategies in Non-convex Optimization [5.279475826661643]
An algorithm is said to be adaptive to a certain parameter if it does not need a priori knowledge of such a parameter.
This dissertation presents our work on adaptive algorithms in three scenarios.
arXiv Detail & Related papers (2023-06-17T06:52:05Z) - Optimal Algorithms for the Inhomogeneous Spiked Wigner Model [89.1371983413931]
We derive an approximate message-passing algorithm (AMP) for the inhomogeneous problem.
We identify in particular the existence of a statistical-to-computational gap where known algorithms require a signal-to-noise ratio bigger than the information-theoretic threshold to perform better than random.
arXiv Detail & Related papers (2023-02-13T19:57:17Z) - Improve Noise Tolerance of Robust Loss via Noise-Awareness [60.34670515595074]
We propose a meta-learning method capable of adaptively learning a hyperparameter prediction function, called Noise-Aware-Robust-Loss-Adjuster (NARL-Adjuster for brevity).
We integrate four SOTA robust loss functions with our algorithm, and comprehensive experiments substantiate the general applicability and effectiveness of the proposed method in terms of both noise tolerance and performance.
arXiv Detail & Related papers (2023-01-18T04:54:58Z) - Benchmarking common uncertainty estimation methods with histopathological images under domain shift and label noise [62.997667081978825]
In high-risk environments, deep learning models need to be able to judge their uncertainty and reject inputs when there is a significant chance of misclassification.
We conduct a rigorous evaluation of the most commonly used uncertainty and robustness methods for the classification of Whole Slide Images.
We observe that ensembles of methods generally lead to better uncertainty estimates as well as an increased robustness towards domain shifts and label noise.
arXiv Detail & Related papers (2023-01-03T11:34:36Z) - Partial Identification with Noisy Covariates: A Robust Optimization Approach [94.10051154390237]
Causal inference from observational datasets often relies on measuring and adjusting for covariates.
We show that this robust optimization approach can extend a wide range of causal adjustment methods to perform partial identification.
Across synthetic and real datasets, we find that this approach provides ATE bounds with a higher coverage probability than existing methods.
arXiv Detail & Related papers (2022-02-22T04:24:26Z) - Learning Noise Transition Matrix from Only Noisy Labels via Total Variation Regularization [88.91872713134342]
We propose a theoretically grounded method that can estimate the noise transition matrix and learn a classifier simultaneously.
We show the effectiveness of the proposed method through experiments on benchmark and real-world datasets.
arXiv Detail & Related papers (2021-02-04T05:09:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.