Neuromorphic Eye Tracking for Low-Latency Pupil Detection
- URL: http://arxiv.org/abs/2512.09969v1
- Date: Wed, 10 Dec 2025 11:30:21 GMT
- Title: Neuromorphic Eye Tracking for Low-Latency Pupil Detection
- Authors: Paul Hueber, Luca Peres, Florian Pitters, Alejandro Gloriani, Oliver Rhodes
- Abstract summary: Eye tracking for wearable systems demands low latency and milliwatt-level power. Neuromorphic sensors and spiking neural networks (SNNs) offer a promising alternative. This paper presents a neuromorphic version of top-performing event-based eye-tracking models.
- Score: 37.091037454305024
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Eye tracking for wearable systems demands low latency and milliwatt-level power, but conventional frame-based pipelines struggle with motion blur, high compute cost, and limited temporal resolution. Such capabilities are vital for enabling seamless and responsive interaction in emerging technologies like augmented reality (AR) and virtual reality (VR), where understanding user gaze is key to immersion and interface design. Neuromorphic sensors and spiking neural networks (SNNs) offer a promising alternative, yet existing SNN approaches are either too specialized or fall short of the performance of modern ANN architectures. This paper presents a neuromorphic version of top-performing event-based eye-tracking models, replacing their recurrent and attention modules with lightweight LIF layers and exploiting depth-wise separable convolutions to reduce model complexity. Our models obtain 3.7-4.1px mean error, approaching the accuracy of the application-specific neuromorphic system, Retina (3.24px), while reducing model size by 20x and theoretical compute by 850x, compared to the closest ANN variant of the proposed model. These efficient variants are projected to operate at an estimated 3.9-4.9 mW with 3 ms latency at 1 kHz. The present results indicate that high-performing event-based eye-tracking architectures can be redesigned as SNNs with substantial efficiency gains, while retaining accuracy suitable for real-time wearable deployment.
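The core architectural move in the abstract, replacing recurrent and attention modules with lightweight leaky integrate-and-fire (LIF) layers, can be illustrated with a minimal sketch of the standard LIF update. The decay factor `beta`, threshold `v_th`, and hard reset below are generic textbook choices for illustration, not values or code from the paper.

```python
# Minimal leaky integrate-and-fire (LIF) layer sketch.
# beta (membrane decay) and v_th (spike threshold) are illustrative
# placeholders, not parameters taken from the paper.

def lif_step(inputs, v, beta=0.9, v_th=1.0):
    """One timestep of a LIF layer over a list of input currents.

    v holds the per-neuron membrane potential carried between timesteps;
    a neuron emits a spike (1) when v crosses v_th and is then reset.
    """
    spikes = []
    for i, x in enumerate(inputs):
        v[i] = beta * v[i] + x          # leaky integration of input current
        if v[i] >= v_th:
            spikes.append(1)
            v[i] = 0.0                  # hard reset after spiking
        else:
            spikes.append(0)
    return spikes, v

# Usage: drive one neuron with a short event-derived input train.
v = [0.0]
out = []
for x in [0.5, 0.5, 0.5, 0.0]:
    s, v = lif_step([x], v)
    out.extend(s)
# membrane trace: 0.5 -> 0.95 -> 1.355 (spike, reset) -> 0.0
# out == [0, 0, 1, 0]
```

Because the layer's state is a single membrane potential per neuron, this kind of update can stand in for a recurrent module at a fraction of the memory and compute, which is the efficiency lever the paper exploits.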
Related papers
- One-Timestep is Enough: Achieving High-performance ANN-to-SNN Conversion via Scale-and-Fire Neurons [7.289542889212981]
Spiking Neural Networks (SNNs) are energy-efficient alternatives to Artificial Neural Networks (ANNs). We propose a theoretical and practical framework for single-timestep ANN2SNN. We achieve 88.8% top-1 accuracy on ImageNet-1K at $T=1$, surpassing existing conversion methods.
arXiv Detail & Related papers (2025-10-27T14:35:14Z)
- GazeSCRNN: Event-based Near-eye Gaze Tracking using a Spiking Neural Network [0.0]
This work introduces GazeSCRNN, a novel convolutional recurrent neural network designed for event-based near-eye gaze tracking. The model processes event streams from DVS cameras using Adaptive Leaky-Integrate-and-Fire (ALIF) neurons and a hybrid architecture for spatio-temporal data. The most accurate model achieved a Mean Angle Error (MAE) of 6.034° and a Mean Pupil Error (MPE) of 2.094 mm.
arXiv Detail & Related papers (2025-03-20T10:32:15Z)
- Activation-wise Propagation: A Universal Strategy to Break Timestep Constraints in Spiking Neural Networks for 3D Data Processing [29.279985043923386]
We introduce Activation-wise Membrane Potential Propagation (AMP2), a novel state update mechanism for spiking neurons. Inspired by skip connections in deep networks, AMP2 incorporates the membrane potential of neurons into the network, eliminating the need for iterative updates. Our method achieves significant improvements across various 3D modalities, including 3D point clouds and event streams.
arXiv Detail & Related papers (2025-02-18T11:52:25Z)
- Adaptive Calibration: A Unified Conversion Framework of Spiking Neural Network [1.5215973379400674]
Spiking Neural Networks (SNNs) are seen as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs). We present a unified training-free conversion framework that significantly enhances both the performance and efficiency of converted SNNs.
arXiv Detail & Related papers (2024-12-18T09:38:54Z)
- Task-Oriented Real-time Visual Inference for IoVT Systems: A Co-design Framework of Neural Networks and Edge Deployment [61.20689382879937]
Task-oriented edge computing addresses this by shifting data analysis to the edge.
Existing methods struggle to balance high model performance with low resource consumption.
We propose a novel co-design framework to optimize neural network architecture.
arXiv Detail & Related papers (2024-10-29T19:02:54Z)
- LaneSNNs: Spiking Neural Networks for Lane Detection on the Loihi Neuromorphic Processor [12.47874622269824]
We present a new SNN-based approach, called LaneSNN, for detecting lane markings on roads from event-based camera input.
We implement and map the learned SNN models onto the Intel Loihi Neuromorphic Research Chip.
For the loss function, we develop a novel method based on the linear composition of Weighted binary Cross Entropy (WCE) and Mean Squared Error (MSE) measures.
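The loss described above, a linear composition of weighted binary cross-entropy (WCE) and mean squared error (MSE), can be sketched as follows. The class weights, the mixing coefficient `alpha`, and the clipping epsilon are illustrative assumptions; the paper's actual weighting scheme is not reproduced here.

```python
import math

def wce(y_true, y_pred, w_pos=5.0, w_neg=1.0, eps=1e-7):
    """Weighted binary cross-entropy over flat lists of probabilities.
    w_pos/w_neg are hypothetical class weights (e.g. to counter the
    scarcity of lane pixels), not values from the paper."""
    total = 0.0
    for t, p in zip(y_true, y_pred):
        p = min(max(p, eps), 1.0 - eps)  # clip to avoid log(0)
        total += -(w_pos * t * math.log(p) + w_neg * (1 - t) * math.log(1 - p))
    return total / len(y_true)

def mse(y_true, y_pred):
    """Mean squared error over flat lists."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def lane_loss(y_true, y_pred, alpha=0.5):
    """Linear composition of WCE and MSE; alpha is an assumed mixing weight."""
    return alpha * wce(y_true, y_pred) + (1 - alpha) * mse(y_true, y_pred)
```

As a sanity check, a near-perfect prediction such as `lane_loss([1.0, 0.0], [0.99, 0.01])` yields a much smaller value than an uninformative one like `lane_loss([1.0, 0.0], [0.5, 0.5])`.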
arXiv Detail & Related papers (2022-08-03T14:51:15Z)
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
Training SNNs efficiently is challenging due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which achieves high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using around 40% of the available hardware resources in total.
It reduces the classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its full-precision software counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- Hybrid SNN-ANN: Energy-Efficient Classification and Object Detection for Event-Based Vision [64.71260357476602]
Event-based vision sensors encode local pixel-wise brightness changes in streams of events rather than image frames.
Recent progress in object recognition from event-based sensors has come from conversions of deep neural networks.
We propose a hybrid architecture for end-to-end training of deep neural networks for event-based pattern recognition and object detection.
arXiv Detail & Related papers (2021-12-06T23:45:58Z)
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.