Realizing Fully-Integrated, Low-Power, Event-Based Pupil Tracking with Neuromorphic Hardware
- URL: http://arxiv.org/abs/2511.20175v1
- Date: Tue, 25 Nov 2025 10:58:23 GMT
- Title: Realizing Fully-Integrated, Low-Power, Event-Based Pupil Tracking with Neuromorphic Hardware
- Authors: Federico Paredes-Valles, Yoshitaka Miyatani, Kirk Y. W. Scheper,
- Abstract summary: We present the first battery-powered, wearable pupil-center-tracking system with complete on-device integration. Our solution features a novel uncertainty-quantifying spiking neural network with gated temporal decoding, optimized for strict memory and bandwidth constraints. Our work demonstrates that end-to-end neuromorphic computing enables practical, always-on eye tracking for next-generation energy-efficient wearable systems.
- Score: 2.2940141855172036
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Eye tracking is fundamental to numerous applications, yet achieving robust, high-frequency tracking with ultra-low power consumption remains challenging for wearable platforms. While event-based vision sensors offer microsecond resolution and sparse data streams, they have lacked fully integrated, low-power processing solutions capable of real-time inference. In this work, we present the first battery-powered, wearable pupil-center-tracking system with complete on-device integration, combining event-based sensing and neuromorphic processing on the commercially available Speck2f system-on-chip with lightweight coordinate decoding on a low-power microcontroller. Our solution features a novel uncertainty-quantifying spiking neural network with gated temporal decoding, optimized for strict memory and bandwidth constraints, complemented by systematic deployment mechanisms that bridge the reality gap. We validate our system on a new multi-user dataset and demonstrate a wearable prototype with dual neuromorphic devices achieving robust binocular pupil tracking at 100 Hz with an average power consumption below 5 mW per eye. Our work demonstrates that end-to-end neuromorphic computing enables practical, always-on eye tracking for next-generation energy-efficient wearable systems.
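The uncertainty-gated coordinate decoding described in the abstract can be illustrated with a minimal sketch. Everything below is an illustrative assumption, not the authors' implementation: spike counts from two output populations are decoded with a soft-argmax, and a per-axis variance estimate gates whether the new reading replaces the tracked pupil center.

```python
import numpy as np

def soft_argmax(counts, temperature=1.0):
    """Decode a coordinate from a 1-D population of spike counts.

    Returns the expected index under a softmax distribution, plus the
    distribution's variance as a simple uncertainty proxy.
    """
    z = counts / temperature
    p = np.exp(z - z.max())
    p /= p.sum()
    idx = np.arange(len(counts))
    mean = float(np.sum(p * idx))
    var = float(np.sum(p * (idx - mean) ** 2))
    return mean, var

def gated_update(state, counts_x, counts_y, max_var=4.0):
    """Update the tracked pupil center only when both axes are confident."""
    x, vx = soft_argmax(counts_x)
    y, vy = soft_argmax(counts_y)
    if vx <= max_var and vy <= max_var:
        return (x, y)   # confident reading: accept the new estimate
    return state        # uncertain reading: hold the previous estimate
```

The gating keeps the tracker stable during blinks or sparse event activity, when the output spike counts carry little information; the threshold `max_var` is a tunable assumption here.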
Related papers
- MOTION: ML-Assisted On-Device Low-Latency Motion Recognition [5.0385144315892925]
We use WeBe Band, a multi-sensor wearable device equipped with an MCU powerful enough to perform gesture recognition entirely on the device. We found that the neural network provided the best balance between accuracy, latency, and memory use. Our results also demonstrate that reliable real-time gesture recognition can be achieved on WeBe Band, with great potential for real-time medical monitoring solutions.
arXiv Detail & Related papers (2025-10-14T01:15:47Z)
- Co-designing a Sub-millisecond Latency Event-based Eye Tracking System with Submanifold Sparse CNN [8.613703056677457]
Eye-tracking technology is integral to numerous consumer electronics applications, particularly in virtual and augmented reality (VR/AR).
Yet, achieving optimal performance across all these fronts presents a formidable challenge.
We tackle this challenge through a synergistic software/hardware co-design of the system with an event camera.
Our system achieves 81% p5 accuracy, 99.5% p10 accuracy, and a 3.71 mean Euclidean distance with 0.7 ms latency while consuming only 2.29 mJ per inference.
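The pN-accuracy and mean-distance metrics quoted above are straightforward to compute from per-frame pixel errors; a minimal sketch (array names and shapes are assumptions):

```python
import numpy as np

def tracking_metrics(pred, gt):
    """Per-frame pixel-error metrics for 2-D eye-tracking predictions.

    pred, gt: arrays of shape (N, 2) holding (x, y) pupil centers.
    Returns pN accuracy (fraction of frames with error under N pixels)
    and the mean Euclidean distance.
    """
    err = np.linalg.norm(pred - gt, axis=1)
    return {
        "p5": float(np.mean(err < 5.0)),
        "p10": float(np.mean(err < 10.0)),
        "mean_dist": float(err.mean()),
    }
```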
arXiv Detail & Related papers (2024-04-22T15:28:42Z)
- PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for their practical application in real driving scenarios due to the high level of latency.
In this paper, we explore the use of the neural architecture search (NAS) methods to search for efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z)
- Low-power event-based face detection with asynchronous neuromorphic hardware [2.0774873363739985]
We present the first instance of an on-chip spiking neural network for event-based face detection deployed on the SynSense Speck neuromorphic chip.
We show how to reduce precision discrepancies between off-chip clock-driven simulation used for training and on-chip event-driven inference.
We achieve an on-chip face detection mAP[0.5] of 0.6 while consuming only 20 mW.
arXiv Detail & Related papers (2023-12-21T19:23:02Z)
- Random resistive memory-based deep extreme point learning machine for unified visual processing [67.51600474104171]
We propose a novel hardware-software co-design: a random resistive memory-based deep extreme point learning machine (DEPLM).
Our co-design system achieves huge energy efficiency improvements and training cost reduction when compared to conventional systems.
arXiv Detail & Related papers (2023-12-14T09:46:16Z)
- Neuromorphic Optical Flow and Real-time Implementation with Event Cameras [47.11134388304464]
We build on the latest developments in event-based vision and spiking neural networks.
We propose a new network architecture that improves the state-of-the-art self-supervised optical flow accuracy.
We demonstrate high-speed optical flow prediction with almost two orders of magnitude lower complexity.
arXiv Detail & Related papers (2023-04-14T14:03:35Z)
- Fluid Batching: Exit-Aware Preemptive Serving of Early-Exit Neural Networks on Edge NPUs [74.83613252825754]
"smart ecosystems" are being formed where sensing happens concurrently rather than standalone.
This is shifting the on-device inference paradigm towards deploying neural processing units (NPUs) at the edge.
We propose a novel early-exit scheduling that allows preemption at run time to account for the dynamicity introduced by the arrival and exiting processes.
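The paper's preemptive scheduling builds on the basic early-exit mechanism, which a minimal sketch can illustrate (the stage/head decomposition and the confidence-threshold rule below are generic assumptions, not the paper's scheduler):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def early_exit_infer(x, stages, exit_heads, threshold=0.9):
    """Run a multi-stage network, exiting as soon as an intermediate
    classifier head is confident enough. Returns the predicted label
    and the index of the exit taken.
    """
    for i, (stage, head) in enumerate(zip(stages, exit_heads)):
        x = stage(x)
        probs = softmax(head(x))
        if probs.max() >= threshold:
            return int(np.argmax(probs)), i   # confident: exit here
    return int(np.argmax(probs)), len(stages) - 1  # fall through to the end
```

Because the exit point is input-dependent, per-sample compute time varies at run time, which is exactly the dynamicity an exit-aware scheduler must account for.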
arXiv Detail & Related papers (2022-09-27T15:04:01Z)
- Many-to-One Knowledge Distillation of Real-Time Epileptic Seizure Detection for Low-Power Wearable Internet of Things Systems [6.90334498220711]
Integrating low-power wearable Internet of Things systems into routine health monitoring is an ongoing challenge.
Recent advances in computation capabilities of wearables make it possible to target complex scenarios by exploiting multiple biosignals.
However, physically larger, multi-biosignal wearables cause significant discomfort to patients.
We propose a many-to-one signal knowledge distillation approach targeting single-biosignal processing in IoT wearable systems.
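A generic many-to-one distillation scheme can be sketched as follows; this is an illustrative formulation (averaged soft targets from several teachers, a temperature-softened cross-entropy loss), not the paper's exact method:

```python
import numpy as np

def softmax(z, T=1.0):
    e = np.exp(z / T - np.max(z / T))
    return e / e.sum()

def many_to_one_soft_targets(teacher_logits, T=2.0):
    """Average the temperature-softened predictions of several teachers
    (e.g. each trained on a different multi-biosignal view) into one
    soft target for a single-biosignal student.
    """
    probs = np.stack([softmax(t, T) for t in teacher_logits])
    return probs.mean(axis=0)

def distillation_loss(student_logits, soft_target, T=2.0):
    """Cross-entropy between the softened student output and the
    ensemble soft target."""
    s = softmax(student_logits, T)
    return float(-np.sum(soft_target * np.log(s + 1e-12)))
```

The student then minimizes this loss (typically mixed with a standard supervised loss), so a small single-sensor model inherits knowledge from larger multi-sensor teachers.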
arXiv Detail & Related papers (2022-07-20T12:22:26Z)
- Single-Shot Optical Neural Network [55.41644538483948]
'Weight-stationary' analog optical and electronic hardware has been proposed to reduce the compute resources required by deep neural networks.
We present a scalable, single-shot-per-layer weight-stationary optical processor.
arXiv Detail & Related papers (2022-05-18T17:49:49Z)
- FPGA-optimized Hardware acceleration for Spiking Neural Networks [69.49429223251178]
This work presents the development of a hardware accelerator for an SNN, with off-line training, applied to an image recognition task.
The design targets a Xilinx Artix-7 FPGA, using in total around 40% of the available hardware resources.
It reduces classification time by three orders of magnitude, with a small 4.5% impact on accuracy, compared to its software, full-precision counterpart.
arXiv Detail & Related papers (2022-01-18T13:59:22Z)
- An adaptive cognitive sensor node for ECG monitoring in the Internet of Medical Things [0.7646713951724011]
The Internet of Medical Things (IoMT) paradigm is becoming mainstream in multiple clinical trials and healthcare procedures.
In this work, we explore the implementation of a cognitive data analysis algorithm on resource-constrained computing platforms.
We have assessed our approach on a use-case using a convolutional neural network to classify electrocardiogram traces.
arXiv Detail & Related papers (2021-06-11T16:49:10Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.