Neural Ganglion Sensors: Learning Task-specific Event Cameras Inspired by the Neural Circuit of the Human Retina
- URL: http://arxiv.org/abs/2504.13457v1
- Date: Fri, 18 Apr 2025 04:22:58 GMT
- Title: Neural Ganglion Sensors: Learning Task-specific Event Cameras Inspired by the Neural Circuit of the Human Retina
- Authors: Haley M. So, Gordon Wetzstein
- Abstract summary: We introduce Neural Ganglion Sensors, an extension of traditional event cameras. Our results demonstrate that our biologically inspired sensing improves performance relative to conventional event cameras. These findings highlight the promise of RGC-inspired event sensors for edge devices and other low-power, real-time applications.
- Score: 35.26330639016294
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Inspired by the data-efficient spiking mechanism of neurons in the human eye, event cameras were created to achieve high temporal resolution with minimal power and bandwidth requirements by emitting asynchronous, per-pixel intensity changes rather than conventional fixed-frame rate images. Unlike retinal ganglion cells (RGCs) in the human eye, however, which integrate signals from multiple photoreceptors within a receptive field to extract spatio-temporal features, conventional event cameras do not leverage local spatial context when deciding which events to fire. Moreover, the eye contains around 20 different kinds of RGCs operating in parallel, each attuned to different features or conditions. Inspired by this biological design, we introduce Neural Ganglion Sensors, an extension of traditional event cameras that learns task-specific spatio-temporal retinal kernels (i.e., RGC "events"). We evaluate our design on two challenging tasks: video interpolation and optical flow. Our results demonstrate that our biologically inspired sensing improves performance relative to conventional event cameras while reducing overall event bandwidth. These findings highlight the promise of RGC-inspired event sensors for edge devices and other low-power, real-time applications requiring efficient, high-resolution visual streams.
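The abstract describes the design only at a high level. As a rough, hypothetical sketch of the idea, the PyTorch module below convolves a short buffer of intensity frames with a bank of learned spatio-temporal kernels (one per simulated RGC type, e.g. around 20 as in the retina) and emits signed events wherever a response crosses a learned threshold; the class name, shapes, and thresholding scheme are our assumptions, not the paper's implementation.

```python
# Minimal sketch of a "neural ganglion" event layer: K learned spatio-temporal
# kernels (stand-ins for RGC types) are convolved with a short buffer of
# intensity frames, and an event fires wherever a kernel response crosses its
# learned threshold. All names, shapes, and constants are illustrative.
import torch
import torch.nn as nn

class NeuralGanglionLayer(nn.Module):
    def __init__(self, num_kernels: int = 20, t_window: int = 5, k: int = 5):
        super().__init__()
        # One 3D kernel per hypothetical RGC type: (time, height, width).
        self.kernels = nn.Conv3d(
            in_channels=1, out_channels=num_kernels,
            kernel_size=(t_window, k, k),
            padding=(0, k // 2, k // 2), bias=False)
        # Per-kernel firing threshold, learned jointly with the kernels.
        self.threshold = nn.Parameter(torch.full((num_kernels,), 0.1))

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (batch, 1, t_window, H, W) buffer of recent intensities.
        response = self.kernels(frames).squeeze(2)      # (B, K, H, W)
        thr = self.threshold.abs().view(1, -1, 1, 1)
        # Signed ternary events; training through this hard nonlinearity
        # would need a surrogate gradient (e.g., straight-through).
        events = torch.sign(response) * (response.abs() > thr).float()
        return events  # sparse +1/-1 "RGC events", one channel per kernel type
```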
Related papers
- EyeTrAES: Fine-grained, Low-Latency Eye Tracking via Adaptive Event Slicing [2.9795443606634917]
EyeTrAES is a novel approach using neuromorphic event cameras for high-fidelity tracking of natural pupillary movement.
We show that EyeTrAES boosts pupil tracking fidelity by more than 6%, achieving an IoU of 92%, while incurring at least 3x lower latency than competing pure event-based eye tracking alternatives.
For robust user authentication, we train a lightweight per-user Random Forest classifier using a novel feature vector of short-term pupillary kinematics.
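The summary does not spell out the slicing rule; the sketch below illustrates one plausible reading of adaptive event slicing, closing a slice once enough events have accumulated (or a hard time cap is hit), so windows shrink during rapid pupil motion. The thresholds and the `adaptive_slices` helper are hypothetical.

```python
# Illustrative sketch of adaptive event slicing: instead of fixed-duration
# windows, a slice closes once enough events have accumulated, so fast pupil
# motion yields short, dense slices and fixation yields long ones.
from typing import Iterable, Iterator, List, Tuple

Event = Tuple[float, int, int, int]  # (timestamp, x, y, polarity)

def adaptive_slices(events: Iterable[Event],
                    min_events: int = 2000,
                    max_duration_s: float = 0.05) -> Iterator[List[Event]]:
    buf: List[Event] = []
    for ev in events:
        buf.append(ev)
        t0, t1 = buf[0][0], ev[0]
        # Close the slice on enough activity or on the time cap.
        if len(buf) >= min_events or (t1 - t0) >= max_duration_s:
            yield buf
            buf = []
    if buf:
        yield buf
```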
arXiv Detail & Related papers (2024-09-27T15:06:05Z)
- Retina-Inspired Object Motion Segmentation for Event-Cameras [0.0]
Event-cameras have emerged as a revolutionary technology with a high temporal resolution that far surpasses standard active pixel cameras. This research showcases the potential of additional retinal functionalities to extract visual features.
arXiv Detail & Related papers (2024-08-18T12:28:26Z)
- In the Blink of an Eye: Event-based Emotion Recognition [44.12621619057609]
We introduce a wearable single-eye emotion recognition device and a real-time approach to recognizing emotions from partial observations of an emotion.
At the heart of our method is a bio-inspired event-based camera setup and a newly designed lightweight Spiking Eye Emotion Network (SEEN).
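SEEN's internals are not described in this summary; as background only, the following is the standard leaky integrate-and-fire update that lightweight spiking networks of this kind are typically built from (constants illustrative).

```python
# Standard leaky integrate-and-fire (LIF) step, the usual building block of
# spiking networks; decay and threshold values here are illustrative.
import torch

def lif_step(v: torch.Tensor, x: torch.Tensor,
             decay: float = 0.9, v_th: float = 1.0):
    # v: membrane potentials; x: weighted input current at this timestep.
    v = decay * v + x                 # leaky integration
    spikes = (v >= v_th).float()      # fire where the threshold is crossed
    v = v * (1.0 - spikes)            # hard reset of fired neurons
    return spikes, v
```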
arXiv Detail & Related papers (2023-10-06T06:33:20Z)
- EventTransAct: A video transformer-based framework for Event-camera based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
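The summary names $\mathcal{L}_{EC}$ without defining it; as a hedged stand-in, the sketch below shows a generic InfoNCE-style contrastive loss in which embeddings of two augmentations of the same event clip attract while other clips in the batch repel. This is not necessarily the paper's exact formulation.

```python
# Generic InfoNCE-style contrastive loss over clip embeddings, used here as an
# illustrative stand-in for the paper's Event-Contrastive Loss.
import torch
import torch.nn.functional as F

def info_nce(z1: torch.Tensor, z2: torch.Tensor, tau: float = 0.1) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmented views of each clip.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / tau            # cosine similarities / temperature
    targets = torch.arange(z1.size(0), device=z1.device)
    # Matching (i, i) pairs are positives; all other pairs are negatives.
    return F.cross_entropy(logits, targets)
```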
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z)
- Object Motion Sensitivity: A Bio-inspired Solution to the Ego-motion Problem for Event-based Cameras [0.0]
We highlight the capability of the second generation of neuromorphic image sensors, Integrated Retinal Functionality in CMOS Image Sensors (IRIS).
IRIS aims to mimic full retinal computations from photoreceptors to output of the retina for targeted feature-extraction.
Our results show that OMS can accomplish standard computer vision tasks with similar efficiency to conventional RGB and DVS solutions but offers drastic bandwidth reduction.
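For context, the classic object-motion-sensitivity (OMS) computation compares local event activity against its surround so that globally coherent (ego-motion) activity cancels. The sketch below is a generic illustration of that idea; the window sizes and threshold are chosen arbitrarily and are not taken from the IRIS design.

```python
# Generic OMS sketch: an OMS event fires where local (center) event activity
# clearly exceeds the wider (surround) average, suppressing globally coherent
# ego-motion. Window sizes and threshold are illustrative assumptions.
import numpy as np
from scipy.ndimage import uniform_filter

def oms_response(event_counts: np.ndarray,
                 center: int = 3, surround: int = 15,
                 threshold: float = 2.0) -> np.ndarray:
    # event_counts: (H, W) events accumulated over a short time window.
    c = uniform_filter(event_counts.astype(float), size=center)
    s = uniform_filter(event_counts.astype(float), size=surround)
    # Fire where the center stands out against its surround.
    return (c - s) > threshold
```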
arXiv Detail & Related papers (2023-03-24T16:22:06Z)
- Combining Events and Frames using Recurrent Asynchronous Multimodal Networks for Monocular Depth Prediction [51.072733683919246]
We introduce Recurrent Asynchronous Multimodal (RAM) networks to handle asynchronous and irregular data from multiple sensors.
Inspired by traditional RNNs, RAM networks maintain a hidden state that is updated asynchronously and can be queried at any time to generate a prediction.
We show an improvement over state-of-the-art methods by up to 30% in terms of mean depth absolute error.
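As a minimal illustration of that mechanism, the hypothetical cell below updates a shared hidden state whenever either sensor delivers features and can be decoded at any query time; the GRU cell and the depth head are our assumptions, not the paper's architecture.

```python
# Sketch of the RAM idea: asynchronous updates to a shared hidden state,
# with a prediction decodable at arbitrary times.
import torch
import torch.nn as nn

class RAMSketch(nn.Module):
    def __init__(self, feat_dim: int = 64, hidden_dim: int = 128):
        super().__init__()
        self.cell = nn.GRUCell(feat_dim, hidden_dim)
        self.decoder = nn.Linear(hidden_dim, 1)  # e.g., a depth readout head
        self.h = torch.zeros(1, hidden_dim)

    def update(self, feat: torch.Tensor) -> None:
        # Called asynchronously whenever any sensor produces features.
        self.h = self.cell(feat, self.h)

    def query(self) -> torch.Tensor:
        # A prediction can be read out at any time from the latest state.
        return self.decoder(self.h)
```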
arXiv Detail & Related papers (2021-02-18T13:24:35Z)
- Real-Time Face & Eye Tracking and Blink Detection using Event Cameras [3.842206880015537]
Event cameras are emerging neuromorphic vision sensors that capture local light intensity changes at each pixel, generating a stream of asynchronous events.
Driver monitoring systems (DMS) are in-cabin safety systems designed to sense and understand a driver's physical and cognitive state.
This paper proposes a novel method to simultaneously detect and track faces and eyes for driver monitoring.
arXiv Detail & Related papers (2020-10-16T10:02:41Z)
- Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
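A simple way to picture the sync-to-async conversion: when a single input site changes, only outputs within the kernel's footprint need correcting, so a synchronously trained convolution can be evaluated event by event with the same result. The single-channel, stride-1 sketch below illustrates this under those simplifying assumptions; multi-layer bookkeeping is omitted.

```python
# Incremental update behind asynchronous sparse convolutions: an input change
# of `delta` at (y, x) touches only outputs inside the kernel footprint.
import numpy as np

def async_conv_update(output: np.ndarray, kernel: np.ndarray,
                      y: int, x: int, delta: float) -> None:
    # output: (H, W) pre-activation map of a stride-1, zero-padded conv layer;
    # kernel: (k, k); the input at (y, x) just changed by `delta`.
    k = kernel.shape[0]
    r = k // 2
    H, W = output.shape
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            oy, ox = y + dy, x + dx
            if 0 <= oy < H and 0 <= ox < W:
                # Output (oy, ox) sees input (y, x) through kernel tap
                # (r - dy, r - dx) under cross-correlation indexing.
                output[oy, ox] += kernel[r - dy, r - dx] * delta
```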
arXiv Detail & Related papers (2020-03-20T08:39:49Z)