Pushing the boundaries of event subsampling in event-based video classification using CNNs
- URL: http://arxiv.org/abs/2409.08953v1
- Date: Fri, 13 Sep 2024 16:14:45 GMT
- Title: Pushing the boundaries of event subsampling in event-based video classification using CNNs
- Authors: Hesam Araghi, Jan van Gemert, Nergis Tomen
- Abstract summary: In edge AI applications, determining the minimum amount of events for specific tasks can allow reducing the event rate to improve bandwidth, memory, and processing efficiency.
We study the effect of event subsampling on the accuracy of event data classification using convolutional neural network (CNN) models.
- Score: 13.283434521851998
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Event cameras offer low-power visual sensing capabilities ideal for edge-device applications. However, their high event rate, driven by high temporal details, can be restrictive in terms of bandwidth and computational resources. In edge AI applications, determining the minimum amount of events for specific tasks can allow reducing the event rate to improve bandwidth, memory, and processing efficiency. In this paper, we study the effect of event subsampling on the accuracy of event data classification using convolutional neural network (CNN) models. Surprisingly, across various datasets, the number of events per video can be reduced by an order of magnitude with little drop in accuracy, revealing the extent to which we can push the boundaries in accuracy vs. event rate trade-off. Additionally, we also find that lower classification accuracy in high subsampling rates is not solely attributable to information loss due to the subsampling of the events, but that the training of CNNs can be challenging in highly subsampled scenarios, where the sensitivity to hyperparameters increases. We quantify training instability across multiple event-based classification datasets using a novel metric for evaluating the hyperparameter sensitivity of CNNs in different subsampling settings. Finally, we analyze the weight gradients of the network to gain insight into this instability.
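For intuition, here is a minimal sketch (not the authors' code) of the kind of event subsampling studied above: randomly keeping a fraction of the events in a recording and binning the survivors into a fixed-size count frame that a CNN can classify. The function names, the (x, y, t, polarity) array layout, and the two-channel count-frame representation are illustrative assumptions; the paper's actual subsampling scheme, event representation, and CNN architecture are described in the full text.
```python
# A minimal sketch, assuming a raw event stream stored as an (N, 4) float array
# with columns (x, y, t, polarity). Illustrative only; not the authors' implementation.
import numpy as np


def subsample_events(events: np.ndarray, keep_ratio: float, rng: np.random.Generator) -> np.ndarray:
    """Randomly keep a `keep_ratio` fraction of the events, preserving temporal order."""
    n_keep = max(1, int(len(events) * keep_ratio))
    idx = rng.choice(len(events), size=n_keep, replace=False)
    return events[np.sort(idx)]


def events_to_count_frame(events: np.ndarray, height: int, width: int) -> np.ndarray:
    """Accumulate events into a (2, H, W) count frame, one channel per polarity."""
    frame = np.zeros((2, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = (events[:, 3] > 0).astype(int)  # map polarity {-1, +1} to channel {0, 1}
    np.add.at(frame, (p, y, x), 1.0)    # unbuffered add handles repeated pixels
    return frame


# Toy usage: synthesize 10k events on a 128x128 sensor, keep only 10% of them
# (one order of magnitude fewer), and build the CNN input frame.
rng = np.random.default_rng(0)
n = 10_000
events = np.column_stack([
    rng.integers(0, 128, n),            # x coordinate
    rng.integers(0, 128, n),            # y coordinate
    np.sort(rng.uniform(0.0, 1.0, n)),  # timestamp in seconds
    rng.choice([-1, 1], n),             # polarity
]).astype(np.float32)

frame = events_to_count_frame(subsample_events(events, keep_ratio=0.1, rng=rng), 128, 128)
print(frame.shape, frame.sum())  # (2, 128, 128), 1000 surviving events
```
Sweeping `keep_ratio` over, e.g., {1.0, 0.1, 0.01} and retraining the classifier at each setting would trace out the kind of accuracy-vs-event-rate trade-off curve described in the abstract.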
Related papers
- Making Every Event Count: Balancing Data Efficiency and Accuracy in Event Camera Subsampling [13.283434521851998]
Event cameras offer high temporal resolution and power efficiency, making them well-suited for edge AI applications.
Subsampling methods provide a practical solution, but their effect on downstream visual tasks remains underexplored.
We evaluate six hardware-friendly subsampling methods for event video classification on various benchmark datasets.
arXiv Detail & Related papers (2025-05-27T13:37:08Z)
- Enhanced Neuromorphic Semantic Segmentation Latency through Stream Event [0.0]
Achieving optimal semantic segmentation with frame-based vision sensors poses significant challenges for real-time systems like UAVs and self-driving cars.
We leverage event streams from event-based cameras, bio-inspired sensors that trigger events in response to changes in the scene.
We exploit this event information to solve the semantic segmentation task by employing a Spiking Neural Network (SNN), a bio-inspired computing paradigm known for its low energy consumption.
arXiv Detail & Related papers (2025-02-26T09:43:18Z)
- Spiking Neural Network as Adaptive Event Stream Slicer [10.279359105384334]
Event-based cameras provide rich edge information, high dynamic range, and high temporal resolution.
Many state-of-the-art event-based algorithms rely on splitting the events into fixed groups, resulting in the omission of crucial temporal information.
We propose SpikeSlicer, a novel plug-and-play event processing method capable of splitting event streams adaptively.
arXiv Detail & Related papers (2024-10-03T06:41:10Z)
- Representation Learning on Event Stream via an Elastic Net-incorporated Tensor Network [1.9515859963221267]
We present a novel representation method which can capture global correlations of all events in the event stream simultaneously.
Our method achieves effective results in applications such as noise filtering, compared with state-of-the-art methods.
arXiv Detail & Related papers (2024-01-16T02:51:47Z)
- Implicit Event-RGBD Neural SLAM [54.74363487009845]
Implicit neural SLAM has achieved remarkable progress recently.
Existing methods face significant challenges in non-ideal scenarios.
We propose EN-SLAM, the first event-RGBD implicit neural SLAM framework.
arXiv Detail & Related papers (2023-11-18T08:48:58Z)
- EvDNeRF: Reconstructing Event Data with Dynamic Neural Radiance Fields [80.94515892378053]
EvDNeRF is a pipeline for generating event data and training an event-based dynamic NeRF.
NeRFs offer geometric-based learnable rendering, but prior work with events has only considered reconstruction of static scenes.
We show that by training on varied batch sizes of events, we can improve test-time predictions of events at fine time resolutions.
arXiv Detail & Related papers (2023-10-03T21:08:41Z)
- EV-Catcher: High-Speed Object Catching Using Low-latency Event-based Neural Networks [107.62975594230687]
We demonstrate an application where event cameras excel: accurately estimating the impact location of fast-moving objects.
We introduce a lightweight event representation called Binary Event History Image (BEHI) to encode event data at low latency.
We show that the system is capable of achieving a success rate of 81% in catching balls targeted at different locations, with a velocity of up to 13 m/s even on compute-constrained embedded platforms.
arXiv Detail & Related papers (2023-04-14T15:23:28Z)
- Optical flow estimation from event-based cameras and spiking neural networks [0.4899818550820575]
Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs).
We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations.
Thanks to separable convolutions, we have been able to develop a light model that can nonetheless yield reasonably accurate optical flow estimates.
arXiv Detail & Related papers (2023-02-13T16:17:54Z)
- Efficient Graph Neural Network Inference at Large Scale [54.89457550773165]
Graph neural networks (GNNs) have demonstrated excellent performance in a wide range of applications.
Existing scalable GNNs leverage linear propagation to preprocess the features and accelerate the training and inference procedure.
We propose a novel adaptive propagation order approach that generates the personalized propagation order for each node based on its topological information.
arXiv Detail & Related papers (2022-11-01T14:38:18Z)
- Ultra-low Latency Spiking Neural Networks with Spatio-Temporal Compression and Synaptic Convolutional Block [4.081968050250324]
Spiking neural networks (SNNs) offer spatio-temporal information processing capability, low power consumption, and high biological plausibility.
Event stream datasets such as N-MNIST, CIFAR10-DVS, and DVS128 Gesture need to aggregate individual events into frames with a higher temporal resolution for event stream classification.
We propose a spatio-temporal compression method to aggregate individual events into a few time steps of synaptic current to reduce the training and inference latency.
arXiv Detail & Related papers (2022-03-18T15:14:13Z)
- Temporal-wise Attention Spiking Neural Networks for Event Streams Classification [6.623034896340885]
Spiking neural network (SNN) is a brain-inspired, event-triggered computing model.
In this work, we propose a temporal-wise attention SNN model to learn frame-based representation for processing event streams.
We demonstrate that TA-SNN models improve the accuracy of event stream classification tasks.
arXiv Detail & Related papers (2021-07-25T02:28:44Z)
- Curriculum By Smoothing [52.08553521577014]
Convolutional Neural Networks (CNNs) have shown impressive performance in computer vision tasks such as image classification, detection, and segmentation.
We propose an elegant curriculum-based scheme that smooths the feature embedding of a CNN using anti-aliasing or low-pass filters.
As the amount of information in the feature maps increases during training, the network is able to progressively learn better representations of the data.
arXiv Detail & Related papers (2020-03-03T07:27:44Z)
- Approximation and Non-parametric Estimation of ResNet-type Convolutional Neural Networks [52.972605601174955]
We show a ResNet-type CNN can attain the minimax optimal error rates in important function classes.
We derive approximation and estimation error rates of the aforementioned type of CNNs for the Barron and Hölder classes.
arXiv Detail & Related papers (2019-03-24T19:42:39Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.