Neuromorphic Optical Flow and Real-time Implementation with Event
Cameras
- URL: http://arxiv.org/abs/2304.07139v2
- Date: Wed, 12 Jul 2023 13:57:23 GMT
- Title: Neuromorphic Optical Flow and Real-time Implementation with Event
Cameras
- Authors: Yannick Schnider, Stanislaw Wozniak, Mathias Gehrig, Jules Lecomte,
Axel von Arnim, Luca Benini, Davide Scaramuzza, Angeliki Pantazi
- Abstract summary: We build on the latest developments in event-based vision and spiking neural networks.
We propose a new network architecture that improves the state-of-the-art self-supervised optical flow accuracy.
We demonstrate high-speed optical flow prediction with almost two orders of magnitude lower complexity.
- Score: 47.11134388304464
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Optical flow provides information on relative motion that is an important
component in many computer vision pipelines. Neural networks provide high
accuracy optical flow, yet their complexity is often prohibitive for
application at the edge or in robots, where efficiency and latency play a
crucial role. To address this challenge, we build on the latest developments in
event-based vision and spiking neural networks. We propose a new network
architecture, inspired by Timelens, that improves the state-of-the-art
self-supervised optical flow accuracy when operated in both spiking and
non-spiking modes. To implement a real-time pipeline with a physical event
camera, we propose a methodology for principled model simplification based on
activity and latency analysis. We demonstrate high-speed optical flow
prediction with almost two orders of magnitude lower complexity while
maintaining accuracy, opening the path to real-time deployment.
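To make the event-to-flow pipeline described in the abstract more concrete, the snippet below is a minimal, illustrative PyTorch sketch: events are binned into a voxel grid and passed through a toy spiking convolutional layer with leaky integrate-and-fire (LIF) neurons, followed by a per-pixel flow read-out. This is not the Timelens-inspired architecture of the paper; all module names, the sensor resolution, and the hyperparameters are assumptions made for illustration only.

```python
# Toy sketch of an event-based spiking flow pipeline (illustrative only;
# not the architecture proposed in the paper above).
import torch
import torch.nn as nn

def events_to_voxel_grid(xs, ys, ts, ps, bins=5, height=260, width=346):
    """Accumulate events (pixel coords, timestamps, polarities) into a
    (bins, H, W) voxel grid, a common input representation for flow nets."""
    grid = torch.zeros(bins, height, width)
    ts = ts.float()
    t_norm = (ts - ts.min()) / (ts.max() - ts.min() + 1e-9) * (bins - 1)
    b = t_norm.long().clamp(0, bins - 1)
    grid.index_put_((b, ys.long(), xs.long()), ps.float(), accumulate=True)
    return grid

class LIFConv(nn.Module):
    """Convolution followed by a simple leaky integrate-and-fire neuron."""
    def __init__(self, c_in, c_out, decay=0.9, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, 3, padding=1)
        self.decay, self.threshold = decay, threshold

    def forward(self, x_seq):                  # x_seq: (T, B, C, H, W)
        mem, spikes = 0.0, []
        for x in x_seq:                        # iterate over time steps
            mem = self.decay * mem + self.conv(x)
            s = (mem >= self.threshold).float()
            mem = mem - s * self.threshold     # soft reset after spiking
            spikes.append(s)
        return torch.stack(spikes)

class TinySpikingFlowNet(nn.Module):
    """Toy network: spiking encoder + 1x1 read-out of 2-channel flow."""
    def __init__(self, bins=5):
        super().__init__()
        self.enc = LIFConv(bins, 16)
        self.flow_head = nn.Conv2d(16, 2, 1)   # (u, v) per pixel

    def forward(self, voxel_seq):              # (T, B, bins, H, W)
        spikes = self.enc(voxel_seq)
        return self.flow_head(spikes.mean(0))  # rate-coded read-out

# Example: 8 time steps of voxel grids for one 64x64 crop.
net = TinySpikingFlowNet(bins=5)
flow = net(torch.randn(8, 1, 5, 64, 64))       # -> (1, 2, 64, 64)
```

Training such a network in practice would additionally require a surrogate gradient for the non-differentiable spike function and a self-supervised flow loss; those details are outside the scope of this sketch.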
Related papers
- SDformerFlow: Spatiotemporal swin spikeformer for event-based optical flow estimation [10.696635172502141]
Event cameras generate asynchronous and sparse event streams capturing changes in light intensity.
Spiking neural networks (SNNs) share similar asynchronous and sparse characteristics and are well-suited for event cameras.
We propose two solutions for fast and robust optical flow estimation for event cameras: STTFlowNet and SDformerFlow.
arXiv Detail & Related papers (2024-09-06T07:48:18Z) - Optical training of large-scale Transformers and deep neural networks with direct feedback alignment [48.90869997343841]
We experimentally implement a versatile and scalable training algorithm, called direct feedback alignment, on a hybrid electronic-photonic platform.
An optical processing unit performs large-scale random matrix multiplications, which is the central operation of this algorithm, at speeds up to 1500 TeraOps.
We study the compute scaling of our hybrid optical approach, and demonstrate a potential advantage for ultra-deep and wide neural networks.
arXiv Detail & Related papers (2024-09-01T12:48:47Z) - PNAS-MOT: Multi-Modal Object Tracking with Pareto Neural Architecture Search [64.28335667655129]
Multiple object tracking is a critical task in autonomous driving.
As tracking accuracy improves, neural networks become increasingly complex, posing challenges for practical deployment in real driving scenarios due to high latency.
In this paper, we explore the use of the neural architecture search (NAS) methods to search for efficient architectures for tracking, aiming for low real-time latency while maintaining relatively high accuracy.
arXiv Detail & Related papers (2024-03-23T04:18:49Z) - Lightweight Delivery Detection on Doorbell Cameras [9.735137325682825]
In this work we investigate an important home application, video based delivery detection, and present a simple lightweight pipeline for this task.
Our method relies on motion cues to generate a set of coarse activity proposals, followed by their classification with a mobile-friendly 3DCNN network.
arXiv Detail & Related papers (2023-05-13T01:28:28Z) - Optical flow estimation from event-based cameras and spiking neural
networks [0.4899818550820575]
Event-based sensors are an excellent fit for Spiking Neural Networks (SNNs).
We propose a U-Net-like SNN which, after supervised training, is able to make dense optical flow estimations.
Thanks to separable convolutions, we have been able to develop a light model that can nonetheless yield reasonably accurate optical flow estimates (a sketch of such a separable-convolution block appears after this list).
arXiv Detail & Related papers (2023-02-13T16:17:54Z) - Spatio-Temporal Recurrent Networks for Event-Based Optical Flow
Estimation [47.984368369734995]
We introduce a novel recurrent encoding-decoding neural network architecture for event-based optical flow estimation.
The network is end-to-end trained with self-supervised learning on the Multi-Vehicle Stereo Event Camera dataset.
It outperforms existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2021-09-10T13:37:37Z) - FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z) - Optical Flow Estimation from a Single Motion-blurred Image [66.2061278123057]
Motion blur in an image can be of practical interest for fundamental computer vision problems.
We propose a novel framework to estimate optical flow from a single motion-blurred image in an end-to-end manner.
arXiv Detail & Related papers (2021-03-04T12:45:18Z) - Back to Event Basics: Self-Supervised Learning of Image Reconstruction
for Event Cameras via Photometric Constancy [0.0]
Event cameras are novel vision sensors that sample, in an asynchronous fashion, brightness increments with low latency and high temporal resolution.
We propose a novel, lightweight neural network for optical flow estimation that achieves high speed inference with only a minor drop in performance.
Results across multiple datasets show that the performance of the proposed self-supervised approach is in line with the state-of-the-art.
arXiv Detail & Related papers (2020-09-17T13:30:05Z)
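Several of the entries above (the U-Net-like SNN built on separable convolutions and FastFlowNet in particular) shrink optical-flow networks with lightweight convolution blocks. As a rough illustration of the depthwise-separable idea only, and not the architecture of any specific paper listed here, below is a short PyTorch sketch with a parameter-count comparison; all names and channel sizes are assumptions.

```python
# Minimal sketch of a depthwise-separable convolution block of the kind used
# to shrink optical-flow networks (illustrative names and sizes only).
import torch
import torch.nn as nn

class SeparableConv(nn.Module):
    """Depthwise conv (one filter per channel) + 1x1 pointwise conv.
    Weight count drops from C_in*C_out*k*k to C_in*k*k + C_in*C_out."""
    def __init__(self, c_in, c_out, k=3):
        super().__init__()
        self.depthwise = nn.Conv2d(c_in, c_in, k, padding=k // 2, groups=c_in)
        self.pointwise = nn.Conv2d(c_in, c_out, 1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.pointwise(self.depthwise(x)))

# Quick parameter comparison against a standard 3x3 convolution.
std = nn.Conv2d(64, 128, 3, padding=1)
sep = SeparableConv(64, 128)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), count(sep))  # roughly 74k vs 9k parameters
```

The roughly 8x reduction in parameters (and a similar reduction in multiply-accumulate operations) is what makes this block a common choice for edge-oriented flow networks.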