Dynamic Resource-aware Corner Detection for Bio-inspired Vision Sensors
- URL: http://arxiv.org/abs/2010.15507v1
- Date: Thu, 29 Oct 2020 12:01:33 GMT
- Title: Dynamic Resource-aware Corner Detection for Bio-inspired Vision Sensors
- Authors: Sherif A.S. Mohamed, Jawad N. Yasin, Mohammad-Hashem Haghbayan,
Antonio Miele, Jukka Heikkonen, Hannu Tenhunen, and Juha Plosila
- Abstract summary: We present an algorithm to detect asynchronous corners from a stream of events in real-time on embedded systems.
The proposed algorithm is capable of selecting the best corner candidate among neighbors and achieves an average execution-time saving of 59%.
- Score: 0.9988653233188148
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event-based cameras are vision devices that transmit only brightness changes
with low latency and ultra-low power consumption. Such characteristics make
event-based cameras attractive in the field of localization and object tracking
in resource-constrained systems. Since such cameras generate a huge number of
events, selecting and filtering the incoming events benefits both the accuracy
of the extracted features and the computational load. In this paper, we present
an algorithm to detect
asynchronous corners from a stream of events in real-time on embedded systems.
The algorithm is called the Three Layer Filtering-Harris or TLF-Harris
algorithm. The algorithm is based on an event-filtering strategy whose purpose
is 1) to increase accuracy by deliberately eliminating some incoming events,
i.e., noise, and 2) to improve the real-time performance of the system, i.e.,
to preserve a constant throughput in terms of input events per second, by
discarding unnecessary events at a limited accuracy loss. An
approximation of the Harris algorithm, in turn, is used to exploit its
high-quality detection capability with a low-complexity implementation to
enable seamless real-time performance on embedded computing platforms. The
proposed algorithm is capable of selecting the best corner candidate among
neighbors and achieves an average execution-time saving of 59% compared with
the conventional Harris score. Moreover, our approach outperforms the competing
methods, such as eFAST, eHarris, and FA-Harris, in terms of real-time
performance, and surpasses Arc* in terms of accuracy.
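As a rough illustration of the filter-then-detect flow described above, here is a minimal Python sketch: a per-pixel refractory filter discards redundant or noisy events, and an approximate Harris score is computed on a binary surface of active events around each surviving event. The resolution, refractory period, patch size, and threshold are illustrative assumptions, not the paper's TLF-Harris parameters.

```python
# Hypothetical filter-then-Harris pipeline for event streams; a sketch,
# not the authors' TLF-Harris implementation.
import numpy as np

H, W = 180, 240              # sensor resolution (illustrative)
PATCH = 4                    # half-size of the local patch around an event
REFRACTORY_US = 1000         # per-pixel refractory period (microseconds)

last_ts = np.full((H, W), -np.inf)        # last accepted event per pixel
surface = np.zeros((H, W), np.float32)    # binary surface of active events

def filter_event(x, y, t):
    """Drop events that re-fire within the refractory period,
    a common noise/redundancy heuristic."""
    if t - last_ts[y, x] < REFRACTORY_US:
        return False
    last_ts[y, x] = t
    return True

def harris_score(x, y, k=0.04):
    """Approximate Harris response on the binary-surface patch."""
    p = surface[y - PATCH:y + PATCH + 1, x - PATCH:x + PATCH + 1]
    gy, gx = np.gradient(p)
    ixx, iyy, ixy = (gx * gx).sum(), (gy * gy).sum(), (gx * gy).sum()
    return ixx * iyy - ixy * ixy - k * (ixx + iyy) ** 2

def process_event(x, y, t, score_thresh=0.1):
    if not filter_event(x, y, t):
        return None                       # discarded as noise/redundant
    surface[y, x] = 1.0
    if PATCH <= x < W - PATCH and PATCH <= y < H - PATCH:
        s = harris_score(x, y)
        if s > score_thresh:
            return (x, y, t, s)           # corner candidate
    return None
```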
Related papers
- Noise Filtering Benchmark for Neuromorphic Satellites Observations [39.781091151259766]
Event cameras capture sparse, asynchronous brightness changes which offer high temporal resolution, high dynamic range, low power consumption, and sparse data output.
These advantages make them ideal for Space Situational Awareness, particularly in detecting resident space objects moving within a telescope's field of view.
However, the output from event cameras often includes substantial background activity noise, which is known to be more prevalent in low-light conditions.
This noise can overwhelm the sparse events generated by satellite signals, making detection and tracking more challenging.
arXiv Detail & Related papers (2024-11-18T02:02:24Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness change of every pixel in an asynchronous manner.
Event streams are divided into grids along the x-y-t coordinates for both positive and negative polarity, producing a set of pillars as a 3D tensor representation (a rough sketch of this binning follows below).
Long memory is encoded in the hidden state of adaptive convLSTMs, while short memory is modeled by computing the spatial-temporal correlation between event pillars.
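A minimal sketch of binning an event stream into a per-polarity x-y-t grid, in the spirit of the pillar representation above; the function name, shapes, and bin count are assumptions, not the paper's code.

```python
# Illustrative event-to-pillar binning; a sketch under assumed shapes,
# not the paper's implementation.
import numpy as np

def events_to_pillars(events, height, width, num_bins):
    """events: (N, 4) array of (x, y, t, polarity in {0, 1}).
    Returns a (2, num_bins, height, width) tensor of event counts."""
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    t = events[:, 2]
    p = events[:, 3].astype(int)
    # Normalize timestamps into [0, num_bins) temporal slices.
    t_norm = (t - t.min()) / max(float(t.max() - t.min()), 1e-9)
    b = np.minimum((t_norm * num_bins).astype(int), num_bins - 1)
    grid = np.zeros((2, num_bins, height, width), np.float32)
    np.add.at(grid, (p, b, y, x), 1.0)   # accumulate per-cell event counts
    return grid
```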
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- ETAD: A Unified Framework for Efficient Temporal Action Detection [70.21104995731085]
Untrimmed video understanding such as temporal action detection (TAD) often suffers from the huge demand for computing resources.
We build a unified framework for efficient end-to-end temporal action detection (ETAD).
ETAD achieves state-of-the-art performance on both THUMOS-14 and ActivityNet-1.3.
arXiv Detail & Related papers (2022-05-14T21:16:21Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Neuromorphic Camera Denoising using Graph Neural Network-driven Transformers [3.805262583092311]
Neuromorphic vision is a bio-inspired technology that has triggered a paradigm shift in the computer-vision community.
Neuromorphic cameras suffer from significant amounts of measurement noise.
This noise deteriorates the performance of neuromorphic event-based perception and navigation algorithms.
arXiv Detail & Related papers (2021-12-17T18:57:36Z)
- hARMS: A Hardware Acceleration Architecture for Real-Time Event-Based Optical Flow [0.0]
Event-based vision sensors produce asynchronous event streams with high temporal resolution based on changes in the visual scene.
Existing solutions for calculating optical flow from event data fail to capture the true direction of motion due to the aperture problem.
We present a hardware realization of the fARMS algorithm allowing for real-time computation of true flow on low-power, embedded platforms.
arXiv Detail & Related papers (2021-12-13T16:27:17Z)
- luvHarris: A Practical Corner Detector for Event-cameras [3.5097082077065]
Event-driven computer vision has become more accessible.
Current state-of-the-art methods have either unsatisfactory accuracy or unsatisfactory real-time performance when considered for practical use.
We present yet another method to perform corner detection, dubbed look-up event-Harris (luvHarris).
arXiv Detail & Related papers (2021-05-24T17:54:06Z)
- SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using Megapixel Resolution CeleX-V Camera [9.314068908300285]
Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events (see the sketch below).
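A minimal sketch of an exponentially decaying surface of active events, the kind of adaptive normalization mentioned above; the decay constant and class interface are assumptions, not the SE-Harris code.

```python
# Illustrative exponentially decaying surface of active events; a sketch,
# not the SE-Harris implementation.
import numpy as np

TAU_US = 50_000.0  # assumed decay time constant (microseconds)

class DecaySurface:
    def __init__(self, height, width):
        self.value = np.zeros((height, width), np.float32)  # last event mark
        self.stamp = np.zeros((height, width), np.float64)  # last event time

    def update(self, x, y, t):
        """Register an incoming event at pixel (x, y) and time t."""
        self.value[y, x] = 1.0
        self.stamp[y, x] = t

    def read_patch(self, x, y, t, r=4):
        """Patch around (x, y) with each pixel decayed by the time since
        its last event (boundary handling omitted for brevity)."""
        v = self.value[y - r:y + r + 1, x - r:x + r + 1]
        s = self.stamp[y - r:y + r + 1, x - r:x + r + 1]
        return v * np.exp(-(t - s) / TAU_US)
```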
arXiv Detail & Related papers (2021-05-02T14:06:28Z)
- FastFlowNet: A Lightweight Network for Fast Optical Flow Estimation [81.76975488010213]
Dense optical flow estimation plays a key role in many robotic vision tasks.
Current networks often have a large number of parameters and incur heavy computation costs.
Our proposed FastFlowNet works in the well-known coarse-to-fine manner with the following innovations.
arXiv Detail & Related papers (2021-03-08T03:09:37Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.