SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using
Megapixel Resolution CeleX-V Camera
- URL: http://arxiv.org/abs/2105.00480v1
- Date: Sun, 2 May 2021 14:06:28 GMT
- Title: SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using
Megapixel Resolution CeleX-V Camera
- Authors: Jinjian Li, Chuandong Guo, Li Su, Xiangyu Wang, Quan Hu
- Abstract summary: Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events.
- Score: 9.314068908300285
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras are novel neuromorphic vision sensors with ultrahigh temporal
resolution and low latency, both in the order of microseconds. Instead of image
frames, event cameras generate an asynchronous event stream of per-pixel
intensity changes with precise timestamps. The resulting sparse data structure
impedes applying many conventional computer vision techniques to event streams,
and specific algorithms should be designed to leverage the information provided
by event cameras. We propose a corner detection algorithm, eSUSAN, inspired by
the conventional SUSAN (smallest univalue segment assimilating nucleus)
algorithm for corner detection. The proposed eSUSAN extracts the univalue
segment assimilating nucleus from the circle kernel based on the similarity
across timestamps and distinguishes corner events by the number of pixels in
the nucleus area. Moreover, eSUSAN is fast enough to be applied to CeleX-V, the
event camera with the highest resolution available. Based on eSUSAN, we also
propose the SE-Harris corner detector, which uses adaptive normalization based
on exponential decay to quickly construct a local surface of active events and
the event-based Harris detector to refine the corners identified by eSUSAN. We
evaluated the proposed algorithms on a public dataset and CeleX-V data. Both
eSUSAN and SE-Harris exhibit higher real-time performance than existing
algorithms while maintaining high accuracy and tracking performance.
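The core of both detectors is compact enough to sketch. The Python snippet below is a minimal illustration, not the authors' implementation: eSUSAN assimilates circle-kernel pixels whose latest timestamps are close to the nucleus event's timestamp and classifies the event by the size of that assimilated set, while SE-Harris normalizes the surface of active events (SAE) with an exponential decay before an event-based Harris response refines the candidates. The kernel radius, similarity threshold, USAN band, and decay constant are illustrative assumptions rather than the paper's tuned parameters.

```python
# Minimal sketch of the eSUSAN test and the decayed SAE used by SE-Harris.
# All thresholds below are illustrative assumptions, not the paper's values.
import numpy as np

WIDTH, HEIGHT = 1280, 800        # CeleX-V sensor resolution (~1 megapixel)
sae = np.zeros((HEIGHT, WIDTH))  # surface of active events: latest timestamp per pixel

# 16-pixel Bresenham circle of radius 3 around the nucleus pixel.
CIRCLE = [(0, 3), (1, 3), (2, 2), (3, 1), (3, 0), (3, -1), (2, -2), (1, -3),
          (0, -3), (-1, -3), (-2, -2), (-3, -1), (-3, 0), (-3, 1), (-2, 2), (-1, 3)]

def esusan(x, y, t, t_sim=5e-3, n_min=3, n_max=6):
    """Update the SAE with event (x, y, t); return True for a corner candidate.

    A kernel pixel joins the USAN when its latest timestamp is close to the
    nucleus timestamp, i.e. the moving edge swept over it recently. The event
    is kept when the USAN size falls in a band that excludes both flat
    regions and straight edges (the band itself is an assumption here).
    """
    sae[y, x] = t
    usan = 0
    for dx, dy in CIRCLE:
        xx, yy = x + dx, y + dy
        if 0 <= xx < WIDTH and 0 <= yy < HEIGHT and abs(t - sae[yy, xx]) < t_sim:
            usan += 1
    return n_min <= usan <= n_max

def decayed_sae(t_now, tau=10e-3):
    """Adaptive normalization by exponential decay: recent events map to ~1,
    stale ones decay toward 0, yielding a bounded local surface on which an
    event-based Harris score can refine eSUSAN's candidate corners."""
    return np.exp(-(t_now - sae) / tau)
```

Because the Harris refinement only runs on events that pass the cheap eSUSAN test, the combined detector can keep up with a megapixel event stream; the sketch omits the Harris gradient computation itself.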
Related papers
- EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities for action recognition compared to standard RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
To better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture brightness changes of every pixel in an asynchronous manner.
Event streams are divided into grids in x-y-t coordinates for both positive and negative polarity, producing a set of pillars as a 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
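As a hedged aside, the pillar construction summarized above fits in a few lines; the sketch below bins events into an x-y-t grid separately per polarity to form the dense tensor (shapes, bin count, and the (x, y, t, p) event layout are assumptions for illustration).

```python
# Sketch: turn an event array into polarity-separated x-y-t count pillars.
import numpy as np

def events_to_pillars(events, width, height, t_bins, t_start, t_end):
    """events: rows of (x, y, t, p) with p in {0, 1} (negative/positive).
    Returns a (2, t_bins, height, width) tensor of per-cell event counts."""
    grid = np.zeros((2, t_bins, height, width), dtype=np.float32)
    x = events[:, 0].astype(int)
    y = events[:, 1].astype(int)
    p = events[:, 3].astype(int)
    # Map each timestamp to a temporal bin index in [0, t_bins - 1].
    tb = ((events[:, 2] - t_start) / (t_end - t_start) * t_bins).astype(int)
    tb = np.clip(tb, 0, t_bins - 1)
    np.add.at(grid, (p, tb, y, x), 1.0)  # scatter-add one count per event
    return grid
```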
- How Many Events do You Need? Event-based Visual Place Recognition Using Sparse But Varying Pixels [29.6328152991222]
One of the potential applications of event camera research lies in visual place recognition for robot localization.
We show that the absolute difference in the number of events accumulated into event frames at a sparse set of varying pixel locations can be sufficient for the place recognition task.
We evaluate our proposed approach on the Brisbane-Event-VPR dataset in an outdoor driving scenario, as well as the newly contributed indoor QCR-Event-VPR dataset.
arXiv Detail & Related papers (2022-06-28T00:24:12Z)
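The counting idea in this entry also admits a short sketch: accumulate events into count frames, then compare places by the absolute difference of counts at a sparse set of pixel locations. The pixel-selection input and the summed-absolute-difference distance below are illustrative assumptions, not necessarily the paper's exact criteria.

```python
# Sketch: event-count frames compared at a sparse set of "varying" pixels.
import numpy as np

def count_frame(events, width, height):
    """Accumulate (x, y) event coordinates into a per-pixel count image."""
    frame = np.zeros((height, width), dtype=np.float32)
    np.add.at(frame, (events[:, 1].astype(int), events[:, 0].astype(int)), 1.0)
    return frame

def place_distance(query_frame, ref_frame, rows, cols):
    """Sum of absolute count differences at the chosen sparse pixels;
    the smallest distance over the reference set gives the matched place."""
    return np.abs(query_frame[rows, cols] - ref_frame[rows, cols]).sum()
```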
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- luvHarris: A Practical Corner Detector for Event-cameras [3.5097082077065]
Event-driven computer vision has become more accessible.
Current state-of-the-art methods exhibit either unsatisfactory accuracy or poor real-time performance when considered for practical use.
We present yet another method to perform corner detection, dubbed look-up event-Harris (luvHarris).
arXiv Detail & Related papers (2021-05-24T17:54:06Z)
- Asynchronous Corner Tracking Algorithm based on Lifetime of Events for DAVIS Cameras [0.9988653233188148]
Event cameras, such as the Dynamic and Active-pixel Vision Sensor (DAVIS), capture intensity changes in the scene and generate a stream of events in an asynchronous fashion.
The output rate of such cameras can reach up to 10 million events per second in highly dynamic environments.
A novel asynchronous corner tracking method is proposed that uses both events and intensity images captured by a DAVIS camera.
arXiv Detail & Related papers (2020-10-29T12:02:40Z)
- Dynamic Resource-aware Corner Detection for Bio-inspired Vision Sensors [0.9988653233188148]
We present an algorithm to detect asynchronous corners from a stream of events in real-time on embedded systems.
The proposed algorithm is capable of selecting the best corner candidate among neighbors and achieves an average execution time savings of 59%.
arXiv Detail & Related papers (2020-10-29T12:01:33Z)
- Learning Monocular Dense Depth from Events [53.078665310545745]
Event cameras output brightness changes in the form of a stream of asynchronous events instead of intensity frames.
Recent learning-based approaches have been applied to event-based data for tasks such as monocular depth prediction.
We propose a recurrent architecture to solve this task and show significant improvement over standard feed-forward methods.
arXiv Detail & Related papers (2020-10-16T12:36:23Z)
- Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
- EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary Dynamic Vision Sensors [5.674895233111088]
This paper presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor.
To exploit the background removal property of a static DVS, we propose creating an event-based binary image that signals the presence or absence of events in a frame duration.
This is the first time a stationary DVS-based traffic monitoring solution is extensively compared to simultaneously recorded RGB frame-based methods.
arXiv Detail & Related papers (2020-05-31T03:01:35Z)
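The binary image creation mentioned above is simple enough to sketch: within one frame duration a pixel becomes 1 if it received any event and 0 otherwise, so with a stationary sensor the event-free static background vanishes and moving objects remain. The event layout is an assumption for illustration.

```python
# Sketch: presence/absence binary image from one frame duration of events.
import numpy as np

def binary_event_image(events, width, height):
    """events: rows of (x, y) coordinates observed within one frame duration."""
    img = np.zeros((height, width), dtype=np.uint8)
    img[events[:, 1].astype(int), events[:, 0].astype(int)] = 1
    return img
```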
- Event-based Asynchronous Sparse Convolutional Networks [54.094244806123235]
Event cameras are bio-inspired sensors that respond to per-pixel brightness changes in the form of asynchronous and sparse "events".
We present a general framework for converting models trained on synchronous image-like event representations into asynchronous models with identical output.
We show both theoretically and experimentally that this drastically reduces the computational complexity and latency of high-capacity, synchronous neural networks.
arXiv Detail & Related papers (2020-03-20T08:39:49Z)