luvHarris: A Practical Corner Detector for Event-cameras
- URL: http://arxiv.org/abs/2105.11443v1
- Date: Mon, 24 May 2021 17:54:06 GMT
- Title: luvHarris: A Practical Corner Detector for Event-cameras
- Authors: Arren Glover, Aiko Dinale, Leandro De Souza Rosa, Simeon Bamford, and
Chiara Bartolozzi
- Abstract summary: Event-driven computer vision has become more accessible.
Current state-of-the-art methods have either unsatisfactory accuracy or unsatisfactory real-time performance when considered for practical use.
We present yet another method to perform corner detection, dubbed look-up event-Harris (luvHarris).
- Score: 3.5097082077065
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A number of corner detection methods have been proposed for event
cameras in recent years, as event-driven computer vision has become more
accessible. Current state-of-the-art methods have either unsatisfactory
accuracy or unsatisfactory real-time performance when considered for practical
use, i.e. random motion using a live camera in an unconstrained environment. In
this paper, we present yet another method to perform corner detection, dubbed
look-up event-Harris (luvHarris), that employs the Harris algorithm for high
accuracy but achieves an improved event throughput. Our method makes two major
contributions: 1. a novel "threshold ordinal event-surface" that removes
certain tuning parameters and is well suited to Harris operations, and 2. an
implementation of the Harris algorithm in which the computational load
per event is minimised and computationally heavy convolutions are performed
only 'as-fast-as-possible', i.e. only as computational resources become
available. The result is a practical, real-time, and robust corner detector
that runs at more than $2.6\times$ the speed of the current state-of-the-art; a
necessity when using high-resolution event-cameras in real-time. We explain the
considerations taken for the approach, compare the algorithm to the current
state-of-the-art in terms of computational performance and detection accuracy,
and discuss the validity of the proposed approach for event cameras.
Related papers
- Graph-based Asynchronous Event Processing for Rapid Object Recognition [59.112755601918074]
Event cameras capture an asynchronous event stream in which each event encodes pixel location, trigger time, and the polarity of the brightness change.
We introduce a novel graph-based framework for event cameras, namely SlideGCN.
Our approach can efficiently process data event-by-event, unlocking the low-latency nature of event data while still maintaining the graph's structure internally.
arXiv Detail & Related papers (2023-08-28T08:59:57Z) - EventTransAct: A video transformer-based framework for Event-camera
based action recognition [52.537021302246664]
Event cameras offer new opportunities compared to standard action recognition in RGB videos.
In this study, we employ a computationally efficient model, namely the video transformer network (VTN), which initially acquires spatial embeddings per event-frame.
In order to better adapt the VTN to the sparse and fine-grained nature of event data, we design an Event-Contrastive Loss ($\mathcal{L}_{EC}$) and event-specific augmentations.
arXiv Detail & Related papers (2023-08-25T23:51:07Z) - Event-based Simultaneous Localization and Mapping: A Comprehensive Survey [52.73728442921428]
A review of event-based vSLAM algorithms that exploit the benefits of asynchronous and irregular event streams for localization and mapping tasks.
The paper categorizes event-based vSLAM methods into four main categories: feature-based, direct, motion-compensation, and deep-learning methods.
arXiv Detail & Related papers (2023-04-19T16:21:14Z) - ETAD: A Unified Framework for Efficient Temporal Action Detection [70.21104995731085]
Untrimmed video understanding, such as temporal action detection (TAD), often suffers from a huge demand for computing resources.
We build a unified framework for efficient end-to-end temporal action detection (ETAD)
ETAD achieves state-of-the-art performance on both THUMOS-14 and ActivityNet-1.3.
arXiv Detail & Related papers (2022-05-14T21:16:21Z) - Event Transformer. A sparse-aware solution for efficient event data
processing [9.669942356088377]
Event Transformer (EvT) is a framework that effectively takes advantage of event-data properties to be highly efficient and accurate.
EvT is evaluated on different event-based benchmarks for action and gesture recognition.
Results show better or comparable accuracy to the state-of-the-art while requiring significantly less computation resources.
arXiv Detail & Related papers (2022-04-07T10:49:17Z) - Sample and Computation Redistribution for Efficient Face Detection [137.19388513633484]
Training data sampling and computation distribution strategies are the keys to efficient and accurate face detection.
scrfdf34 outperforms the best competitor, TinaFace, by $3.86\%$ (AP at hard set) while being more than $3\times$ faster on GPUs with VGA-resolution images.
arXiv Detail & Related papers (2021-05-10T23:51:14Z) - SE-Harris and eSUSAN: Asynchronous Event-Based Corner Detection Using
Megapixel Resolution CeleX-V Camera [9.314068908300285]
Event cameras generate an asynchronous event stream of per-pixel intensity changes with precise timestamps.
We propose a corner detection algorithm, eSUSAN, inspired by the conventional SUSAN (smallest univalue segment assimilating nucleus) algorithm for corner detection.
We also propose the SE-Harris corner detector, which uses adaptive normalization based on exponential decay to quickly construct a local surface of active events.
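The exponentially decaying local surface mentioned above can be sketched as a generic time-surface with exponential decay; this is an assumed, simplified illustration of the general idea (the `tau` constant and per-pixel timestamp-map layout are hypothetical, not SE-Harris's exact adaptive normalization):

```python
import numpy as np

def decay_surface(events, shape, tau=0.05):
    """Sketch of a time-surface with exponential decay.

    Each pixel stores the timestamp of its last event; the surface
    value at query time t decays as exp(-(t - t_last) / tau), so
    recently active pixels are close to 1 and idle pixels fall to 0.
    Events are (t, x, y) tuples assumed sorted by time.
    """
    last_t = np.full(shape, -np.inf)  # -inf => pixel never fired (value 0)
    for t, x, y in events:
        last_t[y, x] = t
    t_now = events[-1][0]             # query at the latest event's time
    return np.exp(-(t_now - last_t) / tau)
```

A Harris-style score can then be computed on this bounded surface; the decay constant `tau` plays the role that SE-Harris's adaptive normalization seeks to set automatically from the local event rate.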
arXiv Detail & Related papers (2021-05-02T14:06:28Z) - cMinMax: A Fast Algorithm to Find the Corners of an N-dimensional Convex
Polytope [4.157415305926584]
Corners are used in image registration and recognition, tracking, SLAM, robot path finding, and 2D or 3D object detection and retrieval.
The proposed algorithm is faster, approximately by a factor of 5 compared to the widely used Harris Corner Detection algorithm.
The algorithm can also be extended to N-dimensional polyhedrons.
arXiv Detail & Related papers (2020-11-28T00:32:11Z) - Dynamic Resource-aware Corner Detection for Bio-inspired Vision Sensors [0.9988653233188148]
We present an algorithm to detect asynchronous corners from a stream of events in real-time on embedded systems.
The proposed algorithm is capable of selecting the best corner candidate among neighbors and achieves an average execution-time savings of 59%.
arXiv Detail & Related papers (2020-10-29T12:01:33Z) - Unsupervised Feature Learning for Event Data: Direct vs Inverse Problem
Formulation [53.850686395708905]
Event-based cameras record an asynchronous stream of per-pixel brightness changes.
In this paper, we focus on single-layer architectures for representation learning from event data.
We show improvements of up to 9% in recognition accuracy compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-09-23T10:40:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.