ANMS: Asynchronous Non-Maximum Suppression in Event Stream
- URL: http://arxiv.org/abs/2303.10575v1
- Date: Sun, 19 Mar 2023 05:33:32 GMT
- Title: ANMS: Asynchronous Non-Maximum Suppression in Event Stream
- Authors: Qianang Zhou, JunLin Xiong, Youfu Li
- Abstract summary: Non-maximum suppression (NMS) is widely used in frame-based tasks as an essential post-processing algorithm.
This paper proposes a general-purpose asynchronous non-maximum suppression pipeline (ANMS).
The proposed pipeline extracts a fine feature stream from the output of the original detectors and adapts to the speed of motion.
- Score: 15.355579943905585
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The non-maximum suppression (NMS) is widely used in frame-based tasks as an
essential post-processing algorithm. However, event-based NMS either has high
computational complexity or leads to frequent discontinuities. As a result, the
performance of event-based corner detectors is limited. This paper proposes a
general-purpose asynchronous non-maximum suppression pipeline (ANMS), and
applies it to corner event detection. The proposed pipeline extracts a fine
feature stream from the output of the original detectors and adapts to the speed of
motion. The ANMS runs directly on the asynchronous event stream with extremely
low latency, which hardly affects the speed of original detectors.
Additionally, we evaluate the DAVIS-based ground-truth labeling method to fill
the gap between frames and events. Evaluation on a public dataset indicates that
the proposed ANMS pipeline significantly improves the performance of three
classical asynchronous detectors with negligible latency. More importantly, the
proposed ANMS framework is a natural extension of NMS, which is applicable to
other asynchronous scoring tasks for event cameras.
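As a rough illustration of how such a pipeline can operate on an asynchronous stream, the sketch below suppresses a scored event when a stronger event was already emitted within a small spatio-temporal neighbourhood. The (x, y, t, score) event layout, the fixed pixel radius, and the fixed time window are assumptions for illustration; in particular, the paper's motion-adaptive window is replaced here by a constant one.

```python
from collections import deque

def anms_stream(scored_events, radius=3, time_window=5e-3):
    """Yield events that survive online non-maximum suppression.

    scored_events: iterable of (x, y, t, score) tuples ordered by timestamp t
    (seconds). An event is suppressed when a stronger (or equal) event was
    already emitted within `radius` pixels and `time_window` seconds.
    """
    recent = deque()  # recently emitted events, oldest first
    for x, y, t, s in scored_events:
        # Evict emitted events that fell outside the temporal window.
        while recent and t - recent[0][2] > time_window:
            recent.popleft()
        if any(abs(x - ex) <= radius and abs(y - ey) <= radius and es >= s
               for ex, ey, et, es in recent):
            continue  # a stronger neighbour already passed: suppress
        recent.append((x, y, t, s))
        yield (x, y, t, s)
```

Feeding this the scored output of an asynchronous corner detector yields a thinned corner stream event by event, without waiting for a frame boundary.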
Related papers
- Fast Window-Based Event Denoising with Spatiotemporal Correlation
Enhancement [85.66867277156089]
We propose window-based event denoising, which simultaneously deals with a stack of events.
In the spatial domain, we choose maximum a posteriori (MAP) estimation to discriminate real-world events from noise.
Our algorithm can remove event noise effectively and efficiently and improve the performance of downstream tasks.
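The summary does not give the MAP formulation itself; the sketch below substitutes a simple neighbour-support test over one stack (window) of events, which captures the same intuition that isolated events are likely noise. The (x, y) event layout, the 3x3 neighbourhood, and the support threshold are illustrative assumptions.

```python
import numpy as np

def denoise_event_window(events, shape, min_support=2):
    """Keep events from one temporal window whose 3x3 neighbourhood is busy.

    events: iterable of (x, y) pixel coordinates for the window.
    shape:  (H, W) sensor resolution.
    A crude stand-in for a MAP signal-vs-noise decision: events with fewer
    than `min_support` neighbours in the same window are treated as noise.
    """
    H, W = shape
    events = list(events)
    count = np.zeros((H, W), dtype=np.int32)
    for x, y in events:
        count[y, x] += 1
    kept = []
    for x, y in events:
        y0, y1 = max(0, y - 1), min(H, y + 2)
        x0, x1 = max(0, x - 1), min(W, x + 2)
        support = int(count[y0:y1, x0:x1].sum()) - 1  # exclude the event itself
        if support >= min_support:
            kept.append((x, y))
    return kept
```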
arXiv Detail & Related papers (2024-02-14T15:56:42Z) - SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z) - AEGNN: Asynchronous Event-based Graph Neural Networks [54.528926463775946]
Event-based Graph Neural Networks generalize standard GNNs to process events as "evolving" spatio-temporal graphs.
AEGNNs are easily trained on synchronous inputs and can be converted to efficient, "asynchronous" networks at test time.
arXiv Detail & Related papers (2022-03-31T16:21:12Z) - ProgressiveMotionSeg: Mutually Reinforced Framework for Event-Based
Motion Segmentation [101.19290845597918]
This paper presents a Motion Estimation (ME) module and an Event Denoising (ED) module jointly optimized in a mutually reinforced manner.
Taking temporal correlation as guidance, the ED module calculates the confidence that each event belongs to real activity, and transmits it to the ME module to update the energy function of motion segmentation for noise suppression.
arXiv Detail & Related papers (2022-03-22T13:40:26Z) - Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO)
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z) - Lightweight Jet Reconstruction and Identification as an Object Detection
Task [5.071565475111431]
We apply convolutional techniques to end-to-end jet identification and reconstruction tasks encountered at the CERN Large Hadron Collider.
PFJet-SSD performs simultaneous localization, classification and regression tasks to cluster jets and reconstruct their features.
We show that the ternary network closely matches the performance of its full-precision equivalent and outperforms the state-of-the-art rule-based algorithm.
arXiv Detail & Related papers (2022-02-09T15:01:53Z) - End-to-End Object Detection with Fully Convolutional Network [71.56728221604158]
We introduce a Prediction-aware One-To-One (POTO) label assignment for classification to enable end-to-end detection.
A simple 3D Max Filtering (3DMF) is proposed to utilize the multi-scale features and improve the discriminability of convolutions in the local region.
Our end-to-end framework achieves competitive performance against many state-of-the-art detectors with NMS on COCO and CrowdHuman datasets.
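As a sketch of what a 3D max filter over a feature pyramid can look like (a PyTorch stand-in, not the authors' 3DMF implementation; the kernel sizes and the bilinear alignment across levels are assumptions):

```python
import torch
import torch.nn.functional as F

def max_filter_3d(pyramid, spatial_kernel=3, scale_kernel=3):
    """Max-filter a multi-scale score pyramid jointly over (scale, y, x).

    pyramid: list of tensors [B, C, H_l, W_l], one per pyramid level.
    Levels are bilinearly resized to the finest resolution, stacked along a
    new 'scale' axis, max-pooled in 3D, then resized back per level.
    """
    target = pyramid[0].shape[-2:]
    stacked = torch.stack(
        [F.interpolate(p, size=target, mode="bilinear", align_corners=False)
         for p in pyramid],
        dim=2)                                            # [B, C, L, H, W]
    pooled = F.max_pool3d(
        stacked,
        kernel_size=(scale_kernel, spatial_kernel, spatial_kernel),
        stride=1,
        padding=(scale_kernel // 2, spatial_kernel // 2, spatial_kernel // 2))
    return [F.interpolate(pooled[:, :, l], size=p.shape[-2:],
                          mode="bilinear", align_corners=False)
            for l, p in enumerate(pyramid)]
```

Suppressing duplicate local responses this way is what lets the detector drop the separate NMS post-processing step.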
arXiv Detail & Related papers (2020-12-07T09:14:55Z) - Faster object tracking pipeline for real time tracking [0.0]
Multi-object tracking (MOT) is a challenging practical problem for vision based applications.
This paper showcases a generic pipeline which can be used to speed up detection based object tracking methods.
arXiv Detail & Related papers (2020-11-08T06:33:48Z) - ASAP-NMS: Accelerating Non-Maximum Suppression Using Spatially Aware
Priors [26.835571059909007]
Non Maximum Suppression (or Greedy-NMS) is a crucial module for object-detection pipelines.
For the region proposal stage of two/multi-stage detectors, NMS is turning out to be a latency bottleneck due to its sequential nature.
We use ASAP-NMS to improve the latency of the NMS step from 13.6ms to 1.2 ms on a CPU without sacrificing the accuracy of a state-of-the-art two-stage detector.
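For reference, this is the standard Greedy-NMS loop that the latency figures refer to (a textbook NumPy version, not ASAP-NMS itself); each iteration depends on the previous iteration's survivors, which is the sequential bottleneck being addressed.

```python
import numpy as np

def greedy_nms(boxes, scores, iou_thresh=0.5):
    """Classic Greedy-NMS. boxes: [N, 4] as (x1, y1, x2, y2); scores: [N]."""
    order = np.argsort(scores)[::-1]  # indices sorted by descending score
    keep = []
    while order.size > 0:
        i = order[0]
        keep.append(i)
        # IoU of the current best box against all remaining boxes.
        x1 = np.maximum(boxes[i, 0], boxes[order[1:], 0])
        y1 = np.maximum(boxes[i, 1], boxes[order[1:], 1])
        x2 = np.minimum(boxes[i, 2], boxes[order[1:], 2])
        y2 = np.minimum(boxes[i, 3], boxes[order[1:], 3])
        inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
        area_i = (boxes[i, 2] - boxes[i, 0]) * (boxes[i, 3] - boxes[i, 1])
        area_r = ((boxes[order[1:], 2] - boxes[order[1:], 0]) *
                  (boxes[order[1:], 3] - boxes[order[1:], 1]))
        iou = inter / (area_i + area_r - inter)
        # Discard boxes that overlap the kept box too much; repeat.
        order = order[1:][iou <= iou_thresh]
    return keep
```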
arXiv Detail & Related papers (2020-07-19T21:15:48Z) - EBBINNOT: A Hardware Efficient Hybrid Event-Frame Tracker for Stationary
Dynamic Vision Sensors [5.674895233111088]
This paper presents a hybrid event-frame approach for detecting and tracking objects recorded by a stationary neuromorphic sensor.
To exploit the background removal property of a static DVS, we propose event-based binary image creation that signals the presence or absence of events within a frame duration.
This is the first time a stationary DVS based traffic monitoring solution is extensively compared to simultaneously recorded RGB frame-based methods.
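The binary-image step is simple enough to sketch; the (x, y, t) event layout and the fixed frame duration below are assumptions for illustration rather than EBBINNOT's exact format.

```python
import numpy as np

def events_to_binary_frame(events, shape, t_start, duration):
    """Mark which pixels fired at least one event during one frame interval.

    events: iterable of (x, y, t) with timestamps in seconds.
    shape:  (H, W) sensor resolution.
    Returns a boolean image: True where any event occurred in
    [t_start, t_start + duration).
    """
    frame = np.zeros(shape, dtype=bool)
    t_end = t_start + duration
    for x, y, t in events:
        if t_start <= t < t_end:
            frame[int(y), int(x)] = True
    return frame
```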
arXiv Detail & Related papers (2020-05-31T03:01:35Z)