Motion Robust High-Speed Light-Weighted Object Detection With Event
Camera
- URL: http://arxiv.org/abs/2208.11602v2
- Date: Mon, 26 Jun 2023 01:18:16 GMT
- Title: Motion Robust High-Speed Light-Weighted Object Detection With Event
Camera
- Authors: Bingde Liu, Chang Xu, Wen Yang, Huai Yu, Lei Yu
- Abstract summary: We propose a motion robust and high-speed detection pipeline which better leverages the event data.
Experiments on two typical real-scene event camera object detection datasets show that our method is competitive in terms of accuracy, efficiency, and the number of parameters.
- Score: 24.192961837270172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this work, we propose a motion robust and high-speed detection pipeline
which better leverages the event data. First, we design an event stream
representation called temporal active focus (TAF), which efficiently utilizes
the spatial-temporal asynchronous event stream, constructing event tensors
robust to object motions. Then, we propose a module called the bifurcated
folding module (BFM), which encodes the rich temporal information in the TAF
tensor at the input layer of the detector. Following this, we design a
high-speed lightweight detector called agile event detector (AED) plus a simple
but effective data augmentation method to enhance detection accuracy and
reduce the model's parameter count. Experiments on two typical real-scene event
camera object detection datasets show that our method is competitive in terms
of accuracy, efficiency, and the number of parameters. By classifying objects
into multiple motion levels based on the optical flow density metric, we
further illustrate the robustness of our method for objects with different
velocities relative to the camera. The code and trained models are available
at https://github.com/HarmoniaLeo/FRLW-EvD .
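As a rough illustration of the general idea behind event-tensor representations such as TAF (the exact TAF construction, BFM, and AED are defined in the paper; the function and parameter names below are hypothetical), the sketch bins an asynchronous event stream into a fixed-size tensor that a conventional detector can consume:

```python
import numpy as np

def build_event_tensor(events, height, width, num_slices=4):
    """Bin an asynchronous event stream into a (2 * num_slices, H, W) tensor.

    `events` is an (N, 4) array of (t, x, y, p) rows with p in {0, 1}.
    This is a generic time-binned voxelization, NOT the paper's exact TAF,
    which additionally maintains per-pixel queues of recent activity.
    """
    tensor = np.zeros((2 * num_slices, height, width), dtype=np.float32)
    if len(events) == 0:
        return tensor
    t = events[:, 0]
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    # Map timestamps to slice indices in [0, num_slices - 1].
    span = max(t.max() - t.min(), 1e-9)
    slice_idx = np.clip(((t - t.min()) / span * num_slices).astype(int),
                        0, num_slices - 1)
    # Count events per (polarity, time slice, pixel); each polarity
    # gets its own slab of num_slices channels.
    np.add.at(tensor, (p * num_slices + slice_idx, y, x), 1.0)
    return tensor
```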
Related papers
- Motion Segmentation for Neuromorphic Aerial Surveillance [42.04157319642197]
Event cameras offer superior temporal resolution, high dynamic range, and minimal power requirements.
Unlike traditional frame-based sensors that capture redundant information at fixed intervals, event cameras asynchronously record pixel-level brightness changes.
We introduce a novel motion segmentation method that leverages self-supervised vision transformers on both event data and optical flow information.
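For readers unfamiliar with the sensing model mentioned above, here is a minimal, idealized sketch of how an event-camera pixel emits events; the `Event` type and `maybe_emit` helper are illustrative, not from the paper:

```python
from dataclasses import dataclass

@dataclass
class Event:
    t: float       # timestamp (microsecond resolution on real sensors)
    x: int         # pixel column
    y: int         # pixel row
    polarity: int  # +1 for a brightness increase, -1 for a decrease

def maybe_emit(t, x, y, log_intensity, last_log_intensity, contrast=0.2):
    """Fire an event when the log-brightness at a pixel has drifted by more
    than the contrast threshold since that pixel's last event. Real sensors
    add noise, refractory periods, and per-pixel threshold mismatch."""
    delta = log_intensity - last_log_intensity
    if abs(delta) >= contrast:
        return Event(t, x, y, 1 if delta > 0 else -1)
    return None
```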
arXiv Detail & Related papers (2024-05-24T04:36:13Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from the event streams associated with objects.
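SpikeMOT's architecture is its own; purely to illustrate why spiking networks suit sparse event streams, below is a minimal leaky integrate-and-fire (LIF) update, a standard SNN building block (all names are illustrative):

```python
import numpy as np

def lif_step(v, x, leak=0.9, v_th=1.0):
    """One leaky integrate-and-fire step over an array of membrane potentials.

    v: potentials carried over from the previous timestep; x: input current,
    e.g. a binary event frame. Neurons that cross the threshold emit a spike
    and hard-reset to zero, so activity stays sparse between events.
    """
    v = leak * v + x
    spikes = (v >= v_th).astype(np.float32)
    v = v * (1.0 - spikes)
    return v, spikes
```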
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- FOLT: Fast Multiple Object Tracking from UAV-captured Videos Based on Optical Flow [27.621524657473945]
Multiple object tracking (MOT) has been successfully investigated in computer vision.
However, MOT for the videos captured by unmanned aerial vehicles (UAV) is still challenging due to small object size, blurred object appearance, and very large and/or irregular motion.
We propose FOLT to mitigate these problems and reach fast and accurate MOT in UAV view.
arXiv Detail & Related papers (2023-08-14T15:24:44Z)
- Dual Memory Aggregation Network for Event-Based Object Detection with Learnable Representation [79.02808071245634]
Event-based cameras are bio-inspired sensors that capture the brightness changes of each pixel asynchronously.
Event streams are divided into grids in the x-y-t coordinates for both positive and negative polarities, producing a set of pillars as a 3D tensor representation.
Long memory is encoded in the hidden state of adaptive convLSTMs while short memory is modeled by computing spatial-temporal correlation between event pillars.
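The paper's convLSTMs are described as adaptive; as a hedged sketch of only the underlying building block, a minimal plain ConvLSTM cell looks like this (PyTorch, illustrative names):

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """Minimal plain ConvLSTM cell. The paper's cells are 'adaptive';
    this sketch shows only the standard update they build on."""

    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        # A single convolution produces all four gates at once.
        self.gates = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state  # the hidden state carries the long memory
        i, f, o, g = torch.chunk(self.gates(torch.cat([x, h], dim=1)), 4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)
```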
arXiv Detail & Related papers (2023-03-17T12:12:41Z)
- Motion-aware Memory Network for Fast Video Salient Object Detection [15.967509480432266]
We design a space-time memory (STM)-based network, which extracts useful temporal information about the current frame from adjacent frames and serves as the temporal branch for video salient object detection (VSOD).
In the encoding stage, we generate high-level temporal features by using high-level features from the current and its adjacent frames.
In the decoding stage, we propose an effective fusion strategy for spatial and temporal branches.
The proposed model does not require optical flow or other preprocessing, and can reach a speed of nearly 100 FPS during inference.
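The paper's fusion strategy is its own contribution; the sketch below shows only the generic space-time memory read that STM-style networks build on, where each query location attends over all memory locations (shapes and names are assumptions):

```python
import torch

def memory_read(q_key, m_key, m_val):
    """Generic space-time memory read, not this paper's exact design.

    q_key: (B, Ck, H, W)    query-frame keys
    m_key: (B, Ck, T, H, W) memory keys over T past frames
    m_val: (B, Cv, T, H, W) memory values
    """
    B, Ck, H, W = q_key.shape
    Cv = m_val.shape[1]
    q = q_key.flatten(2)   # (B, Ck, HW)
    k = m_key.flatten(2)   # (B, Ck, THW)
    v = m_val.flatten(2)   # (B, Cv, THW)
    # Softmax over the T*H*W memory locations for every query location.
    attn = torch.softmax(k.transpose(1, 2) @ q / Ck ** 0.5, dim=1)  # (B, THW, HW)
    out = v @ attn         # (B, Cv, HW)
    return out.view(B, Cv, H, W)
```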
arXiv Detail & Related papers (2022-08-01T15:56:19Z)
- StreamYOLO: Real-time Object Detection for Streaming Perception [84.2559631820007]
We endow the model with the capacity to predict the future, significantly improving the results for streaming perception.
We consider driving scenes with multiple object velocities and propose Velocity-aware streaming AP (VsAP) to jointly evaluate accuracy across them.
Our simple method achieves the state-of-the-art performance on Argoverse-HD dataset and improves the sAP and VsAP by 4.7% and 8.2% respectively.
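Streaming metrics such as sAP (and, by extension, VsAP) score a detector against the world state at query time rather than against its input frame, so latency directly costs accuracy. The simplified pairing logic below conveys only that idea; it omits box matching, AP computation, and the velocity weighting VsAP adds (names are illustrative):

```python
import bisect

def latest_prediction_per_frame(pred_times, frame_times):
    """Pair each ground-truth frame time with the most recent prediction
    emitted at or before it, so a slow detector is judged on stale output.
    Both inputs are sorted lists of timestamps."""
    pairs = []
    for ft in frame_times:
        i = bisect.bisect_right(pred_times, ft) - 1
        pairs.append((ft, pred_times[i] if i >= 0 else None))
    return pairs
```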
arXiv Detail & Related papers (2022-07-21T12:03:02Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- VisEvent: Reliable Object Tracking via Collaboration of Frame and Event Flows [93.54888104118822]
We propose a large-scale Visible-Event benchmark (termed VisEvent) to address the lack of a realistic, large-scale dataset for this task.
Our dataset consists of 820 video pairs captured under low illumination, high speed, and background clutter scenarios.
Based on VisEvent, we transform the event flows into event images and construct more than 30 baseline methods.
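How exactly VisEvent renders event flows into event images is specified in the paper; one common convention, shown below purely as a hedged example, accumulates per-polarity event counts into image channels (the function name is hypothetical):

```python
import numpy as np

def events_to_image(events, height, width):
    """Accumulate one slice of events into a 3-channel event image:
    positive counts in channel 0, negative counts in channel 1, channel 2
    left empty. This is one common convention, not necessarily VisEvent's.
    `events` is an (N, 4) array of (t, x, y, p) rows with p in {0, 1}."""
    img = np.zeros((height, width, 3), dtype=np.float32)
    x = events[:, 1].astype(int)
    y = events[:, 2].astype(int)
    p = events[:, 3].astype(int)
    np.add.at(img, (y[p == 1], x[p == 1], 0), 1.0)  # positive polarity
    np.add.at(img, (y[p == 0], x[p == 0], 1), 1.0)  # negative polarity
    return img
```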
arXiv Detail & Related papers (2021-08-11T03:55:12Z)
- Tracking 6-DoF Object Motion from Events and Frames [0.0]
We propose a novel approach for 6 degree-of-freedom (6-DoF) object motion tracking that combines measurements of event- and frame-based cameras.
arXiv Detail & Related papers (2021-03-29T12:39:38Z)
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods based on conventional or event cameras, and with two variants of ETD.
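ATSLTD's adaptive window selection is the paper's contribution and is not reproduced here; the sketch below shows only the basic idea of a time surface with linear time decay, where each pixel holds a value that fades linearly with the age of its most recent event (all names are illustrative):

```python
import numpy as np

def linear_decay_time_surface(events, height, width, t_ref, window):
    """Render a time surface with linear time decay. Iterating in time
    order lets later events overwrite earlier ones at the same pixel,
    so each pixel reflects its most recent event's age."""
    surface = np.zeros((height, width), dtype=np.float32)
    for t, x, y, _p in events:  # events assumed sorted by timestamp t
        age = t_ref - t
        surface[int(y), int(x)] = max(0.0, 1.0 - age / window)
    return surface
```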
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.