Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for
Event-based Object Tracking
- URL: http://arxiv.org/abs/2002.05583v1
- Date: Thu, 13 Feb 2020 15:58:31 GMT
- Authors: Haosheng Chen, Qiangqiang Wu, Yanjie Liang, Xinbo Gao, Hanzi Wang
- Abstract summary: We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods based on either conventional cameras or event cameras, as well as two variants of ETD.
- Score: 87.0297771292994
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras, which are asynchronous bio-inspired vision sensors, have shown
great potential in a variety of situations, such as fast motion and low
illumination scenes. However, most of the event-based object tracking methods
are designed for scenarios with untextured objects and uncluttered backgrounds.
There are few event-based object tracking methods that support bounding
box-based object tracking. The main idea behind this work is to propose an
asynchronous Event-based Tracking-by-Detection (ETD) method for generic
bounding box-based object tracking. To achieve this goal, we present an
Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion
algorithm, which asynchronously and effectively warps the spatio-temporal
information of asynchronous retinal events to a sequence of ATSLTD frames with
clear object contours. We feed the sequence of ATSLTD frames to the proposed
ETD method to perform accurate and efficient object tracking, which leverages
the high temporal resolution property of event cameras. We compare the proposed
ETD method with seven popular object tracking methods based on either
conventional cameras or event cameras, as well as two variants of ETD. The
experimental results show the superiority of the proposed ETD method in
handling various challenging environments.
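The core idea of the ATSLTD representation can be illustrated with a small sketch. The following is a minimal, hypothetical rendering of a time surface with linear time decay, not the paper's exact ATSLTD algorithm (which additionally adapts the temporal window to the event stream): each pixel stores a weight that falls off linearly with the age of its most recent event, so fresh object contours appear bright while stale activity fades to zero.

```python
import numpy as np

def tsltd_frame(events, height, width, window):
    """Render one time-surface frame with linear time decay (a sketch,
    not the paper's exact ATSLTD conversion).

    events : list of (x, y, t) tuples, t in seconds, sorted by t.
    window : decay window in seconds; events older than `window`
             relative to the newest timestamp contribute nothing.
    """
    frame = np.zeros((height, width), dtype=np.float32)
    if not events:
        return frame
    t_ref = events[-1][2]  # most recent timestamp in this slice
    for x, y, t in events:
        # Linear decay: weight falls from 1 (newest) to 0 (window old).
        w = max(0.0, 1.0 - (t_ref - t) / window)
        # Keep the freshest response seen at this pixel.
        frame[y, x] = max(frame[y, x], w)
    return frame
```

With a 0.1 s window, an event at the reference time maps to weight 1.0, one 0.05 s older to 0.5, and anything at least 0.1 s old to 0.0; the adaptive part of ATSLTD would instead choose `window` per frame from the incoming event rate.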
Related papers
- DATAP-SfM: Dynamic-Aware Tracking Any Point for Robust Structure from Motion in the Wild [85.03973683867797]
This paper proposes a concise, elegant, and robust pipeline to estimate smooth camera trajectories and obtain dense point clouds for casual videos in the wild.
We show that the proposed method achieves state-of-the-art performance in terms of camera pose estimation even in complex dynamic challenge scenes.
arXiv Detail & Related papers (2024-11-20T13:01:16Z)
- Line-based 6-DoF Object Pose Estimation and Tracking With an Event Camera [19.204896246140155]
Event cameras possess remarkable attributes such as high dynamic range, low latency, and resilience against motion blur.
We propose a line-based robust pose estimation and tracking method for planar or non-planar objects using an event camera.
arXiv Detail & Related papers (2024-08-06T14:36:43Z)
- Solution for Point Tracking Task of ICCV 1st Perception Test Challenge 2023 [50.910598799408326]
The Tracking Any Point (TAP) task tracks any physical surface through a video.
Several existing approaches have explored the TAP by considering the temporal relationships to obtain smooth point motion trajectories.
We propose a simple yet effective approach called TAP with confident static points (TAPIR+), which focuses on rectifying the tracking of the static point in the videos shot by a static camera.
arXiv Detail & Related papers (2024-03-26T13:50:39Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatiotemporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- Implicit Motion Handling for Video Camouflaged Object Detection [60.98467179649398]
We propose a new video camouflaged object detection (VCOD) framework.
It can exploit both short-term and long-term temporal consistency to detect camouflaged objects from video frames.
arXiv Detail & Related papers (2022-03-14T17:55:41Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- e-TLD: Event-based Framework for Dynamic Object Tracking [23.026432675020683]
This paper presents a long-term object tracking framework with a moving event camera under general tracking conditions.
The framework uses a discriminative representation for the object with online learning, and detects and re-tracks the object when it comes back into the field-of-view.
arXiv Detail & Related papers (2020-09-02T07:08:56Z)
- A Hybrid Neuromorphic Object Tracking and Classification Framework for Real-time Systems [5.959466944163293]
This paper proposes a real-time, hybrid neuromorphic framework for object tracking and classification using event-based cameras.
Unlike traditional event-by-event processing approaches, this work uses a mixed frame-and-event approach to achieve energy savings with high performance.
arXiv Detail & Related papers (2020-07-21T07:11:27Z)
- End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking [35.95703377642108]
We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression.
arXiv Detail & Related papers (2020-02-14T08:19:50Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.