End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking
- URL: http://arxiv.org/abs/2002.05911v1
- Date: Fri, 14 Feb 2020 08:19:50 GMT
- Title: End-to-end Learning of Object Motion Estimation from Retinal Events for Event-based Object Tracking
- Authors: Haosheng Chen, David Suter, Qiangqiang Wu, Hanzi Wang
- Abstract summary: We propose a novel deep neural network to learn and regress a parametric object-level motion/transform model for event-based object tracking.
To achieve this goal, we propose a synchronous Time-Surface with Linear Time Decay representation.
We feed the sequence of TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform an end-to-end 5-DoF object motion regression.
- Score: 35.95703377642108
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Event cameras, which are asynchronous bio-inspired vision sensors, have shown
great potential in computer vision and artificial intelligence. However, the
application of event cameras to object-level motion estimation or tracking is
still in its infancy. The main idea behind this work is to propose a novel deep
neural network to learn and regress a parametric object-level motion/transform
model for event-based object tracking. To achieve this goal, we propose a
synchronous Time-Surface with Linear Time Decay (TSLTD) representation, which
effectively encodes the spatio-temporal information of asynchronous retinal
events into TSLTD frames with clear motion patterns. We feed the sequence of
TSLTD frames to a novel Retinal Motion Regression Network (RMRNet) to perform
an end-to-end 5-DoF object motion regression. Our method is compared with
state-of-the-art object tracking methods, which are based on conventional
cameras or event cameras. The experimental results show the superiority of our
method in handling various challenging environments such as fast motion and low
illumination conditions.
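For concreteness, the sketch below illustrates the two ingredients the abstract names: converting an event window into a TSLTD frame via linear time decay, and applying a regressed 5-DoF motion to a bounding box. This is a minimal NumPy sketch, not the paper's implementation; the function names, the two-channel polarity layout, and the (tx, ty, theta, sx, sy) parameterization (2-D translation, in-plane rotation, anisotropic scaling) are assumptions made here for illustration.

```python
import numpy as np

def tsltd_frame(events, t_start, t_end, height, width):
    """Build one TSLTD frame from events in [t_start, t_end).

    Each event is (x, y, t, polarity). A pixel's value decays linearly
    from 1 (event at t_end) to 0 (event at t_start), so recent events
    dominate and the frame shows clear motion patterns.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)  # one channel per polarity (assumed layout)
    span = float(t_end - t_start)
    for x, y, t, p in events:           # events assumed sorted by timestamp
        value = (t - t_start) / span    # linear time decay in [0, 1]
        channel = 0 if p > 0 else 1     # positive vs. negative polarity
        frame[channel, y, x] = value    # later events overwrite older ones
    return frame

def warp_corners(corners, tx, ty, theta, sx, sy):
    """Apply an assumed 5-DoF motion (translation, rotation, anisotropic
    scale) to an (N, 2) array of bounding-box corners."""
    c, s = np.cos(theta), np.sin(theta)
    rot = np.array([[c, -s], [s, c]], dtype=np.float32)
    scale = np.diag([sx, sy]).astype(np.float32)
    return corners @ (rot @ scale).T + np.array([tx, ty], dtype=np.float32)
```

In a tracker built along these lines, consecutive TSLTD frames would be stacked and fed to the regression network, and the five predicted parameters would warp the previous bounding box into the current one.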
Related papers
- Event-based Structure-from-Orbit [23.97673114572094]
Certain applications in robotics and vision-based navigation require 3D perception of an object undergoing circular or spinning motion in front of a static camera.
We propose event-based structure-from-orbit (eSfO), where the aim is to reconstruct the 3D structure of a fast-spinning object observed from a static event camera.
arXiv Detail & Related papers (2024-05-10T03:02:03Z)
- Distractor-aware Event-based Tracking [45.07711356111249]
We propose a distractor-aware event-based tracker that introduces transformer modules into a Siamese network architecture (named DANet).
Our model is mainly composed of a motion-aware network and a target-aware network, which simultaneously exploits both motion cues and object contours from event data.
Our DANet can be trained in an end-to-end manner without any post-processing and can run at over 80 FPS on a single V100 GPU.
arXiv Detail & Related papers (2023-10-22T05:50:20Z)
- SpikeMOT: Event-based Multi-Object Tracking with Sparse Motion Features [52.213656737672935]
SpikeMOT is an event-based multi-object tracker.
SpikeMOT uses spiking neural networks to extract sparse spatio-temporal features from event streams associated with objects.
arXiv Detail & Related papers (2023-09-29T05:13:43Z)
- MotionTrack: Learning Motion Predictor for Multiple Object Tracking [68.68339102749358]
We introduce a novel motion-based tracker, MotionTrack, centered around a learnable motion predictor.
Our experimental results demonstrate that MotionTrack yields state-of-the-art performance on datasets such as DanceTrack and SportsMOT.
arXiv Detail & Related papers (2023-06-05T04:24:11Z)
- ParticleSfM: Exploiting Dense Point Trajectories for Localizing Moving Cameras in the Wild [57.37891682117178]
We present a robust dense indirect structure-from-motion method for videos that is based on dense correspondence from pairwise optical flow.
A novel neural network architecture is proposed for processing irregular point trajectory data.
Experiments on the MPI Sintel dataset show that our system produces significantly more accurate camera trajectories.
arXiv Detail & Related papers (2022-07-19T09:19:45Z)
- Asynchronous Optimisation for Event-based Visual Odometry [53.59879499700895]
Event cameras open up new possibilities for robotic perception due to their low latency and high dynamic range.
We focus on event-based visual odometry (VO).
We propose an asynchronous structure-from-motion optimisation back-end.
arXiv Detail & Related papers (2022-03-02T11:28:47Z)
- Tracking 6-DoF Object Motion from Events and Frames [0.0]
We propose a novel approach for 6 degree-of-freedom (6-DoF) object motion tracking that combines measurements of event- and frame-based cameras.
arXiv Detail & Related papers (2021-03-29T12:39:38Z)
- Event-based Motion Segmentation with Spatio-Temporal Graph Cuts [51.17064599766138]
We have developed a method to identify independently moving objects acquired with an event-based camera.
The method performs on par or better than the state of the art without having to predetermine the number of expected moving objects.
arXiv Detail & Related papers (2020-12-16T04:06:02Z)
- Asynchronous Tracking-by-Detection on Adaptive Time Surfaces for Event-based Object Tracking [87.0297771292994]
We propose an Event-based Tracking-by-Detection (ETD) method for generic bounding box-based object tracking.
To achieve this goal, we present an Adaptive Time-Surface with Linear Time Decay (ATSLTD) event-to-frame conversion algorithm.
We compare the proposed ETD method with seven popular object tracking methods, which are based on conventional cameras or event cameras, and two variants of ETD.
arXiv Detail & Related papers (2020-02-13T15:58:31Z)
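The ATSLTD representation in this last entry differs from the synchronous TSLTD sketched above mainly in how the accumulation window is chosen: the window adapts to the event stream instead of using a fixed duration. The sketch below is a simplified stand-in, assuming a fixed per-frame event budget (the hypothetical `target_count`) rather than the paper's actual adaptive criterion:

```python
def adaptive_event_windows(timestamps, target_count=5000):
    """Split sorted event timestamps into variable-length windows.

    Simplified stand-in for ATSLTD's adaptive accumulation: each window
    collects a fixed budget of events, so fast motion (dense events)
    gives short windows and slow motion gives long ones.
    """
    windows = []
    for start in range(0, len(timestamps), target_count):
        chunk = timestamps[start:start + target_count]
        if len(chunk) > 1:
            windows.append((chunk[0], chunk[-1]))  # (t_start, t_end) for one frame
    return windows
```

Each returned (t_start, t_end) pair could then be passed to a frame builder such as the `tsltd_frame` sketch above.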