DFR-FastMOT: Detection Failure Resistant Tracker for Fast Multi-Object
Tracking Based on Sensor Fusion
- URL: http://arxiv.org/abs/2302.14807v1
- Date: Tue, 28 Feb 2023 17:57:06 GMT
- Title: DFR-FastMOT: Detection Failure Resistant Tracker for Fast Multi-Object
Tracking Based on Sensor Fusion
- Authors: Mohamed Nagy, Majid Khonji, Jorge Dias and Sajid Javed
- Abstract summary: Persistent multi-object tracking (MOT) allows autonomous vehicles to navigate safely in highly dynamic environments.
One of the well-known challenges in MOT is object occlusion, when an object becomes unobservable for subsequent frames.
We propose DFR-FastMOT, a light MOT method that uses data from a camera and LiDAR sensors.
Our framework processes about 7,763 frames in 1.48 seconds, which is seven times faster than recent benchmarks.
- Score: 7.845528514468835
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Persistent multi-object tracking (MOT) allows autonomous vehicles to navigate
safely in highly dynamic environments. One of the well-known challenges in MOT
is object occlusion, when an object becomes unobservable for subsequent frames.
Current MOT methods store object information, such as object trajectories, in
internal memory to recover objects after occlusion. However, they retain
short-term memory to save computational time and avoid slowing down the MOT
method. As a result, they lose track of objects in some occlusion scenarios,
particularly long ones. In this paper, we propose DFR-FastMOT, a light MOT
method that uses data from a camera and LiDAR sensors and relies on an
algebraic formulation for object association and fusion. The formulation reduces
computational time and permits long-term memory that handles more occlusion
scenarios. Our method shows outstanding tracking performance over recent
learning and non-learning benchmarks with about 3% and 4% margin in MOTA,
respectively. Also, we conduct extensive experiments that simulate occlusion
phenomena by employing detectors with various distortion levels. The proposed
solution enables superior performance under various distortion levels in
detection over current state-of-the-art methods. Our framework processes about
7,763 frames in 1.48 seconds, which is seven times faster than recent
benchmarks. The framework will be available at
https://github.com/MohamedNagyMostafa/DFR-FastMOT.
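The abstract's core idea, long-term memory combined with a matrix-style (algebraic) association step, can be illustrated with a minimal sketch. The greedy IoU matcher and all names below are illustrative assumptions, not the paper's actual implementation.

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def associate(tracks, detections, iou_thresh=0.3):
    """Greedily match stored tracks to detections by descending IoU.

    Because *all* stored tracks enter the matching (long-term memory),
    a track occluded for many frames can still be re-associated when
    its object reappears, at the cost of a larger score matrix.
    """
    pairs = sorted(
        ((iou(t, d), ti, di)
         for ti, t in enumerate(tracks)
         for di, d in enumerate(detections)),
        reverse=True)
    matches, used_t, used_d = [], set(), set()
    for score, ti, di in pairs:
        if score < iou_thresh:
            break
        if ti not in used_t and di not in used_d:
            matches.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return matches
```

For example, a stored track at (0, 0, 10, 10) is re-associated with a detection at (1, 1, 11, 11), while a distant detection is left unmatched.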
Related papers
- TASeg: Temporal Aggregation Network for LiDAR Semantic Segmentation [80.13343299606146]
We propose a Temporal LiDAR Aggregation and Distillation (TLAD) algorithm, which leverages historical priors to assign different aggregation steps for different classes.
To make full use of temporal images, we design a Temporal Image Aggregation and Fusion (TIAF) module, which can greatly expand the camera FOV.
We also develop a Static-Moving Switch Augmentation (SMSA) algorithm, which utilizes sufficient temporal information to enable objects to switch their motion states freely.
arXiv Detail & Related papers (2024-07-13T03:00:16Z)
- Ego-Motion Aware Target Prediction Module for Robust Multi-Object Tracking [2.7898966850590625]
We introduce a novel KF-based prediction module called Ego-motion Aware Target Prediction (EMAP)
Our proposed method decouples the impact of camera rotational and translational velocity from the object trajectories by reformulating the Kalman Filter.
EMAP remarkably drops the number of identity switches (IDSW) of OC-SORT and Deep OC-SORT by 73% and 21%, respectively.
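The decoupling idea can be illustrated with a generic ego-motion compensation step. This is a simplification: EMAP's actual Kalman-filter reformulation is more involved, and the function below is an illustrative assumption, not the paper's method. Transforming each measurement into a fixed world frame using the known ego pose removes camera rotation and translation from what the filter must model.

```python
import math

def compensate_ego_motion(x, y, ego_x, ego_y, ego_yaw):
    """Map a measurement from the moving camera frame to a fixed world frame.

    A plain 2D rigid-body transform: rotate by the ego yaw, then translate
    by the ego position. After this step, a constant-velocity Kalman filter
    only has to model the object's own motion, not the camera's.
    """
    c, s = math.cos(ego_yaw), math.sin(ego_yaw)
    return ego_x + c * x - s * y, ego_y + s * x + c * y
```

For instance, a point one unit ahead of a camera yawed 90 degrees maps to (0, 1) in the world frame.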
arXiv Detail & Related papers (2024-04-03T23:24:25Z)
- PTT: Point-Trajectory Transformer for Efficient Temporal 3D Object Detection [66.94819989912823]
We propose a point-trajectory transformer with long short-term memory for efficient temporal 3D object detection.
We use point clouds of current-frame objects and their historical trajectories as input to minimize the memory bank storage requirement.
We conduct extensive experiments on the large-scale dataset to demonstrate that our approach performs well against state-of-the-art methods.
arXiv Detail & Related papers (2023-12-13T18:59:13Z)
- MeMOTR: Long-Term Memory-Augmented Transformer for Multi-Object Tracking [19.173503245000678]
We propose MeMOTR, a long-term memory-augmented Transformer for multi-object tracking.
MeMOTR impressively surpasses the state-of-the-art method by 7.9% and 13.0% on HOTA and AssA metrics.
Our model also outperforms other Transformer-based methods on association performance on MOT17 and generalizes well on BDD100K.
arXiv Detail & Related papers (2023-07-28T17:50:09Z)
- TrajectoryFormer: 3D Object Tracking Transformer with Predictive Trajectory Hypotheses [51.60422927416087]
3D multi-object tracking (MOT) is vital for many applications including autonomous driving vehicles and service robots.
We present TrajectoryFormer, a novel point-cloud-based 3D MOT framework.
arXiv Detail & Related papers (2023-06-09T13:31:50Z)
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
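The hierarchical strategy can be sketched as a two-pass association. This is a simplification of ByteTrackV2's actual pipeline; the score thresholds and the nearest-center matcher below are illustrative assumptions.

```python
import math

def _match_by_center(tracks, dets, max_dist=50.0):
    """Greedy nearest-center matcher, standing in for any single-pass associator."""
    center = lambda b: ((b[0] + b[2]) / 2.0, (b[1] + b[3]) / 2.0)
    pairs = sorted(
        (math.dist(center(t), center(d)), ti, di)
        for ti, t in enumerate(tracks) for di, d in enumerate(dets))
    out, used_t, used_d = [], set(), set()
    for dist, ti, di in pairs:
        if dist > max_dist:
            break
        if ti not in used_t and di not in used_d:
            out.append((ti, di))
            used_t.add(ti)
            used_d.add(di)
    return out

def hierarchical_associate(tracks, detections, scores,
                           high_thresh=0.6, low_thresh=0.1):
    """Two-pass association: confident boxes first, then mine low-score ones."""
    high = [i for i, s in enumerate(scores) if s >= high_thresh]
    low = [i for i, s in enumerate(scores) if low_thresh <= s < high_thresh]

    # Pass 1: match every track against high-score detections.
    matches = [(t, high[d])
               for t, d in _match_by_center(tracks, [detections[i] for i in high])]

    # Pass 2: unmatched tracks get a chance against low-score detections,
    # recovering objects whose confidence dropped (e.g. under occlusion).
    matched = {t for t, _ in matches}
    rest = [t for t in range(len(tracks)) if t not in matched]
    second = _match_by_center([tracks[t] for t in rest],
                              [detections[i] for i in low])
    matches += [(rest[t], low[d]) for t, d in second]
    return matches
```

The second pass is what "mines the true objects in low-score detection boxes": a real object whose detector confidence dipped still has a nearby unmatched track waiting for it.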
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- CAMO-MOT: Combined Appearance-Motion Optimization for 3D Multi-Object Tracking with Camera-LiDAR Fusion [34.42289908350286]
3D Multi-object tracking (MOT) ensures consistency during continuous dynamic detection.
It can be challenging to accurately track the irregular motion of objects for LiDAR-based methods.
We propose a novel camera-LiDAR fusion 3D MOT framework based on Combined Appearance-Motion Optimization (CAMO-MOT).
arXiv Detail & Related papers (2022-09-06T14:41:38Z)
- DeepFusionMOT: A 3D Multi-Object Tracking Framework Based on Camera-LiDAR Fusion with Deep Association [8.34219107351442]
This paper proposes a robust camera-LiDAR fusion-based MOT method that achieves a good trade-off between accuracy and speed.
Our proposed method presents obvious advantages over the state-of-the-art MOT methods in terms of both tracking accuracy and processing speed.
arXiv Detail & Related papers (2022-02-24T13:36:29Z)
- Distractor-Aware Fast Tracking via Dynamic Convolutions and MOT Philosophy [63.91005999481061]
A practical long-term tracker typically contains three key properties, i.e. an efficient model design, an effective global re-detection strategy and a robust distractor awareness mechanism.
We propose a two-task tracking framework (named DMTrack) to achieve distractor-aware fast tracking via dynamic convolutions (d-convs) and multiple object tracking (MOT) philosophy.
Our tracker achieves state-of-the-art performance on the LaSOT, OxUvA, TLP, VOT2018LT and VOT2019LT benchmarks and runs in real-time (3x faster).
arXiv Detail & Related papers (2021-04-25T00:59:53Z)
- Simultaneous Detection and Tracking with Motion Modelling for Multiple Object Tracking [94.24393546459424]
We introduce Deep Motion Modeling Network (DMM-Net) that can estimate multiple objects' motion parameters to perform joint detection and association.
DMM-Net achieves a PR-MOTA score of 12.80 @ 120+ fps on the popular UA-DETRAC challenge, delivering better performance at orders of magnitude higher speed.
We also contribute a synthetic large-scale public dataset Omni-MOT for vehicle tracking that provides precise ground-truth annotations.
arXiv Detail & Related papers (2020-08-20T08:05:33Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.