Score refinement for confidence-based 3D multi-object tracking
- URL: http://arxiv.org/abs/2107.04327v1
- Date: Fri, 9 Jul 2021 09:40:07 GMT
- Title: Score refinement for confidence-based 3D multi-object tracking
- Authors: Nuri Benbarka, Jona Schröder, Andreas Zell
- Abstract summary: We show that adjusting tracklet scores based on time consistency, while terminating tracklets based on the tracklet score, improves tracking results.
Compared to count-based methods, our method consistently produces better AMOTA and MOTA scores.
It achieved an AMOTA score of 67.6 on nuScenes test evaluation, which is comparable to other state-of-the-art trackers.
- Score: 14.853897011640022
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Multi-object tracking is a critical component in autonomous navigation, as it
provides valuable information for decision-making. Many researchers tackled the
3D multi-object tracking task by filtering out the frame-by-frame 3D
detections; however, their focus was mainly on finding useful features or
proper matching metrics. Our work focuses on a neglected part of the tracking
system: score refinement and tracklet termination. We show that adjusting the
scores based on time consistency, while terminating tracklets based on the
tracklet score, improves tracking results. We do this by
increasing the matched tracklets' score with score update functions and
decreasing the unmatched tracklets' score. Compared to count-based methods, our
method consistently produces better AMOTA and MOTA scores when utilizing
various detectors and filtering algorithms on different datasets. The
improvements reached up to 1.83 in AMOTA and 2.96 in MOTA. We also used our
method as a late-fusion ensembling method, and it performed better than
voting-based ensemble methods by a solid margin. It achieved an AMOTA score of
67.6 on nuScenes test evaluation, which is comparable to other state-of-the-art
trackers. Code is publicly available at:
https://github.com/cogsys-tuebingen/CBMOT.
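The score-refinement idea above can be sketched in a few lines. This is an illustrative reconstruction, not the paper's exact update functions: it assumes a multiplicative update for matched tracklets (combining track and detection confidences), a fixed decay for unmatched ones, and score-based termination in place of the usual count-based ("max missed frames") rule. The decay and termination threshold values are assumptions.

```python
def update_matched(track_score: float, det_score: float) -> float:
    """Increase a matched tracklet's score.

    Illustrative multiplicative update: combine the track and detection
    confidences by multiplying their 'failure' probabilities.
    """
    return 1.0 - (1.0 - track_score) * (1.0 - det_score)


def update_unmatched(track_score: float, decay: float = 0.1) -> float:
    """Decrease an unmatched tracklet's score by a fixed decay (assumed value)."""
    return max(0.0, track_score - decay)


def refine_tracklets(tracklets, matches, det_scores, terminate_below=0.05):
    """One refinement step over all tracklets.

    Boosts matched scores, decays unmatched ones, and terminates tracklets
    whose score falls below a threshold (instead of counting missed frames).

    tracklets:  dict track_id -> current score
    matches:    dict track_id -> index into det_scores (matched this frame)
    det_scores: list of detection confidences
    """
    survivors = {}
    for tid, score in tracklets.items():
        if tid in matches:
            score = update_matched(score, det_scores[matches[tid]])
        else:
            score = update_unmatched(score)
        if score >= terminate_below:
            survivors[tid] = score
    return survivors
```

A tracklet matched to a 0.5-confidence detection rises from 0.6 to 0.8, while an unmatched tracklet at 0.08 decays to 0.0 and is terminated; this is the asymmetry the abstract describes.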
Related papers
- ByteTrackV2: 2D and 3D Multi-Object Tracking by Associating Every Detection Box [81.45219802386444]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects across video frames.
We propose a hierarchical data association strategy to mine the true objects in low-score detection boxes.
In 3D scenarios, it is much easier for the tracker to predict object velocities in world coordinates.
arXiv Detail & Related papers (2023-03-27T15:35:21Z)
- Detection-aware multi-object tracking evaluation [1.7880586070278561]
We propose a novel performance measure, named Tracking Effort Measure (TEM), to evaluate trackers that use different detectors.
TEM can quantify the effort done by the tracker with a reduced correlation on the input detections.
arXiv Detail & Related papers (2022-12-16T15:35:34Z)
- CAMO-MOT: Combined Appearance-Motion Optimization for 3D Multi-Object Tracking with Camera-LiDAR Fusion [34.42289908350286]
3D Multi-object tracking (MOT) ensures consistency during continuous dynamic detection.
It can be challenging to accurately track the irregular motion of objects for LiDAR-based methods.
We propose a novel camera-LiDAR fusion 3D MOT framework based on Combined Appearance-Motion Optimization (CAMO-MOT).
arXiv Detail & Related papers (2022-09-06T14:41:38Z)
- Tracking Every Thing in the Wild [61.917043381836656]
We introduce a new metric, Track Every Thing Accuracy (TETA), breaking tracking measurement into three sub-factors: localization, association, and classification.
Our experiments show that TETA evaluates trackers more comprehensively, and TETer achieves significant improvements on the challenging large-scale datasets BDD100K and TAO.
arXiv Detail & Related papers (2022-07-26T15:37:19Z)
- SimpleTrack: Understanding and Rethinking 3D Multi-object Tracking [17.351635242415703]
3D multi-object tracking (MOT) has witnessed numerous novel benchmarks and approaches in recent years.
Despite their progress and usefulness, an in-depth analysis of their strengths and weaknesses is not yet available.
We summarize current 3D MOT methods into a unified framework by decomposing them into four constituent parts.
arXiv Detail & Related papers (2021-11-18T10:57:57Z)
- ByteTrack: Multi-Object Tracking by Associating Every Detection Box [51.93588012109943]
Multi-object tracking (MOT) aims at estimating bounding boxes and identities of objects in videos.
Most methods obtain identities by associating detection boxes whose scores are higher than a threshold.
We present a simple, effective and generic association method, called BYTE, tracking BY associaTing every detection box instead of only the high score ones.
arXiv Detail & Related papers (2021-10-13T17:01:26Z)
- Tracking-by-Counting: Using Network Flows on Crowd Density Maps for Tracking Multiple Targets [96.98888948518815]
State-of-the-art multi-object tracking(MOT) methods follow the tracking-by-detection paradigm.
We propose a new MOT paradigm, tracking-by-counting, tailored for crowded scenes.
arXiv Detail & Related papers (2020-07-18T19:51:53Z)
- Quasi-Dense Similarity Learning for Multiple Object Tracking [82.93471035675299]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning.
We can directly combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack).
arXiv Detail & Related papers (2020-06-11T17:57:12Z)
- Tracking Objects as Points [83.9217787335878]
We present a simultaneous detection and tracking algorithm that is simpler, faster, and more accurate than the state of the art.
Our tracker, CenterTrack, applies a detection model to a pair of images and detections from the prior frame.
CenterTrack is simple, online (no peeking into the future), and real-time.
arXiv Detail & Related papers (2020-04-02T17:58:40Z)
- Probabilistic 3D Multi-Object Tracking for Autonomous Driving [23.036619327925088]
We present our online tracking method, which took first place in the NuScenes Tracking Challenge.
Our method estimates the object states by adopting a Kalman Filter.
Our experimental results on the NuScenes validation and test set show that our method outperforms the AB3DMOT baseline method.
arXiv Detail & Related papers (2020-01-16T06:38:02Z)
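The Kalman-filter state estimation mentioned in the last entry can be illustrated with a minimal 1D constant-velocity filter. All matrices and noise values here are illustrative placeholders, not the paper's tuned settings, and real 3D trackers use a higher-dimensional state (position, size, yaw, velocities):

```python
import numpy as np

# Minimal 1D constant-velocity Kalman filter: state = [position, velocity].
F = np.array([[1.0, 1.0],    # position += velocity * dt (dt = 1)
              [0.0, 1.0]])
H = np.array([[1.0, 0.0]])   # we only observe position
Q = np.eye(2) * 0.01         # process noise covariance (assumed)
R = np.array([[0.1]])        # measurement noise covariance (assumed)

def predict(x, P):
    """Propagate state and covariance one frame forward."""
    return F @ x, F @ P @ F.T + Q

def update(x, P, z):
    """Correct the prediction with a matched detection z."""
    y = z - H @ x                    # innovation (measurement residual)
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ y
    P = (np.eye(2) - K @ H) @ P
    return x, P

# Track an object moving at constant velocity 1.0, observed at 1, 2, 3, 4.
x = np.array([[0.0], [0.0]])
P = np.eye(2)
for z in [1.0, 2.0, 3.0, 4.0]:
    x, P = predict(x, P)
    x, P = update(x, P, np.array([[z]]))
```

After four frames the position estimate converges toward the last measurement and the velocity estimate toward 1.0, which is the behavior such filter-based trackers rely on for motion prediction between frames.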
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.