Probabilistic 3D Multi-Object Tracking for Autonomous Driving
- URL: http://arxiv.org/abs/2001.05673v1
- Date: Thu, 16 Jan 2020 06:38:02 GMT
- Title: Probabilistic 3D Multi-Object Tracking for Autonomous Driving
- Authors: Hsu-kuang Chiu, Antonio Prioletti, Jie Li, Jeannette Bohg
- Abstract summary: We present our online tracking method, which won first place in the NuScenes Tracking Challenge.
Our method estimates the object states by adopting a Kalman Filter.
Our experimental results on the NuScenes validation and test set show that our method outperforms the AB3DMOT baseline method.
- Score: 23.036619327925088
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: 3D multi-object tracking is a key module in autonomous driving applications
that provides a reliable dynamic representation of the world to the planning
module. In this paper, we present our online tracking method, which won first
place in the NuScenes Tracking Challenge, held at the AI Driving Olympics
Workshop at NeurIPS 2019. Our method estimates the object states by adopting a
Kalman Filter. We initialize the state covariance as well as the process and
observation noise covariance with statistics from the training set. We also use
the stochastic information from the Kalman Filter in the data association step
by measuring the Mahalanobis distance between the predicted object states and
current object detections. Our experimental results on the NuScenes validation
and test set show that our method outperforms the AB3DMOT baseline method by a
large margin in the Average Multi-Object Tracking Accuracy (AMOTA) metric.
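The association step described above can be sketched in a few lines: gate current detections against Kalman-filter-predicted object states using the Mahalanobis distance. This is a minimal illustrative sketch, not the authors' implementation; the 2D toy state, function names, and greedy matching strategy are assumptions for clarity (the full method works in a higher-dimensional state space).

```python
import numpy as np

def mahalanobis(z, z_pred, S):
    """Mahalanobis distance between a detection z and a predicted
    measurement z_pred, given innovation covariance S."""
    d = z - z_pred
    return float(np.sqrt(d @ np.linalg.inv(S) @ d))

def associate(tracks, detections, threshold=3.0):
    """Greedy nearest-neighbor association by Mahalanobis distance.
    tracks: list of (z_pred, S) pairs; detections: list of measurements.
    Returns a list of (track_index, detection_index) matches."""
    pairs, used = [], set()
    for ti, (z_pred, S) in enumerate(tracks):
        best, best_d = None, threshold  # reject matches beyond the gate
        for di, z in enumerate(detections):
            if di in used:
                continue
            d = mahalanobis(z, z_pred, S)
            if d < best_d:
                best, best_d = di, d
        if best is not None:
            used.add(best)
            pairs.append((ti, best))
    return pairs

# Toy example: two predicted tracks and two detections in 2D position space.
tracks = [
    (np.array([0.0, 0.0]), np.eye(2)),  # track 0 predicted near the origin
    (np.array([5.0, 5.0]), np.eye(2)),  # track 1 predicted near (5, 5)
]
detections = [np.array([4.8, 5.1]), np.array([0.2, -0.1])]
print(associate(tracks, detections))  # → [(0, 1), (1, 0)]
```

Because the innovation covariance S comes from the Kalman Filter, the distance automatically accounts for state uncertainty: detections far from a confident prediction are rejected, while the same offset from an uncertain prediction may still be gated in.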
Related papers
- MCTrack: A Unified 3D Multi-Object Tracking Framework for Autonomous Driving [10.399817864597347]
This paper introduces MCTrack, a new 3D multi-object tracking method that achieves state-of-the-art (SOTA) performance across multiple benchmarks, including KITTI and nuScenes.
arXiv Detail & Related papers (2024-09-23T11:26:01Z) - Unsupervised Domain Adaptation for Self-Driving from Past Traversal
Features [69.47588461101925]
We propose a method to adapt 3D object detectors to new driving environments.
Our approach enhances LiDAR-based detection models using spatial quantized historical features.
Experiments on real-world datasets demonstrate significant improvements.
arXiv Detail & Related papers (2023-09-21T15:00:31Z) - Once Detected, Never Lost: Surpassing Human Performance in Offline LiDAR
based 3D Object Detection [50.959453059206446]
This paper aims for high-performance offline LiDAR-based 3D object detection.
We first observe that experienced human annotators annotate objects from a track-centric perspective.
We propose a high-performance offline detector in a track-centric perspective instead of the conventional object-centric perspective.
arXiv Detail & Related papers (2023-04-24T17:59:05Z) - 3DMODT: Attention-Guided Affinities for Joint Detection & Tracking in 3D
Point Clouds [95.54285993019843]
We propose a method for joint detection and tracking of multiple objects in 3D point clouds.
Our model exploits temporal information employing multiple frames to detect objects and track them in a single network.
arXiv Detail & Related papers (2022-11-01T20:59:38Z) - Transforming Model Prediction for Tracking [109.08417327309937]
Transformers capture global relations with little inductive bias, allowing them to learn the prediction of more powerful target models.
We train the proposed tracker end-to-end and validate its performance by conducting comprehensive experiments on multiple tracking datasets.
Our tracker sets a new state of the art on three benchmarks, achieving an AUC of 68.5% on the challenging LaSOT dataset.
arXiv Detail & Related papers (2022-03-21T17:59:40Z) - FAST3D: Flow-Aware Self-Training for 3D Object Detectors [12.511087244102036]
State-of-the-art self-training approaches mostly ignore the temporal nature of autonomous driving data.
We propose a flow-aware self-training method that enables unsupervised domain adaptation for 3D object detectors on continuous LiDAR point clouds.
Our results show a significant improvement over the state-of-the-art, without any prior target domain knowledge.
arXiv Detail & Related papers (2021-10-18T14:32:05Z) - Exploring Simple 3D Multi-Object Tracking for Autonomous Driving [10.921208239968827]
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
Existing methods are predominantly based on the tracking-by-detection pipeline and inevitably require a matching step for detection association.
We present SimTrack to simplify the hand-crafted tracking paradigm by proposing an end-to-end trainable model for joint detection and tracking from raw point clouds.
arXiv Detail & Related papers (2021-08-23T17:59:22Z) - Learnable Online Graph Representations for 3D Multi-Object Tracking [156.58876381318402]
We propose a unified, learning-based approach to the 3D MOT problem.
We employ a Neural Message Passing network for data association that is fully trainable.
We show the merit of the proposed approach on the publicly available nuScenes dataset by achieving state-of-the-art performance of 65.6% AMOTA and 58% fewer ID-switches.
arXiv Detail & Related papers (2021-04-23T17:59:28Z) - Monocular Quasi-Dense 3D Object Tracking [99.51683944057191]
A reliable and accurate 3D tracking framework is essential for predicting future locations of surrounding objects and planning the observer's actions in numerous applications such as autonomous driving.
We propose a framework that can effectively associate moving objects over time and estimate their full 3D bounding box information from a sequence of 2D images captured on a moving platform.
arXiv Detail & Related papers (2021-03-12T15:30:02Z) - Probabilistic 3D Multi-Modal, Multi-Object Tracking for Autonomous
Driving [22.693895321632507]
We propose a probabilistic, multi-modal, multi-object tracking system consisting of different trainable modules.
We show that our method outperforms current state-of-the-art on the NuScenes Tracking dataset.
arXiv Detail & Related papers (2020-12-26T15:00:54Z) - Tracking from Patterns: Learning Corresponding Patterns in Point Clouds
for 3D Object Tracking [34.40019455462043]
We propose to learn 3D object correspondences from temporal point cloud data and infer the motion information from correspondence patterns.
Our method exceeds the existing 3D tracking methods on both the KITTI and the larger-scale nuScenes dataset.
arXiv Detail & Related papers (2020-10-20T06:07:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.