Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous
Driving via Differentiable Multi-Sensor Kalman Filter
- URL: http://arxiv.org/abs/2309.14655v2
- Date: Mon, 26 Feb 2024 18:04:44 GMT
- Title: Probabilistic 3D Multi-Object Cooperative Tracking for Autonomous
Driving via Differentiable Multi-Sensor Kalman Filter
- Authors: Hsu-kuang Chiu, Chien-Yi Wang, Min-Hung Chen, Stephen F. Smith
- Abstract summary: We propose a novel 3D multi-object cooperative tracking algorithm for autonomous driving via a differentiable multi-sensor Kalman Filter.
Our algorithm improves the tracking accuracy by 17% with only 0.037x communication costs compared with the state-of-the-art method in V2V4Real.
- Score: 11.081218144245506
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Current state-of-the-art autonomous driving vehicles mainly rely on each
individual sensor system to perform perception tasks. Such a framework's
reliability could be limited by occlusion or sensor failure. To address this
issue, more recent research proposes using vehicle-to-vehicle (V2V)
communication to share perception information with others. However, most
relevant works focus only on cooperative detection and leave cooperative
tracking an underexplored research field. A few recent datasets, such as
V2V4Real, provide 3D multi-object cooperative tracking benchmarks. However,
their proposed methods mainly use cooperative detection results as input to a
standard single-sensor Kalman Filter-based tracking algorithm. In their
approach, the measurement uncertainties of the different sensors on different
connected autonomous vehicles (CAVs) may not be properly estimated, so the
theoretical optimality of Kalman Filter-based tracking algorithms cannot be fully exploited.
In this paper, we propose a novel 3D multi-object cooperative tracking
algorithm for autonomous driving via a differentiable multi-sensor Kalman
Filter. Our algorithm learns to estimate the measurement uncertainty of each
detection, which better exploits the theoretical optimality of Kalman
Filter-based tracking methods. The experimental results show that our algorithm
improves the tracking accuracy by 17% with only 0.037x communication costs
compared with the state-of-the-art method in V2V4Real. Our code and videos are
available at https://github.com/eddyhkchiu/DMSTrack/ and
https://eddyhkchiu.github.io/dmstrack.github.io/ .
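
To make the core idea concrete, below is a minimal, illustrative sketch of a multi-sensor Kalman Filter update in which each detection carries its own measurement covariance R. In the paper such per-detection uncertainties are learned; here they are hard-coded, and all function and variable names (kf_predict, kf_update, the constant-velocity model, the example numbers) are assumptions for illustration and do not come from the DMSTrack codebase.

import numpy as np

def kf_predict(x, P, dt=0.1, q=0.1):
    # Constant-velocity prediction for the state [px, py, vx, vy].
    F = np.eye(4)
    F[0, 2] = dt
    F[1, 3] = dt
    Q = q * np.eye(4)
    return F @ x, F @ P @ F.T + Q

def kf_update(x, P, z, R):
    # Standard Kalman update; R is the per-detection measurement covariance.
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0          # observe position only
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

# Sequentially fuse detections of the same object from two CAVs, each
# weighted by its own (ideally learned) measurement covariance.
x = np.array([0.0, 0.0, 1.0, 0.0])
P = np.eye(4)
x, P = kf_predict(x, P)
detections = [
    (np.array([0.12, 0.01]), np.diag([0.30, 0.30])),   # ego detection, less certain
    (np.array([0.09, -0.02]), np.diag([0.05, 0.05])),  # CAV detection, more certain
]
for z, R in detections:
    x, P = kf_update(x, P, z, R)
print(x, np.diag(P))

Because this update is differentiable with respect to R, a network that predicts R from detection features could in principle be trained end-to-end through the filter, which is consistent with the differentiable multi-sensor filter described in the abstract.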
Related papers
- Learning 3D Perception from Others' Predictions [64.09115694891679]
We investigate a new scenario to construct 3D object detectors: learning from the predictions of a nearby unit that is equipped with an accurate detector.
For example, when a self-driving car enters a new area, it may learn from other traffic participants whose detectors have been optimized for that area.
arXiv Detail & Related papers (2024-10-03T16:31:28Z)
- UnLoc: A Universal Localization Method for Autonomous Vehicles using LiDAR, Radar and/or Camera Input [51.150605800173366]
UnLoc is a novel unified neural modeling approach for localization with multi-sensor input in all weather conditions.
Our method is extensively evaluated on Oxford Radar RobotCar, ApolloSouthBay and Perth-WA datasets.
arXiv Detail & Related papers (2023-07-03T04:10:55Z)
- Multi-Modal 3D Object Detection by Box Matching [109.43430123791684]
We propose a novel Fusion network by Box Matching (FBMNet) for multi-modal 3D detection.
With the learned assignments between 3D and 2D object proposals, the fusion for detection can be effectively performed by combining their ROI features.
arXiv Detail & Related papers (2023-05-12T18:08:51Z)
- Minkowski Tracker: A Sparse Spatio-Temporal R-CNN for Joint Object Detection and Tracking [53.64390261936975]
We present Minkowski Tracker, a sparse spatio-temporal R-CNN that jointly solves object detection and tracking problems.
Inspired by region-based CNN (R-CNN), we propose to track motion as a second stage of the object detector R-CNN.
We show in large-scale experiments that the overall performance gain of our method is due to four factors.
arXiv Detail & Related papers (2022-08-22T04:47:40Z)
- Exploring Simple 3D Multi-Object Tracking for Autonomous Driving [10.921208239968827]
3D multi-object tracking in LiDAR point clouds is a key ingredient for self-driving vehicles.
Existing methods are predominantly based on the tracking-by-detection pipeline and inevitably require a matching step for the detection association.
We present SimTrack to simplify the hand-crafted tracking paradigm by proposing an end-to-end trainable model for joint detection and tracking from raw point clouds.
arXiv Detail & Related papers (2021-08-23T17:59:22Z)
- CFTrack: Center-based Radar and Camera Fusion for 3D Multi-Object Tracking [9.62721286522053]
We propose an end-to-end network for joint object detection and tracking based on radar and camera sensor fusion.
Our proposed method uses a center-based radar-camera fusion algorithm for object detection and utilizes a greedy algorithm for object association (a generic sketch of such an association step appears after this list).
We evaluate our method on the challenging nuScenes dataset, where it achieves 20.0 AMOTA and outperforms all vision-based 3D tracking methods in the benchmark.
arXiv Detail & Related papers (2021-07-11T23:56:53Z)
- CurbScan: Curb Detection and Tracking Using Multi-Sensor Fusion [0.8722958995761769]
Curb detection and tracking are useful in vehicle localization and path planning.
We propose an approach to detect and track curbs by fusing together data from multiple sensors.
Our algorithm maintains over 90% accuracy within 4.5-22 meters on the KITTI dataset and within 0-14 meters on our dataset.
arXiv Detail & Related papers (2020-10-09T22:48:20Z)
- Towards Autonomous Driving: a Multi-Modal 360$^{\circ}$ Perception Proposal [87.11988786121447]
This paper presents a framework for 3D object detection and tracking for autonomous vehicles.
The solution, based on a novel sensor fusion configuration, provides accurate and reliable road environment detection.
A variety of tests of the system, deployed in an autonomous vehicle, have successfully assessed the suitability of the proposed perception stack.
arXiv Detail & Related papers (2020-08-21T20:36:21Z)
- Quasi-Dense Similarity Learning for Multiple Object Tracking [82.93471035675299]
We present Quasi-Dense Similarity Learning, which densely samples hundreds of region proposals on a pair of images for contrastive learning.
We can directly combine this similarity learning with existing detection methods to build Quasi-Dense Tracking (QDTrack).
arXiv Detail & Related papers (2020-06-11T17:57:12Z)
- SDVTracker: Real-Time Multi-Sensor Association and Tracking for Self-Driving Vehicles [11.317136648551537]
We present a practical and lightweight tracking system, SDVTracker, that uses a deep learned model for association and state estimation.
We show this system significantly outperforms hand-engineered methods on a real-world urban driving dataset while running in less than 2.5 ms on CPU for a scene with 100 actors.
arXiv Detail & Related papers (2020-03-09T23:07:23Z)
- Probabilistic 3D Multi-Object Tracking for Autonomous Driving [23.036619327925088]
We present our on-line tracking method, which won first place in the NuScenes Tracking Challenge.
Our method estimates the object states by adopting a Kalman Filter.
Our experimental results on the NuScenes validation and test set show that our method outperforms the AB3DMOT baseline method.
arXiv Detail & Related papers (2020-01-16T06:38:02Z)
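
Several of the trackers above depend on a detection-to-track association step (SimTrack's summary notes that tracking-by-detection requires a matching step; CFTrack's summary mentions a greedy association algorithm). The following is a generic, illustrative sketch of greedy nearest-neighbor association between predicted track positions and new detections; the function greedy_associate, its parameters, and the example numbers are assumptions for illustration and are not taken from any paper listed here.

import numpy as np

def greedy_associate(tracks, detections, max_dist=2.0):
    # tracks: (T, 2) predicted positions; detections: (D, 2) detected positions.
    # Greedily picks the closest remaining (track, detection) pair until no
    # pair lies within max_dist; each track and detection is used at most once.
    dist = np.linalg.norm(tracks[:, None, :] - detections[None, :, :], axis=-1)
    pairs = []
    while dist.size and np.isfinite(dist).any() and dist.min() <= max_dist:
        t, d = np.unravel_index(np.argmin(dist), dist.shape)
        pairs.append((int(t), int(d)))
        dist[t, :] = np.inf   # remove the matched track from further matching
        dist[:, d] = np.inf   # remove the matched detection from further matching
    return pairs

tracks = np.array([[0.0, 0.0], [5.0, 5.0]])
detections = np.array([[4.8, 5.1], [0.2, -0.1], [9.0, 9.0]])
print(greedy_associate(tracks, detections))   # [(0, 1), (1, 0)]; detection 2 stays unmatched

Many trackers replace this greedy rule with Hungarian matching or a learned association model; the structure of the step (a cost between predicted tracks and detections, then a one-to-one assignment) stays the same.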
This list is automatically generated from the titles and abstracts of the papers on this site.