Collaborative Multi-Object Tracking with Conformal Uncertainty
Propagation
- URL: http://arxiv.org/abs/2303.14346v2
- Date: Wed, 31 Jan 2024 16:00:54 GMT
- Title: Collaborative Multi-Object Tracking with Conformal Uncertainty
Propagation
- Authors: Sanbao Su, Songyang Han, Yiming Li, Zhili Zhang, Chen Feng, Caiwen
Ding, Fei Miao
- Abstract summary: Collaborative object detection (COD) has been proposed to improve detection accuracy and reduce uncertainty.
We design an uncertainty propagation framework called MOT-CUP to enhance MOT performance.
Our framework first quantifies the uncertainty of COD through direct modeling and conformal prediction, and propagates this uncertainty into the motion prediction and association steps.
- Score: 30.47064353266713
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detection and multiple object tracking (MOT) are essential components
of self-driving systems. Accurate detection and uncertainty quantification are
both critical for onboard modules, such as perception, prediction, and
planning, to improve the safety and robustness of autonomous vehicles.
Collaborative object detection (COD) has been proposed to improve detection
accuracy and reduce uncertainty by leveraging the viewpoints of multiple
agents. However, little attention has been paid to how to leverage the
uncertainty quantification from COD to enhance MOT performance. In this paper,
as the first attempt to address this challenge, we design an uncertainty
propagation framework called MOT-CUP. Our framework first quantifies the
uncertainty of COD through direct modeling and conformal prediction, and
propagates this uncertainty information into the motion prediction and
association steps. MOT-CUP is designed to work with different collaborative
object detectors and baseline MOT algorithms. We evaluate MOT-CUP on V2X-Sim, a
comprehensive collaborative perception dataset, and demonstrate a 2%
improvement in accuracy and a 2.67X reduction in uncertainty compared to the
baselines, e.g., SORT and ByteTrack. In scenarios characterized by high
occlusion levels, our MOT-CUP demonstrates a noteworthy 4.01% improvement in
accuracy. MOT-CUP demonstrates the importance of uncertainty quantification in
both COD and MOT, and provides the first attempt to improve the accuracy and
reduce the uncertainty in MOT based on COD through uncertainty propagation. Our
code is publicly available at https://coperception.github.io/MOT-CUP/.
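The two steps the abstract names are concrete enough to sketch. Below is a minimal, illustrative Python sketch, not the authors' released code: per-coordinate detection uncertainty is calibrated with split conformal prediction, and the calibrated interval width then inflates the Kalman filter's measurement noise used in motion prediction and association. All function and variable names (calibrate_quantile, kf_update, sigma_hat, etc.) are assumptions for illustration.

```python
import numpy as np

def calibrate_quantile(residuals, alpha=0.1):
    """Split conformal prediction: residuals are nonconformity scores
    |y_true - y_pred| / sigma_hat from a held-out calibration set; the
    returned quantile q makes q * sigma_hat a (1 - alpha) interval."""
    n = len(residuals)
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(residuals, level)

def kf_update(x, P, z, H, R_base, q, sigma_hat):
    """Standard Kalman update with the measurement noise inflated by the
    conformally calibrated width q * sigma_hat, so less certain
    detections pull the track state less."""
    R = R_base + np.diag((q * sigma_hat) ** 2)   # propagate detection uncertainty
    S = H @ P @ H.T + R                          # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)               # Kalman gain
    x_new = x + K @ (z - H @ x)
    P_new = (np.eye(len(x)) - K @ H) @ P
    return x_new, P_new
```

In this sketch, sigma_hat stands in for the detector's directly modeled uncertainty scale, and q rescales it so the resulting intervals carry a coverage guarantee before being propagated downstream.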
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Uncertainty Estimation for 3D Object Detection via Evidential Learning [63.61283174146648]
We introduce a framework for quantifying uncertainty in 3D object detection by leveraging an evidential learning loss on Bird's Eye View representations in the 3D detector.
We demonstrate both the efficacy and importance of these uncertainty estimates in identifying out-of-distribution scenes, poorly localized objects, and missing (false negative) detections.
arXiv Detail & Related papers (2024-10-31T13:13:32Z)
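As a point of reference, the sketch below shows the standard evidential-learning formulation (subjective logic over a Dirichlet distribution, following Sensoy et al., 2018) that such losses build on; it is an assumed, generic illustration, not the paper's exact loss or BEV head.

```python
import numpy as np

def evidential_uncertainty(logits):
    """Map raw class logits to Dirichlet parameters and return the
    expected class probabilities plus a scalar uncertainty in [0, 1],
    with no sampling required."""
    evidence = np.maximum(logits, 0.0)     # non-negative evidence (ReLU)
    alpha = evidence + 1.0                 # Dirichlet concentration parameters
    strength = alpha.sum()                 # total evidence S
    probs = alpha / strength               # expected class probabilities
    uncertainty = len(alpha) / strength    # u = K / S: high when evidence is low
    return probs, uncertainty
```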
- UA-Track: Uncertainty-Aware End-to-End 3D Multi-Object Tracking [37.857915442467316]
3D multiple object tracking (MOT) plays a crucial role in autonomous driving perception.
Recent end-to-end query-based trackers simultaneously detect and track objects, showing promising potential for the 3D MOT task.
Existing methods overlook the uncertainty issue, which refers to the lack of precise confidence about the state and location of tracked objects.
We propose an Uncertainty-Aware 3D MOT framework, UA-Track, which tackles the uncertainty problem from multiple aspects.
arXiv Detail & Related papers (2024-06-04T09:34:46Z)
- Ego-Motion Aware Target Prediction Module for Robust Multi-Object Tracking [2.7898966850590625]
We introduce a novel Kalman filter (KF)-based prediction module called Ego-motion Aware Target Prediction (EMAP).
Our proposed method decouples the impact of camera rotational and translational velocity from the object trajectories by reformulating the Kalman Filter.
EMAP remarkably drops the number of identity switches (IDSW) of OC-SORT and Deep OC-SORT by 73% and 21%, respectively.
arXiv Detail & Related papers (2024-04-03T23:24:25Z)
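The decoupling idea can be sketched in a few lines. This is an illustrative version under simplifying assumptions (a 2D constant-velocity state and a known ego rotation R_ego and translation t_ego between frames), not EMAP's exact reformulation; all names are hypothetical.

```python
import numpy as np

def predict_with_ego_compensation(x, P, F, Q, R_ego, t_ego):
    """x = [px, py, vx, vy]: re-express the track state in the current
    ego frame before the constant-velocity predict, so the filter models
    only the object's own motion, not the camera's."""
    T = np.block([[R_ego, np.zeros((2, 2))],
                  [np.zeros((2, 2)), R_ego]])   # rotate position and velocity
    x = T @ x
    x[:2] += t_ego                              # shift position by ego translation
    P = T @ P @ T.T                             # rotate the covariance accordingly
    return F @ x, F @ P @ F.T + Q               # standard Kalman predict step
```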
- UncertaintyTrack: Exploiting Detection and Localization Uncertainty in Multi-Object Tracking [8.645078288584305]
Multi-object tracking (MOT) methods have seen a significant boost in performance recently.
We introduce UncertaintyTrack, a collection of extensions that can be applied to multiple TBD trackers.
Experiments on the Berkeley Deep Drive MOT dataset show that the combination of our method and informative uncertainty estimates reduces the number of ID switches by around 19%.
arXiv Detail & Related papers (2024-02-19T17:27:04Z)
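One common way a tracking-by-detection tracker can exploit localization uncertainty, sketched below as a generic illustration (not necessarily UncertaintyTrack's mechanism), is to gate detection-to-track assignment with the Mahalanobis distance under the combined track and detection covariance.

```python
import numpy as np

def mahalanobis_gate(track_mean, track_cov, det_box, det_cov, gate=9.4877):
    """Return (distance, accept): accept if the squared Mahalanobis
    distance falls below the chi-square 95% threshold for 4 DoF, so
    matches are judged more leniently along uncertain axes."""
    diff = det_box - track_mean
    S = track_cov + det_cov              # combined localization uncertainty
    d2 = diff @ np.linalg.inv(S) @ diff  # squared Mahalanobis distance
    return d2, d2 < gate
```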
- DARTH: Holistic Test-time Adaptation for Multiple Object Tracking [87.72019733473562]
Multiple object tracking (MOT) is a fundamental component of perception systems for autonomous driving.
Despite the importance of safety in driving systems, no prior solution had addressed adapting MOT to domain shift under test-time conditions.
We introduce DARTH, a holistic test-time adaptation framework for MOT.
arXiv Detail & Related papers (2023-10-03T10:10:42Z)
- Uncertainty-aware Unsupervised Multi-Object Tracking [33.53331700312752]
Unsupervised multi-object trackers struggle to learn reliable feature embeddings.
Recent self-supervised techniques have been adopted, but they fail to capture temporal relations.
This paper argues that though the uncertainty problem is inevitable, it is possible to leverage the uncertainty itself to improve the learned consistency in turn.
arXiv Detail & Related papers (2023-07-28T09:03:06Z)
- Uncertainty Quantification of Collaborative Detection for Self-Driving [12.590332512097698]
Sharing information between connected and autonomous vehicles (CAVs) improves the performance of collaborative object detection for self-driving.
However, CAVs still suffer from uncertainty in object detection due to practical challenges.
Our work is the first to estimate the uncertainty of collaborative object detection.
arXiv Detail & Related papers (2022-09-16T20:30:45Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Uncertainty-Aware Deep Calibrated Salient Object Detection [74.58153220370527]
Existing deep neural network based salient object detection (SOD) methods mainly focus on pursuing high network accuracy.
These methods overlook the gap between network accuracy and prediction confidence, known as the confidence uncalibration problem.
We introduce an uncertainty-aware deep SOD network and propose two strategies to prevent deep SOD networks from being overconfident.
arXiv Detail & Related papers (2020-12-10T23:28:36Z)
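The "confidence uncalibration" gap this entry refers to is commonly measured with the Expected Calibration Error (ECE); the sketch below is a minimal, generic illustration of that metric, not the paper's method.

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """confidences: predicted confidence per sample in [0, 1];
    correct: 1 if the prediction was right, else 0. ECE is the
    population-weighted gap between confidence and accuracy per bin."""
    confidences, correct = np.asarray(confidences), np.asarray(correct)
    bins = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bins[:-1], bins[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(confidences[mask].mean() - correct[mask].mean())
            ece += mask.mean() * gap    # weight by fraction of samples in bin
    return ece
```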
- FairMOT: On the Fairness of Detection and Re-Identification in Multiple Object Tracking [92.48078680697311]
Multi-object tracking (MOT) is an important problem in computer vision.
We present a simple yet effective approach termed FairMOT, built on the anchor-free object detection architecture CenterNet.
The approach achieves high accuracy for both detection and tracking.
arXiv Detail & Related papers (2020-04-04T08:18:00Z)