Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction
- URL: http://arxiv.org/abs/2403.07263v2
- Date: Tue, 30 Jul 2024 11:31:31 GMT
- Title: Adaptive Bounding Box Uncertainties via Two-Step Conformal Prediction
- Authors: Alexander Timans, Christoph-Nikolas Straehle, Kaspar Sakmann, Eric Nalisnick
- Abstract summary: We leverage conformal prediction to obtain uncertainty intervals with guaranteed coverage for object bounding boxes.
One challenge in doing so is that bounding box predictions are conditioned on the object's class label.
We develop a novel two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals of bounding boxes.
- Score: 44.83236260638115
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Quantifying a model's predictive uncertainty is essential for safety-critical applications such as autonomous driving. We consider quantifying such uncertainty for multi-object detection. In particular, we leverage conformal prediction to obtain uncertainty intervals with guaranteed coverage for object bounding boxes. One challenge in doing so is that bounding box predictions are conditioned on the object's class label. Thus, we develop a novel two-step conformal approach that propagates uncertainty in predicted class labels into the uncertainty intervals of bounding boxes. This broadens the validity of our conformal coverage guarantees to include incorrectly classified objects, thus offering more actionable safety assurances. Moreover, we investigate novel ensemble and quantile regression formulations to ensure the bounding box intervals are adaptive to object size, leading to a more balanced coverage. Validating our two-step approach on real-world datasets for 2D bounding box localization, we find that desired coverage levels are satisfied with practically tight predictive uncertainty intervals.
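Read as split conformal prediction, the two-step construction can be sketched as follows: first a conformal prediction set over class labels, then class-conditional box quantiles made conservative over that set. This is a minimal illustrative sketch, assuming per-coordinate absolute residuals as nonconformity scores, softmax class probabilities, and a split of the miscoverage budget between `alpha_cls` and `alpha_box`; all function names are hypothetical, and this is not the authors' released implementation.

```python
import numpy as np

def conformal_quantile(scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = scores.shape[0]
    level = min(np.ceil((n + 1) * (1 - alpha)) / n, 1.0)
    return np.quantile(scores, level, axis=0, method="higher")

def label_prediction_sets(cal_probs, cal_labels, test_probs, alpha_cls):
    """Step 1: conformal prediction sets over class labels
    (nonconformity = 1 - softmax probability of the true class)."""
    cal_scores = 1.0 - cal_probs[np.arange(len(cal_labels)), cal_labels]
    qhat = conformal_quantile(cal_scores, alpha_cls)
    return [np.flatnonzero(1.0 - p <= qhat) for p in test_probs]

def box_intervals(cal_residuals, cal_classes, test_boxes, pred_sets, alpha_box):
    """Step 2: class-conditional quantiles of per-coordinate |residual|,
    taking the widest quantile over each object's label prediction set."""
    qhat_per_class = {
        c: conformal_quantile(cal_residuals[cal_classes == c], alpha_box)
        for c in np.unique(cal_classes)  # assumes every class is calibrated
    }
    intervals = []
    for box, labels in zip(test_boxes, pred_sets):
        if len(labels) == 0:
            q = conformal_quantile(cal_residuals, alpha_box)  # pooled fallback
        else:
            q = np.max([qhat_per_class[c] for c in labels], axis=0)
        intervals.append((box - q, box + q))  # per-coordinate interval
    return intervals
```

Taking the widest class-conditional quantile over the whole label set is what extends coverage to misclassified objects: even when the top-1 label is wrong, the true label is in the set with high probability, and its interval is then contained in the reported one.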
Related papers
- SConU: Selective Conformal Uncertainty in Large Language Models [59.25881667640868]
We propose a novel approach termed Selective Conformal Uncertainty (SConU).
We develop two conformal p-values that determine whether a given sample deviates from the uncertainty distribution of the calibration set at a specific, manageable risk level.
Our approach not only facilitates rigorous management of miscoverage rates across both single-domain and interdisciplinary contexts, but also enhances the efficiency of predictions.
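For reference, the basic selective mechanism behind conformal p-values can be sketched as below; SConU's two p-values are more elaborate than this textbook rank-based version, and the function names here are illustrative only.

```python
import numpy as np

def conformal_p_value(cal_scores, test_score):
    """Rank-based conformal p-value: small values mean the test sample's
    nonconformity score is atypically large for the calibration set."""
    n = len(cal_scores)
    return (1 + np.sum(cal_scores >= test_score)) / (n + 1)

def passes_exchangeability_check(cal_scores, test_score, delta=0.05):
    """Abstain from issuing a prediction set when the p-value falls
    below the manageable risk level delta."""
    return conformal_p_value(cal_scores, test_score) >= delta
```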
arXiv Detail & Related papers (2025-04-19T03:01:45Z)
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Adaptive Conformal Inference by Particle Filtering under Hidden Markov Models [8.505262415500168]
This paper proposes an adaptive conformal inference framework that leverages particle filtering to handle the unobservable hidden states of hidden Markov models.
Rather than focusing directly on the hidden state itself, the method uses weighted particles as an approximation of its actual posterior distribution.
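The key object here is a weighted empirical quantile: the particle weights stand in for the unobservable posterior. A minimal sketch under assumed inputs (a filter that supplies particle nonconformity scores and normalized weights; names hypothetical):

```python
import numpy as np

def weighted_quantile(scores, weights, q):
    """q-quantile of the weighted empirical distribution defined by particles."""
    order = np.argsort(scores)
    s, w = scores[order], weights[order]
    cdf = np.cumsum(w) / np.sum(w)
    idx = min(np.searchsorted(cdf, q), len(s) - 1)
    return s[idx]

# Illustrative use: each particle approximating the hidden-state posterior
# induces a nonconformity score; the interval radius is a weighted quantile
# of those scores rather than an unweighted one.
rng = np.random.default_rng(0)
scores = rng.normal(size=500) ** 2        # hypothetical particle scores
weights = rng.dirichlet(np.ones(500))     # normalized particle weights
radius = weighted_quantile(scores, weights, 0.9)
```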
arXiv Detail & Related papers (2024-11-03T13:15:32Z)
- Integrating uncertainty quantification into randomized smoothing based robustness guarantees [18.572496359670797]
Deep neural networks are vulnerable to adversarial attacks which can cause hazardous incorrect predictions in safety-critical applications.
Certified robustness via randomized smoothing gives a probabilistic guarantee that the smoothed classifier's predictions will not change within an $\ell_2$-ball around a given input.
Uncertainty-based rejection is a technique often applied in practice to defend models against adversarial attacks.
We demonstrate that the proposed framework allows for a systematic evaluation of different network architectures and uncertainty measures.
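The underlying certificate is the standard randomized-smoothing bound of Cohen et al. (2019), which can be stated in a few lines; this is the generic guarantee the paper builds on, not its uncertainty-integration framework itself.

```python
from scipy.stats import norm

def certified_l2_radius(p_lower, sigma):
    """Standard randomized-smoothing certificate (Cohen et al., 2019):
    if the smoothed classifier's top class has probability at least
    p_lower > 1/2 under N(0, sigma^2 I) input noise, the prediction
    cannot change within an l2-ball of this radius."""
    if p_lower <= 0.5:
        return 0.0  # abstain: no certificate available
    return sigma * norm.ppf(p_lower)
```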
arXiv Detail & Related papers (2024-10-27T13:07:43Z)
- Calibrated Perception Uncertainty Across Objects and Regions in Bird's-Eye-View [2.3943776585660976]
We highlight limitations in the state-of-the-art and propose a more complete set of uncertainties to be reported.
We demonstrate that the obtained probabilities are not calibrated out-of-the-box and propose methods to achieve well-calibrated uncertainties.
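A common diagnostic behind the "not calibrated out-of-the-box" claim is the expected calibration error; a minimal sketch follows (this is a standard metric, not necessarily the paper's exact evaluation protocol).

```python
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: mass-weighted average gap between accuracy and
    confidence per bin; 0 for a perfectly calibrated model."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if in_bin.any():
            gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
            ece += in_bin.mean() * gap
    return ece
```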
arXiv Detail & Related papers (2022-11-08T16:01:17Z)
- Approximate Conditional Coverage via Neural Model Approximations [0.030458514384586396]
We analyze a data-driven procedure for obtaining empirically reliable approximate conditional coverage.
We demonstrate the potential for substantial (and otherwise unknowable) under-coverage from split-conformal alternatives that provide only marginal coverage guarantees.
arXiv Detail & Related papers (2022-05-28T02:59:05Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Uncertainty-Aware Model Adaptation for Unsupervised Cross-Domain Object Detection [12.807987076435928]
This work tackles the unsupervised cross-domain object detection problem.
It aims to generalize a pre-trained object detector to a new target domain without labels.
arXiv Detail & Related papers (2021-08-28T09:37:18Z)
- Distribution-free uncertainty quantification for classification under label shift [105.27463615756733]
We focus on uncertainty quantification (UQ) for classification problems via two avenues.
We first argue that label shift hurts UQ, by showing degradation in coverage and calibration.
We examine reweighting-based corrections theoretically in a distribution-free framework and demonstrate their excellent practical performance.
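One standard correction in this setting is importance-reweighted split conformal prediction, weighting calibration points by the estimated label-shift ratio. A simplified sketch with hypothetical names, omitting the test-point weight that the exact method also includes:

```python
import numpy as np

def weighted_conformal_quantile(cal_scores, cal_labels, w_per_class, alpha):
    """Importance-reweighted split-conformal quantile: calibration points
    are weighted by w(y) = target_prior(y) / source_prior(y) so that the
    quantile stays (approximately) valid under label shift."""
    w = w_per_class[cal_labels]
    order = np.argsort(cal_scores)
    s, ws = cal_scores[order], w[order]
    cdf = np.cumsum(ws) / np.sum(ws)
    idx = min(np.searchsorted(cdf, 1.0 - alpha), len(s) - 1)
    return s[idx]
```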
arXiv Detail & Related papers (2021-03-04T20:51:03Z)
- Localization Uncertainty Estimation for Anchor-Free Object Detection [48.931731695431374]
There are several limitations of the existing uncertainty estimation methods for anchor-based object detection.
We propose a new localization uncertainty estimation method called UAD for anchor-free object detection.
Our method captures uncertainty in the four directional box offsets in a homogeneous way, so that it can tell which direction of a detection is uncertain.
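A common way to obtain such direction-wise uncertainty is a heteroscedastic Gaussian head per box offset; the sketch below shows that generic formulation, which may differ from UAD's exact loss.

```python
import numpy as np

def directional_gaussian_nll(mu, log_var, target):
    """Heteroscedastic regression loss applied independently to the four
    box offsets (left, right, top, bottom): the learned variance of each
    offset indicates which direction of the box is uncertain.
    mu, log_var, target: arrays of shape (N, 4), one column per direction."""
    return 0.5 * (log_var + (target - mu) ** 2 / np.exp(log_var))
```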
arXiv Detail & Related papers (2020-06-28T13:49:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.