Do We Still Need Non-Maximum Suppression? Accurate Confidence Estimates
and Implicit Duplication Modeling with IoU-Aware Calibration
- URL: http://arxiv.org/abs/2309.03110v1
- Date: Wed, 6 Sep 2023 15:47:33 GMT
- Title: Do We Still Need Non-Maximum Suppression? Accurate Confidence Estimates
and Implicit Duplication Modeling with IoU-Aware Calibration
- Authors: Johannes Gilg and Torben Teepe and Fabian Herzog and Philipp Wolters
and Gerhard Rigoll
- Abstract summary: We show that we can eliminate classic NMS-style post-processing by using IoU-aware calibration.
IoU-aware calibration is a conditional Beta calibration.
Our experiments show that the proposed IoU-aware calibration can successfully model duplicate detections.
- Score: 6.093524345727119
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Object detectors are at the heart of many semi- and fully autonomous decision
systems and are poised to become even more indispensable. They are, however,
still lacking in accessibility and can sometimes produce unreliable
predictions. Especially concerning in this regard are the -- essentially
hand-crafted -- non-maximum suppression algorithms that lead to an obfuscated
prediction process and biased confidence estimates. We show that we can
eliminate classic NMS-style post-processing by using IoU-aware calibration.
IoU-aware calibration is a conditional Beta calibration; this makes it
parallelizable with no hyper-parameters. Instead of arbitrary cutoffs or
discounts, it implicitly accounts for the likelihood of each detection being a
duplicate and adjusts the confidence score accordingly, resulting in
empirically based precision estimates for each detection. Our extensive
experiments on diverse detection architectures show that the proposed IoU-aware
calibration can successfully model duplicate detections and improve
calibration. Compared to the standard sequential NMS and calibration approach,
our joint modeling can deliver performance gains over the best NMS-based
alternative while producing consistently better-calibrated confidence
predictions with less complexity. The code for all our experiments is publicly
available at https://github.com/Blueblue4/IoU-AwareCalibration.
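To make the mechanism concrete, the following is a minimal, hypothetical sketch of an IoU-aware (conditional Beta) calibration step at inference time: every detection's raw score is passed through a Beta-calibration-style logistic map whose inputs also include the detection's maximum IoU with a higher-scoring detection, so likely duplicates get their confidence pushed down instead of being hard-suppressed. The feature choice, parameter names, and values are illustrative assumptions, not the authors' implementation; the linked repository contains the actual code.

```python
import numpy as np

def pairwise_iou(boxes):
    """IoU matrix for boxes given as (x1, y1, x2, y2) rows."""
    x1 = np.maximum(boxes[:, None, 0], boxes[None, :, 0])
    y1 = np.maximum(boxes[:, None, 1], boxes[None, :, 1])
    x2 = np.minimum(boxes[:, None, 2], boxes[None, :, 2])
    y2 = np.minimum(boxes[:, None, 3], boxes[None, :, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    union = area[:, None] + area[None, :] - inter
    return inter / np.clip(union, 1e-9, None)

def iou_aware_calibrate(boxes, scores, params):
    """Rescale raw detection scores with a Beta-calibration-style map that also
    sees each box's max IoU with a higher-scoring box (duplicate evidence).

    `params = (a, b, c, d)` are assumed to have been fit on a held-out set
    (e.g. by logistic regression on true/false-positive labels); the exact
    conditioning used in the paper may differ -- this is an illustration.
    """
    a, b, c, d = params
    iou = pairwise_iou(boxes)
    np.fill_diagonal(iou, 0.0)
    # For each detection, the max IoU with any detection that outscores it.
    higher = scores[None, :] > scores[:, None]
    max_iou_higher = (iou * higher).max(axis=1)
    s = np.clip(scores, 1e-6, 1 - 1e-6)
    # Beta-calibration link conditioned on the duplicate-overlap feature.
    z = a * np.log(s) - b * np.log(1.0 - s) + c * max_iou_higher + d
    return 1.0 / (1.0 + np.exp(-z))

# Toy usage: two overlapping boxes on the same object and one distinct box.
boxes = np.array([[0, 0, 10, 10], [1, 1, 10, 10], [50, 50, 60, 60]], float)
scores = np.array([0.9, 0.85, 0.8])
print(iou_aware_calibrate(boxes, scores, params=(1.0, 1.0, -4.0, 0.0)))
```

With the toy parameters above, the duplicate of the highest-scoring box drops from 0.85 to roughly 0.18, while the non-overlapping boxes keep their scores, all without a sequential suppression pass.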
Related papers
- A Confidence Interval for the $\ell_2$ Expected Calibration Error [35.88784957918326]
We develop confidence intervals for the $\ell_2$ Expected Calibration Error (ECE).
We consider top-1-to-$k$ calibration, which includes both the popular notion of confidence calibration as well as full calibration.
For a debiased estimator of the ECE, we show asymptotic normality, but with different convergence rates and asymptotic variances for calibrated and miscalibrated models.
arXiv Detail & Related papers (2024-08-16T20:00:08Z)
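As background for the entry above, here is a small, self-contained sketch of the standard binned plug-in estimator of the $\ell_2$ Expected Calibration Error for top-1 predictions; the debiasing and the confidence-interval construction of the paper are not reproduced, and the bin count is an arbitrary choice.

```python
import numpy as np

def l2_ece_plugin(confidences, correct, n_bins=15):
    """Binned plug-in estimate of the l2 ECE for top-1 predictions: the square
    root of the bin-weighted squared confidence/accuracy gaps.

    confidences: predicted top-1 probabilities in [0, 1]
    correct:     1 if the top-1 prediction was right, else 0
    """
    confidences = np.asarray(confidences, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece_sq = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        weight = in_bin.mean()                      # fraction of samples in the bin
        gap = confidences[in_bin].mean() - correct[in_bin].mean()
        ece_sq += weight * gap ** 2                 # squared confidence/accuracy gap
    return np.sqrt(ece_sq)

# Toy usage with synthetic, slightly overconfident predictions.
rng = np.random.default_rng(0)
conf = rng.uniform(0.5, 1.0, size=5000)
correct = rng.random(5000) < (conf - 0.05)          # accuracy trails confidence by 0.05
print(l2_ece_plugin(conf, correct))                 # close to the injected 0.05 gap
```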
- Calibrating Large Language Models with Sample Consistency [76.23956851098598]
We explore the potential of deriving confidence from the distribution of multiple randomly sampled model generations, via three measures of consistency.
Results show that consistency-based calibration methods outperform existing post-hoc approaches.
We offer practical guidance on choosing suitable consistency metrics for calibration, tailored to the characteristics of various LMs.
arXiv Detail & Related papers (2024-02-21T16:15:20Z)
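To illustrate the consistency idea from the entry above, the snippet below turns agreement among repeated samples into a confidence score. The majority-vote agreement used here is only one assumed measure; the three consistency measures studied in the paper are not reproduced.

```python
from collections import Counter

def consistency_confidence(sampled_answers):
    """Confidence from sample agreement: generate the answer several times
    (e.g. with temperature > 0) and use the majority answer's frequency as
    the confidence estimate. A calibration map can be fit on top of this.
    """
    counts = Counter(a.strip().lower() for a in sampled_answers)
    answer, votes = counts.most_common(1)[0]
    return answer, votes / len(sampled_answers)

# Toy usage: 8 sampled generations for the same question.
samples = ["Paris", "Paris", "paris", "Lyon", "Paris", "Paris", "Paris", "Marseille"]
print(consistency_confidence(samples))   # ('paris', 0.75)
```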
- Multiclass Alignment of Confidence and Certainty for Network Calibration [10.15706847741555]
Recent studies reveal that deep neural networks (DNNs) are prone to making overconfident predictions.
We propose a new train-time calibration method, which features a simple, plug-and-play auxiliary loss known as multi-class alignment of predictive mean confidence and predictive certainty (MACC)
Our method achieves state-of-the-art calibration performance for both in-domain and out-domain predictions.
arXiv Detail & Related papers (2023-09-06T00:56:24Z)
- Two Sides of Miscalibration: Identifying Over and Under-Confidence Prediction for Network Calibration [1.192436948211501]
Proper confidence calibration of deep neural networks is essential for reliable predictions in safety-critical tasks.
Miscalibration can lead to model over-confidence and/or under-confidence.
We introduce a novel metric, a miscalibration score, to identify the overall and class-wise calibration status.
We use the class-wise miscalibration score as a proxy to design a calibration technique that can tackle both over and under-confidence.
arXiv Detail & Related papers (2023-08-06T17:59:14Z)
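The summary above does not spell out the miscalibration score, so the sketch below is only an assumed, simplified stand-in: a signed class-wise gap between mean confidence and accuracy, where positive values read as over-confidence and negative values as under-confidence.

```python
import numpy as np

def classwise_signed_gap(probs, labels):
    """Signed per-class gap between mean predicted confidence and accuracy.

    probs:  (N, C) predicted class probabilities
    labels: (N,)   ground-truth class indices
    Positive gap -> the class is predicted over-confidently, negative -> under.
    (Illustrative stand-in, not the paper's exact miscalibration score.)
    """
    preds = probs.argmax(axis=1)
    conf = probs.max(axis=1)
    gaps = {}
    for c in range(probs.shape[1]):
        mask = preds == c
        if mask.any():
            gaps[c] = conf[mask].mean() - (labels[mask] == c).mean()
    return gaps

# Toy usage with three classes and uninformative random predictions.
rng = np.random.default_rng(1)
logits = rng.normal(size=(1000, 3))
probs = np.exp(logits) / np.exp(logits).sum(axis=1, keepdims=True)
labels = rng.integers(0, 3, size=1000)
print(classwise_signed_gap(probs, labels))
```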
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accurateness of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in and out-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
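To give the train-time idea from the "Bridging Precision and Confidence" entry a concrete shape, here is an assumed auxiliary penalty that pulls each matched box's class confidence toward its IoU with the assigned ground truth; the paper's actual loss formulation may differ, and the weighting is illustrative.

```python
import torch

def confidence_iou_alignment_loss(pred_conf, matched_iou):
    """Auxiliary train-time penalty (illustrative, not the paper's exact loss):
    encourage the predicted class confidence of each positive box to match the
    IoU between that box and its assigned ground-truth box.

    pred_conf:   (N,) confidences of boxes matched to ground truth, in (0, 1)
    matched_iou: (N,) IoU of each matched box with its ground-truth box
    """
    return torch.mean(torch.abs(pred_conf - matched_iou))

# Toy usage: combine with the usual detection losses.
pred_conf = torch.tensor([0.95, 0.80, 0.60], requires_grad=True)
matched_iou = torch.tensor([0.70, 0.85, 0.55])
aux = confidence_iou_alignment_loss(pred_conf, matched_iou)
total_loss = 1.0 * aux          # in practice: cls_loss + box_loss + lambda * aux
total_loss.backward()
print(aux.item())
```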
- Sharp Calibrated Gaussian Processes [58.94710279601622]
State-of-the-art approaches for designing calibrated models rely on inflating the Gaussian process posterior variance.
We present a calibration approach that generates predictive quantiles using a computation inspired by the vanilla Gaussian process posterior variance.
Our approach is shown to yield a calibrated model under reasonable assumptions.
arXiv Detail & Related papers (2023-02-23T12:17:36Z)
- Sample-dependent Adaptive Temperature Scaling for Improved Calibration [95.7477042886242]
A common post-hoc approach to compensate for neural networks being wrong with high confidence is to perform temperature scaling.
We propose to predict a different temperature value for each input, allowing us to adjust the mismatch between confidence and accuracy.
We test our method on the ResNet50 and WideResNet28-10 architectures using the CIFAR10/100 and Tiny-ImageNet datasets.
arXiv Detail & Related papers (2022-07-13T14:13:49Z)
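As a sketch of the sample-dependent temperature idea from the entry above: a small head predicts a positive, per-input temperature and the frozen classifier's logits are divided by it before the softmax. The head architecture, the softplus parameterization, and the training loop are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn as nn

class SampleDependentTemperature(nn.Module):
    """Predict a positive, per-sample temperature from the logits themselves
    and rescale them before the softmax (illustrative sketch)."""

    def __init__(self, num_classes, hidden=32):
        super().__init__()
        self.head = nn.Sequential(
            nn.Linear(num_classes, hidden), nn.ReLU(), nn.Linear(hidden, 1)
        )

    def forward(self, logits):
        # softplus keeps the temperature positive; +1 biases it toward >= 1.
        temperature = nn.functional.softplus(self.head(logits)) + 1.0
        return logits / temperature

# Toy usage: fit the head on held-out validation logits with the NLL.
num_classes = 10
model = SampleDependentTemperature(num_classes)
logits = torch.randn(256, num_classes)          # frozen classifier's outputs
labels = torch.randint(0, num_classes, (256,))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(logits), labels)
    loss.backward()
    opt.step()
```

Dividing by a positive per-sample temperature never changes the argmax, so accuracy is preserved and only the confidence is adjusted.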
- T-Cal: An optimal test for the calibration of predictive models [49.11538724574202]
We consider detecting mis-calibration of predictive models using a finite validation dataset as a hypothesis testing problem.
Detecting mis-calibration is only possible when the conditional probabilities of the classes are sufficiently smooth functions of the predictions.
We propose T-Cal, a minimax test for calibration based on a de-biased plug-in estimator of the $\ell_2$-Expected Calibration Error (ECE).
arXiv Detail & Related papers (2022-03-03T16:58:54Z)
- MBCT: Tree-Based Feature-Aware Binning for Individual Uncertainty Calibration [29.780204566046503]
We propose a feature-aware binning framework, called Multiple Boosting Calibration Trees (MBCT).
Our MBCT is non-monotonic, and has the potential to improve order accuracy, due to its learnable binning scheme and the individual calibration.
Results show that our method outperforms all competing models in terms of both calibration error and order accuracy.
arXiv Detail & Related papers (2022-02-09T08:59:16Z)
- Bayesian Confidence Calibration for Epistemic Uncertainty Modelling [4.358626952482686]
We introduce a framework to obtain confidence estimates in conjunction with an uncertainty of the calibration method.
We achieve state-of-the-art calibration performance for object detection calibration.
arXiv Detail & Related papers (2021-09-21T10:53:16Z)
- On the Dark Side of Calibration for Modern Neural Networks [65.83956184145477]
We show the breakdown of expected calibration error (ECE) into predicted confidence and refinement.
We highlight that regularisation-based calibration only focuses on naively reducing a model's confidence.
We find that many calibration approaches, such as label smoothing and mixup, lower the utility of a DNN by degrading its refinement.
arXiv Detail & Related papers (2021-06-17T11:04:14Z)
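To make the calibration-versus-refinement trade-off from the last entry tangible, the following is an assumed, Brier-style binned split: the first term measures the confidence/accuracy mismatch, the second how well the confidence bins separate correct from incorrect predictions. It illustrates the general concept rather than the exact breakdown used in the paper.

```python
import numpy as np

def calibration_and_refinement(confidences, correct, n_bins=15):
    """Binned, Brier-style split (illustrative): 'calibration' is the weighted
    squared gap between bin confidence and bin accuracy; 'refinement' is the
    weighted bin-wise accuracy variance acc*(1-acc) -- a sharp model that
    separates right from wrong predictions gets a low refinement term."""
    confidences = np.asarray(confidences, float)
    correct = np.asarray(correct, float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    calibration, refinement = 0.0, 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        w = in_bin.mean()
        acc = correct[in_bin].mean()
        conf = confidences[in_bin].mean()
        calibration += w * (conf - acc) ** 2
        refinement += w * acc * (1.0 - acc)
    return calibration, refinement

# Toy usage: a model squashed toward 0.5 looks calibrated but is unrefined.
rng = np.random.default_rng(2)
conf = np.full(5000, 0.5)                         # uninformative confidences
correct = rng.random(5000) < 0.5
print(calibration_and_refinement(conf, correct))  # ~ (0.0, 0.25)
```

A model that squashes every confidence to 0.5 gets a near-zero calibration term but the worst possible refinement term, which is the trade-off the entry warns about.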