Towards Improving Calibration in Object Detection Under Domain Shift
- URL: http://arxiv.org/abs/2209.07601v1
- Date: Thu, 15 Sep 2022 20:32:28 GMT
- Title: Towards Improving Calibration in Object Detection Under Domain Shift
- Authors: Muhammad Akhtar Munir, Muhammad Haris Khan, M. Saquib Sarfraz, Mohsen Ali
- Abstract summary: We study the calibration of current object detection models, particularly under domain shift.
We introduce a plug-and-play train-time calibration loss for object detection.
Second, we devise a new uncertainty quantification mechanism for object detection which can implicitly calibrate the commonly used self-training based domain adaptive detectors.
- Score: 9.828212203380133
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The increasing use of deep neural networks in safety-critical applications
requires the trained models to be well-calibrated. Most current calibration
techniques address classification problems while focusing on improving
calibration on in-domain predictions. Little to no attention is paid to the
calibration of visual object detectors, which hold a similar place and
importance in many decision-making systems. In this paper, we study the
calibration of current object detection models, particularly under domain
shift. To this end, we first introduce a plug-and-play train-time calibration
loss for object detection. It can be used as an auxiliary loss function to
improve a detector's calibration. Second, we devise a new uncertainty
quantification mechanism for object detection which can implicitly calibrate
the commonly used self-training based domain adaptive detectors. We include in
our study both single-stage and two-stage object detectors. We demonstrate that
our loss improves calibration for both in-domain and out-of-domain detections
with notable margins. Finally, we show the utility of our techniques in
calibrating the domain adaptive object detectors in diverse domain shift
scenarios.
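The abstract describes the train-time calibration loss only at a high level. As a rough illustration of how such a plug-and-play auxiliary term can be attached to a detector's training objective, the sketch below penalizes the gap between a detection's class confidence and a proxy for its correctness (here, the IoU of the predicted box with its matched ground truth). The function names, the IoU proxy, and the weighting factor beta are illustrative assumptions, not the exact loss proposed in the paper.

```python
# Minimal sketch of a plug-and-play auxiliary calibration term (assumption:
# confidence should track detection quality, approximated here by the IoU
# with the matched ground-truth box). Not the paper's exact formulation.
import torch


def auxiliary_calibration_loss(confidences: torch.Tensor,
                               matched_ious: torch.Tensor) -> torch.Tensor:
    """Penalize the gap between class confidence and a correctness proxy.

    confidences:  (N,) confidence scores of matched detections, in [0, 1].
    matched_ious: (N,) IoU of each detection with its matched ground truth.
    """
    return torch.mean((confidences - matched_ious) ** 2)


def total_loss(detection_loss: torch.Tensor,
               confidences: torch.Tensor,
               matched_ious: torch.Tensor,
               beta: float = 1.0) -> torch.Tensor:
    # The calibration term is simply added to the usual detection loss,
    # weighted by a hyperparameter beta (the value here is illustrative).
    return detection_loss + beta * auxiliary_calibration_loss(confidences,
                                                              matched_ious)
```

In this reading, the auxiliary term can be attached to either a single-stage or a two-stage detector's existing loss without changing the architecture, which is what "plug-and-play" suggests.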
Related papers
- Learning to Make Keypoints Sub-Pixel Accurate [80.55676599677824]
This work addresses the challenge of sub-pixel accuracy in detecting 2D local features.
We propose a novel network that enhances any detector with sub-pixel precision by learning an offset vector for detected features.
arXiv Detail & Related papers (2024-07-16T12:39:56Z)
- Beyond Classification: Definition and Density-based Estimation of Calibration in Object Detection [15.71719154574049]
We tackle the challenge of defining and estimating calibration error for deep neural networks (DNNs).
In particular, we adapt the definition of classification calibration error to handle the nuances associated with object detection.
We propose a consistent and differentiable estimator of the detection calibration error, utilizing kernel density estimation.
arXiv Detail & Related papers (2023-12-11T18:57:05Z)
- Cal-DETR: Calibrated Detection Transformer [67.75361289429013]
We propose a mechanism for calibrated detection transformers (Cal-DETR), particularly for Deformable-DETR, UP-DETR and DINO.
We develop an uncertainty-guided logit modulation mechanism that leverages the uncertainty to modulate the class logits.
Results corroborate the effectiveness of Cal-DETR against the competing train-time methods in calibrating both in-domain and out-domain detections.
arXiv Detail & Related papers (2023-11-06T22:13:10Z)
- Multiclass Confidence and Localization Calibration for Object Detection [4.119048608751183]
Deep neural networks (DNNs) tend to make overconfident predictions, rendering them poorly calibrated.
We propose a new train-time technique for calibrating modern object detection methods.
arXiv Detail & Related papers (2023-06-14T06:14:16Z)
- Bridging Precision and Confidence: A Train-Time Loss for Calibrating Object Detection [58.789823426981044]
We propose a novel auxiliary loss formulation that aims to align the class confidence of bounding boxes with the accuracy of predictions.
Our results reveal that our train-time loss surpasses strong calibration baselines in reducing calibration error for both in-domain and out-of-domain scenarios.
arXiv Detail & Related papers (2023-03-25T08:56:21Z)
- Confidence Calibration for Object Detection and Segmentation [6.700433100198165]
This chapter focuses on the investigation of confidence calibration for object detection and segmentation models.
We introduce the concept of multivariate confidence calibration, an extension of well-known calibration methods.
We show that object detection and instance segmentation models, in particular, are intrinsically miscalibrated.
arXiv Detail & Related papers (2022-02-25T15:59:51Z)
- Decoupled Adaptation for Cross-Domain Object Detection [69.5852335091519]
Cross-domain object detection is more challenging than object classification.
D-adapt achieves state-of-the-art results on four cross-domain object detection tasks.
arXiv Detail & Related papers (2021-10-06T08:43:59Z)
- Unsupervised Out-of-Domain Detection via Pre-trained Transformers [56.689635664358256]
Out-of-domain inputs can lead to unpredictable outputs and sometimes catastrophic safety issues.
Our work tackles the problem of detecting out-of-domain samples with only unsupervised in-domain data.
Two domain-specific fine-tuning approaches are further proposed to boost detection accuracy.
arXiv Detail & Related papers (2021-06-02T05:21:25Z)
- Multivariate Confidence Calibration for Object Detection [7.16879432974126]
We present a novel framework to measure and calibrate biased confidence estimates of object detection methods.
For the first time, our approach makes it possible to obtain calibrated confidence estimates with respect to image location and box scale.
We show that our developed methods outperform state-of-the-art calibration models for the task of object detection.
arXiv Detail & Related papers (2020-04-28T14:17:41Z)
- Intra Order-preserving Functions for Calibration of Multi-Class Neural Networks [54.23874144090228]
A common approach is to learn a post-hoc calibration function that transforms the output of the original network into calibrated confidence scores.
Previous post-hoc calibration techniques work only with simple calibration functions.
We propose a new neural network architecture that represents a class of intra order-preserving functions.
arXiv Detail & Related papers (2020-03-15T12:57:21Z)
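For context on the post-hoc setting referenced in the last entry, the snippet below sketches temperature scaling, the simplest post-hoc calibration function that richer families such as intra order-preserving networks generalize: a single temperature is fit on held-out logits and labels by minimizing the negative log-likelihood. This is a generic illustration, not the architecture proposed in that paper.

```python
# Minimal sketch of post-hoc calibration via temperature scaling: a scalar
# temperature T is fit on held-out logits so that softmax(logits / T) gives
# better-calibrated confidences without changing the predicted class.
import torch
import torch.nn.functional as F


def fit_temperature(logits: torch.Tensor, labels: torch.Tensor,
                    steps: int = 200, lr: float = 0.01) -> float:
    """Fit a scalar temperature on held-out (logits, labels) by minimizing NLL."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log(T) to keep T > 0
    opt = torch.optim.Adam([log_t], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = F.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        opt.step()
    return float(log_t.exp())


def calibrated_probs(logits: torch.Tensor, temperature: float) -> torch.Tensor:
    # Dividing logits by T > 1 softens overconfident predictions while
    # preserving the ranking of classes (an order-preserving transform).
    return F.softmax(logits / temperature, dim=-1)
```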