Deployment Prior Injection for Run-time Calibratable Object Detection
- URL: http://arxiv.org/abs/2402.17207v1
- Date: Tue, 27 Feb 2024 04:56:04 GMT
- Title: Deployment Prior Injection for Run-time Calibratable Object Detection
- Authors: Mo Zhou, Yiding Yang, Haoxiang Li, Vishal M. Patel, Gang Hua
- Abstract summary: We introduce an additional graph input to the detector, where the graph represents the deployment context prior.
During the test phase, any suitable deployment context prior can be injected into the detector via graph edits.
Even if the deployment prior is unknown, the detector can self-calibrate using a deployment prior approximated from its own predictions.
- Score: 58.636806402337776
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With a strong alignment between the training and test distributions, object
relation as a context prior facilitates object detection. Yet, it turns into a
harmful but inevitable training set bias upon test distributions that shift
differently across space and time. Nevertheless, existing detectors cannot
incorporate a deployment context prior during the test phase without a
parameter update. Such a capability requires the model to explicitly learn
representations disentangled with respect to the context prior. To achieve this, we
introduce an additional graph input to the detector, where the graph represents
the deployment context prior, and its edge values represent object relations.
Then, the detector's behavior is trained to be bound to the graph with a modified
training objective. As a result, during the test phase, any suitable deployment
context prior can be injected into the detector via graph edits, hence
calibrating, or "re-biasing" the detector towards the given prior at run-time
without a parameter update. Even if the deployment prior is unknown, the detector
can self-calibrate using a deployment prior approximated from its own
predictions. Comprehensive experimental results on the COCO dataset, as well as
cross-dataset testing on the Objects365 dataset, demonstrate the effectiveness
of the run-time calibratable detector.
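As a digest-level illustration (not the authors' implementation), the self-calibration step described above can be sketched as building a class co-occurrence graph from the detector's own predictions; the graph's edge values then serve as the deployment context prior, and a "graph edit" amounts to overwriting an edge before handing the graph to a (hypothetical) detector calibration interface:

```python
import numpy as np

def prior_graph_from_predictions(pred_labels_per_image, num_classes):
    """Approximate the deployment context prior as a class co-occurrence graph.

    Edge value graph[i, j] is the fraction of images in which classes i and j
    are both predicted to appear (a simple stand-in for 'object relations').
    """
    graph = np.zeros((num_classes, num_classes))
    for labels in pred_labels_per_image:
        present = sorted(set(labels))
        for i in present:
            for j in present:
                if i != j:
                    graph[i, j] += 1.0
    return graph / max(len(pred_labels_per_image), 1)

# Self-calibration: approximate the prior from the detector's own predictions.
predictions = [[0, 1], [0, 1, 2], [2]]   # predicted class ids per image
prior = prior_graph_from_predictions(predictions, num_classes=3)

# "Graph edit": inject known deployment knowledge, e.g. that classes 0 and 2
# never co-occur in the target environment.
prior[0, 2] = prior[2, 0] = 0.0
# detector.calibrate(prior)  # hypothetical run-time injection, no parameter update
```

The co-occurrence statistic and the `detector.calibrate` call are assumptions for illustration only; the paper's actual graph construction and injection mechanism are defined by its modified training objective.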
Related papers
- Label-Efficient Object Detection via Region Proposal Network Pre-Training [58.50615557874024]
We propose a simple pretext task that provides an effective pre-training for the region proposal network (RPN)
In comparison with multi-stage detectors without RPN pre-training, our approach is able to consistently improve downstream task performance.
arXiv Detail & Related papers (2022-11-16T16:28:18Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Conformal prediction for the design problem [72.14982816083297]
In many real-world deployments of machine learning, we use a prediction algorithm to choose what data to test next.
In such settings, there is a distinct type of distribution shift between the training and test data.
We introduce a method to quantify predictive uncertainty in such settings.
arXiv Detail & Related papers (2022-02-08T02:59:12Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
- Incorporating Data Uncertainty in Object Tracking Algorithms [2.3204178451683264]
Object tracking methods rely on measurement error models, typically in the form of measurement noise, false positive rates, and missed detection rates.
For detections generated from neural-network processed camera inputs, measurement error statistics are not sufficient to represent the primary source of errors.
We investigate incorporating data uncertainty into object tracking methods so as to improve the ability to track objects, particularly those which are out-of-distribution w.r.t. the training data.
arXiv Detail & Related papers (2021-09-22T05:30:46Z)
- DAAIN: Detection of Anomalous and Adversarial Input using Normalizing Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA)
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z)
- DAP: Detection-Aware Pre-training with Weak Supervision [37.336674323981285]
This paper presents a detection-aware pre-training (DAP) approach for object detection tasks.
We transform a classification dataset into a detection dataset through a weakly supervised object localization method based on Class Activation Maps.
We show that DAP can outperform the traditional classification pre-training in terms of both sample efficiency and convergence speed in downstream detection tasks including VOC and COCO.
arXiv Detail & Related papers (2021-01-13T12:53:54Z)
- Estimating and Evaluating Regression Predictive Uncertainty in Deep Object Detectors [9.273998041238224]
We show that training variance networks with negative log likelihood (NLL) can lead to high entropy predictive distributions.
We propose to use the energy score as a non-local proper scoring rule and find that when used for training, the energy score leads to better calibrated and lower entropy predictive distributions.
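For context, the energy score mentioned above is a standard proper scoring rule; a minimal sketch of its Monte Carlo estimate from samples of a predictive distribution, independent of any particular detector, is:

```python
import numpy as np

def energy_score(samples, y):
    """Monte Carlo estimate of the energy score for observation y.

    ES = E||X - y|| - 0.5 * E||X - X'||, with X, X' drawn independently
    from the predictive distribution; lower is better.
    """
    samples = np.asarray(samples, dtype=float)
    y = np.asarray(y, dtype=float)
    term1 = np.mean(np.linalg.norm(samples - y, axis=-1))
    pairwise = samples[:, None, :] - samples[None, :, :]
    term2 = 0.5 * np.mean(np.linalg.norm(pairwise, axis=-1))
    return term1 - term2
```

A predictive distribution concentrated near the true regression target receives a lower (better) score than an equally sharp one centered far away, which is the property that makes it usable as a training objective here.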
arXiv Detail & Related papers (2020-09-18T06:37:52Z)
- Per-frame mAP Prediction for Continuous Performance Monitoring of Object Detection During Deployment [6.166295570030645]
We propose an introspection approach to performance monitoring during deployment.
We do so by predicting when the per-frame mean average precision drops below a critical threshold.
We quantitatively evaluate and demonstrate our method's ability to reduce risk by trading off making an incorrect decision.
arXiv Detail & Related papers (2020-09-18T06:37:52Z)
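The learned per-frame mAP predictor itself is beyond a digest sketch, but assuming its predicted mAP values are available, the thresholded alarm rule described above can be illustrated with a hypothetical consecutive-frame requirement that trades false alarms against delayed warnings:

```python
def map_alarm(predicted_map, threshold=0.5, patience=3):
    """Fire an alarm once predicted per-frame mAP has stayed below
    `threshold` for `patience` consecutive frames."""
    alarms, below = [], 0
    for value in predicted_map:
        below = below + 1 if value < threshold else 0
        alarms.append(below >= patience)
    return alarms

print(map_alarm([0.8, 0.4, 0.4, 0.4, 0.9]))
# -> [False, False, False, True, False]
```

The `threshold` and `patience` parameters are illustrative assumptions, not values from the paper; raising `patience` suppresses spurious single-frame alarms at the cost of reacting later to genuine performance drops.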
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.