Per-frame mAP Prediction for Continuous Performance Monitoring of Object
Detection During Deployment
- URL: http://arxiv.org/abs/2009.08650v2
- Date: Mon, 16 Nov 2020 07:11:29 GMT
- Title: Per-frame mAP Prediction for Continuous Performance Monitoring of Object
Detection During Deployment
- Authors: Quazi Marufur Rahman, Niko Sünderhauf, and Feras Dayoub
- Abstract summary: We propose an introspection approach to performance monitoring during deployment.
We do so by predicting when the per-frame mean average precision drops below a critical threshold.
We quantitatively evaluate and demonstrate our method's ability to reduce risk by trading off incorrect detections against raising an alarm and abstaining from detection.
- Score: 6.166295570030645
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Performance monitoring of object detection is crucial for safety-critical
applications such as autonomous vehicles that operate under varying and complex
environmental conditions. Currently, object detectors are evaluated using
summary metrics based on a single dataset that is assumed to be representative
of all future deployment conditions. In practice, this assumption does not
hold, and the performance fluctuates as a function of the deployment
conditions. To address this issue, we propose an introspection approach to
performance monitoring during deployment without the need for ground truth
data. We do so by predicting when the per-frame mean average precision drops
below a critical threshold using the detector's internal features. We
quantitatively evaluate and demonstrate our method's ability to reduce risk by
trading off incorrect detections against raising an alarm and abstaining from
detection.
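A minimal sketch of this introspection idea follows, assuming PyTorch; the pooled-feature input, layer sizes, and 0.5 alarm cut-off are illustrative placeholders, not the paper's exact architecture:

    # Binary introspection head: flag frames whose per-frame mAP is predicted
    # to fall below the critical threshold, using the detector's features.
    import torch
    import torch.nn as nn

    class MAPDropPredictor(nn.Module):
        def __init__(self, feat_dim: int = 1024):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(feat_dim, 256), nn.ReLU(),
                nn.Linear(256, 1),  # logit for "per-frame mAP is below threshold"
            )

        def forward(self, pooled_feats: torch.Tensor) -> torch.Tensor:
            return self.net(pooled_feats)

    predictor = MAPDropPredictor()
    feats = torch.randn(1, 1024)  # stand-in for the detector's internal features
    raise_alarm = torch.sigmoid(predictor(feats)).item() > 0.5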
Related papers
- Deployment Prior Injection for Run-time Calibratable Object Detection [58.636806402337776]
We introduce an additional graph input to the detector, where the graph represents the deployment context prior.
During the test phase, any suitable deployment context prior can be injected into the detector via graph edits.
Even if the deployment prior is unknown, the detector can self-calibrate using deployment prior approximated using its own predictions.
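The paper injects the prior as a graph via graph edits; as a much looser stand-in for illustration only, the sketch below biases per-detection class probabilities with a deployment class-frequency prior and, when that prior is unknown, approximates it from the detector's own predictions (all names hypothetical):

    import numpy as np

    def apply_deployment_prior(class_probs: np.ndarray, prior: np.ndarray) -> np.ndarray:
        # Reweight each detection's class probabilities by the deployment prior.
        adjusted = class_probs * prior[None, :]
        return adjusted / adjusted.sum(axis=1, keepdims=True)

    def self_calibrated_prior(class_probs: np.ndarray) -> np.ndarray:
        # Approximate the unknown prior from the detector's own predictions.
        n_cls = class_probs.shape[1]
        counts = np.bincount(class_probs.argmax(axis=1), minlength=n_cls)
        return (counts + 1) / (counts.sum() + n_cls)  # Laplace-smoothed estimate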
arXiv Detail & Related papers (2024-02-27T04:56:04Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
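Evaluations of this kind typically report expected calibration error (ECE); a standard binned computation, with a common default of 10 bins:

    import numpy as np

    def expected_calibration_error(confidences, correct, n_bins: int = 10) -> float:
        # Per-bin gap between accuracy and mean confidence, weighted by bin occupancy.
        confidences, correct = np.asarray(confidences), np.asarray(correct)
        edges = np.linspace(0.0, 1.0, n_bins + 1)
        ece = 0.0
        for lo, hi in zip(edges[:-1], edges[1:]):
            mask = (confidences > lo) & (confidences <= hi)
            if mask.any():
                ece += mask.mean() * abs(correct[mask].mean() - confidences[mask].mean())
        return float(ece)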
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Object Detection as Probabilistic Set Prediction [3.7599363231894176]
We present a proper scoring rule for evaluating and training probabilistic object detectors.
Our results indicate that the training of existing detectors is optimized toward non-probabilistic metrics.
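Negative log-likelihood is one example of a proper scoring rule; a sketch for a detector that outputs an independent Gaussian (mean, variance) per box coordinate, with shapes and names illustrative rather than the paper's full set-prediction formulation:

    import numpy as np

    def gaussian_box_nll(mu, var, gt_box) -> float:
        # NLL of a ground-truth box under per-coordinate Gaussian predictions.
        mu, var, gt_box = map(np.asarray, (mu, var, gt_box))
        return float(0.5 * np.sum(np.log(2 * np.pi * var) + (gt_box - mu) ** 2 / var))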
arXiv Detail & Related papers (2022-03-15T15:13:52Z)
- Tracking the risk of a deployed model and detecting harmful distribution shifts [105.27463615756733]
In practice, it may make sense to ignore benign shifts, under which the performance of a deployed model does not degrade substantially.
We argue that a sensible method for firing off a warning has to both (a) detect harmful shifts while ignoring benign ones, and (b) allow continuous monitoring of model performance without increasing the false alarm rate.
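As a simplified sketch of the warning logic this motivates: flag a shift only when a rolling risk estimate exceeds the deployment baseline by a harmfulness margin, so benign shifts fire no alarms. The paper's actual tool uses sequential confidence bounds rather than a fixed margin; window size and margin below are placeholders.

    from collections import deque

    class RiskMonitor:
        def __init__(self, baseline_risk: float, margin: float, window: int = 500):
            self.losses = deque(maxlen=window)
            self.threshold = baseline_risk + margin

        def update(self, loss: float) -> bool:
            # Record one per-example loss; True means a harmful shift is flagged.
            self.losses.append(loss)
            return sum(self.losses) / len(self.losses) > self.threshold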
arXiv Detail & Related papers (2021-10-12T17:21:41Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
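A sketch of the output contract this describes, with one uncertainty per signal; field names and the gating rule are illustrative, not CertainNet's API:

    from dataclasses import dataclass

    @dataclass
    class DetectionWithUncertainty:
        box: tuple            # (cx, cy, w, h)
        label: int
        objectness_unc: float
        class_unc: float
        location_unc: float
        size_unc: float

        def trustworthy(self, max_unc: float = 0.3) -> bool:
            # Gate a detection on the worst of its per-signal uncertainties.
            return max(self.objectness_unc, self.class_unc,
                       self.location_unc, self.size_unc) <= max_unc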
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
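A sketch of disagreement-based uncertainty in this spirit: mean pairwise dissimilarity over a handful of predictions, with per-pixel label disagreement standing in for the paper's dissimilarity function:

    from itertools import combinations
    import numpy as np

    def disagreement_uncertainty(predictions) -> float:
        # Mean pairwise dissimilarity between label maps of identical shape.
        pairs = list(combinations(predictions, 2))
        return float(np.mean([np.mean(a != b) for a, b in pairs]))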
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- WSSOD: A New Pipeline for Weakly- and Semi-Supervised Object Detection [75.80075054706079]
We propose a weakly- and semi-supervised object detection framework (WSSOD).
An agent detector is first trained on a joint dataset and then used to predict pseudo bounding boxes on weakly-annotated images.
The proposed framework demonstrates remarkable performance on the PASCAL-VOC and MSCOCO benchmarks, comparable to that obtained in fully-supervised settings.
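A sketch of the pseudo-labeling step just described; the detector interface (returning (box, label, score) triples) and the 0.8 threshold are placeholders:

    def pseudo_label(agent_detector, images, score_thresh: float = 0.8):
        # Keep the agent detector's confident boxes as training targets.
        pseudo_boxes = []
        for img in images:
            dets = agent_detector(img)
            pseudo_boxes.append([(box, label) for box, label, score in dets
                                 if score >= score_thresh])
        return pseudo_boxes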
arXiv Detail & Related papers (2021-05-21T11:58:50Z)
- Robust Object Detection via Instance-Level Temporal Cycle Confusion [89.1027433760578]
We study the effectiveness of auxiliary self-supervised tasks to improve the out-of-distribution generalization of object detectors.
Inspired by the principle of maximum entropy, we introduce a novel self-supervised task, instance-level temporal cycle confusion (CycConf).
For each object, the task is to find the most different object proposals in the adjacent frame in a video and then cycle back to itself for self-supervision.
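A rough sketch of that cycle: pair each proposal with its most *different* proposal in the adjacent frame, map back, and check whether the cycle returns to the starting instance. Cosine similarity stands in for the learned embedding space; CycConf's actual training loss differs.

    import numpy as np

    def cycle_closes(feats_t: np.ndarray, feats_t1: np.ndarray) -> np.ndarray:
        a = feats_t / np.linalg.norm(feats_t, axis=1, keepdims=True)
        b = feats_t1 / np.linalg.norm(feats_t1, axis=1, keepdims=True)
        sim = a @ b.T
        fwd = sim.argmin(axis=1)     # most different proposal in the next frame
        back = sim.T.argmin(axis=1)  # most different proposal going back
        return back[fwd] == np.arange(len(feats_t))  # True where the cycle closes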
arXiv Detail & Related papers (2021-04-16T21:35:08Z)
- Online Monitoring of Object Detection Performance During Deployment [6.166295570030645]
We introduce a cascaded neural network that monitors the performance of the object detector by predicting the quality of its mean average precision (mAP) on a sliding window of the input frames.
We evaluate our proposed approach using different combinations of autonomous driving datasets and object detectors.
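A sketch of the sliding-window idea: buffer per-frame detector features and score the full window. The window length, feature input, and classifier stand in for the paper's cascaded network.

    from collections import deque
    import numpy as np

    class SlidingWindowMonitor:
        def __init__(self, classifier, window: int = 10):
            self.buffer = deque(maxlen=window)
            self.classifier = classifier  # assumed: stacked features -> mAP quality

        def step(self, frame_features: np.ndarray):
            self.buffer.append(frame_features)
            if len(self.buffer) == self.buffer.maxlen:
                return self.classifier(np.stack(self.buffer))
            return None  # window not yet full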
arXiv Detail & Related papers (2020-11-16T07:01:43Z)
- Sequential Anomaly Detection using Inverse Reinforcement Learning [23.554584457413483]
We propose an end-to-end framework for sequential anomaly detection using inverse reinforcement learning (IRL).
We use a neural network to represent the reward function; once learned, it lets us evaluate whether a new observation from the target agent follows a normal pattern.
The empirical study on publicly available real-world data shows that our proposed method is effective in identifying anomalies.
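A sketch of the resulting detection rule, assuming PyTorch: score an observation with the learned reward network and flag it when the reward falls below a threshold calibrated as a low quantile of rewards on known-normal data. Network sizes and the 5% quantile are placeholders.

    import torch
    import torch.nn as nn

    reward_net = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 1))

    with torch.no_grad():
        normal_obs = torch.randn(500, 16)  # stand-in for known-normal observations
        threshold = reward_net(normal_obs).squeeze(1).quantile(0.05).item()

    def is_anomalous(observation: torch.Tensor) -> bool:
        with torch.no_grad():
            return reward_net(observation).item() < threshold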
arXiv Detail & Related papers (2020-04-22T05:17:36Z)