Unsupervised Anomaly Detection with Rejection
- URL: http://arxiv.org/abs/2305.13189v2
- Date: Tue, 17 Oct 2023 20:25:05 GMT
- Title: Unsupervised Anomaly Detection with Rejection
- Authors: Lorenzo Perini, Jesse Davis
- Abstract summary: Anomaly detectors learn a decision boundary by employing heuristics based on intuitions, which are hard to verify in practice.
A way to combat this is to allow the detector to reject examples with high uncertainty.
This requires a confidence metric that captures the distance to the decision boundary, together with a rejection threshold for rejecting low-confidence predictions.
- Score: 19.136286864839846
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Anomaly detection aims at detecting unexpected behaviours in the data.
Because anomaly detection is usually an unsupervised task, traditional anomaly
detectors learn a decision boundary by employing heuristics based on
intuitions, which are hard to verify in practice. This introduces some
uncertainty, especially close to the decision boundary, that may reduce the
user's trust in the detector's predictions. A way to combat this is by allowing
the detector to reject examples with high uncertainty (Learning to Reject).
This requires employing a confidence metric that captures the distance to the
decision boundary and setting a rejection threshold to reject low-confidence
predictions. However, selecting a proper metric and setting the rejection
threshold without labels are challenging tasks. In this paper, we solve these
challenges by setting a constant rejection threshold on the stability metric
computed by ExCeeD. Our insight relies on a theoretical analysis of such a
metric. Moreover, setting a constant threshold results in strong guarantees: we
estimate the test rejection rate, and derive a theoretical upper bound for both
the rejection rate and the expected prediction cost. Experimentally, we show
that our method outperforms some metric-based methods.
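To make the mechanism concrete, the sketch below implements a simplified, ExCeeD-style stability confidence and a constant rejection threshold. It is an illustration rather than the authors' code: the Laplace-smoothed frequency estimate, the contamination value `contamination`, and the threshold `tau = 0.9` are assumptions chosen for readability, whereas the paper derives a principled constant threshold with the guarantees quoted above.

```python
import numpy as np
from scipy.stats import binom
from sklearn.ensemble import IsolationForest

def stability_confidence(train_scores, test_scores, contamination=0.1):
    """Simplified, ExCeeD-style stability estimate (illustrative only).

    For each test score s, estimate p = P(training score <= s) with a
    Laplace-smoothed frequency, then compute the probability that a
    Binomial(n, p) draw reaches the anomaly cutoff rank n * (1 - gamma).
    The stability of the assigned label is max(conf, 1 - conf).
    """
    n = len(train_scores)
    counts = np.searchsorted(np.sort(train_scores), test_scores)
    p = (1 + counts) / (n + 2)                  # smoothed CDF estimate
    cutoff = int(np.ceil(n * (1 - contamination)))
    conf_anomaly = binom.sf(cutoff - 1, n, p)   # P(Binom(n, p) >= cutoff)
    return np.maximum(conf_anomaly, 1 - conf_anomaly)

rng = np.random.default_rng(0)
X_train = rng.normal(size=(500, 2))
X_test = np.vstack([rng.normal(size=(95, 2)),        # inliers
                    rng.uniform(4, 6, size=(5, 2))]) # clear anomalies

detector = IsolationForest(random_state=0).fit(X_train)
# sklearn's score_samples is higher for inliers, so negate it.
s_train = -detector.score_samples(X_train)
s_test = -detector.score_samples(X_test)

stability = stability_confidence(s_train, s_test, contamination=0.1)
tau = 0.9                    # constant threshold, illustrative value only
rejected = stability < tau   # abstain on low-stability predictions
print(f"rejected {rejected.sum()} of {len(s_test)} test examples")
```

With a constant threshold, one natural estimate of the test rejection rate, in the spirit of the guarantee described in the abstract, is the fraction of training (or held-out) examples whose stability falls below `tau`.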
Related papers
- Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection [1.8990839669542954]
We propose a cost-sensitive framework for object detection tailored to user-defined budgets.
We derive minimum thresholding requirements to prevent performance degradation.
We automate and optimize the thresholding process to maximize the failure recognition rate.
arXiv Detail & Related papers (2024-04-26T14:03:55Z)
- Revisiting Confidence Estimation: Towards Reliable Failure Prediction [53.79160907725975]
We identify a general, widespread, yet largely neglected phenomenon: most confidence estimation methods are harmful for detecting misclassification errors.
We propose to enlarge the confidence gap by finding flat minima, which yields state-of-the-art failure prediction performance.
arXiv Detail & Related papers (2024-03-05T11:44:14Z)
- One step closer to unbiased aleatoric uncertainty estimation [71.55174353766289]
We propose a new estimation method by actively de-noising the observed data.
By conducting a broad range of experiments, we demonstrate that our proposed approach provides a much closer approximation to the actual data uncertainty than the standard method.
arXiv Detail & Related papers (2023-12-16T14:59:11Z)
- The Implicit Delta Method [61.36121543728134]
In this paper, we propose an alternative, the implicit delta method, which works by infinitesimally regularizing the training loss of uncertainty.
We show that the change in the evaluation due to regularization is consistent for the variance of the evaluation estimator, even when the infinitesimal change is approximated by a finite difference.
arXiv Detail & Related papers (2022-11-11T19:34:17Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Holistic Approach to Measure Sample-level Adversarial Vulnerability and its Utility in Building Trustworthy Systems [17.707594255626216]
An adversarial attack perturbs an image with imperceptible noise, leading to an incorrect model prediction.
We propose a holistic approach for quantifying adversarial vulnerability of a sample by combining different perspectives.
We demonstrate that by reliably estimating adversarial vulnerability at the sample level, it is possible to develop a trustworthy system.
arXiv Detail & Related papers (2022-05-05T12:36:17Z)
- Trajectory Forecasting from Detection with Uncertainty-Aware Motion Encoding [121.66374635092097]
Trajectories obtained from object detection and tracking are inevitably noisy.
We propose a trajectory predictor directly based on detection results without relying on explicitly formed trajectories.
arXiv Detail & Related papers (2022-02-03T09:09:56Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
- Gradient-Based Quantification of Epistemic Uncertainty for Deep Object Detectors [8.029049649310213]
We introduce novel gradient-based uncertainty metrics and investigate them for different object detection architectures.
Experiments show significant improvements in true positive / false positive discrimination and prediction of intersection over union.
We also find improvement over Monte-Carlo dropout uncertainty metrics and further significant boosts by aggregating different sources of uncertainty metrics.
arXiv Detail & Related papers (2021-07-09T16:04:11Z)
- Learning Uncertainty For Safety-Oriented Semantic Segmentation In Autonomous Driving [77.39239190539871]
We show how uncertainty estimation can be leveraged to enable safety critical image segmentation in autonomous driving.
We introduce a new uncertainty measure based on disagreeing predictions as measured by a dissimilarity function.
We show experimentally that our proposed approach is much less computationally intensive at inference time than competing methods.
arXiv Detail & Related papers (2021-05-28T09:23:05Z)
- Labels Are Not Perfect: Improving Probabilistic Object Detection via Label Uncertainty [12.531126969367774]
We leverage our previously proposed method for estimating uncertainty inherent in ground truth bounding box parameters.
Experimental results on the KITTI dataset show that our method surpasses both the baseline model and models based on simple heuristics by up to 3.6% in terms of Average Precision (see the sketch after this entry).
arXiv Detail & Related papers (2020-08-10T14:49:49Z)
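As a concrete illustration of the idea in the last entry, the hedged sketch below folds an estimated ground-truth label variance into a heteroscedastic Gaussian regression loss, so box coordinates with noisier labels are down-weighted. The loss form is a standard uncertainty-aware negative log-likelihood, not the paper's exact formulation; `label_var` is a hypothetical stand-in for the output of a label-uncertainty estimator like the one the entry refers to.

```python
import torch

def label_uncertainty_nll(pred_boxes, gt_boxes, label_var):
    """Heteroscedastic Gaussian NLL over box coordinates (sketch).

    pred_boxes, gt_boxes, label_var: tensors of shape (N, 4).
    Coordinates with larger estimated label variance contribute less
    to the squared-error term but pay a log-variance penalty.
    """
    sq_err = (pred_boxes - gt_boxes) ** 2
    return (sq_err / (2 * label_var) + 0.5 * torch.log(label_var)).mean()

# Toy usage: one predicted box vs. its (noisy) ground-truth label.
pred = torch.tensor([[10.0, 10.0, 50.0, 50.0]])
gt = torch.tensor([[12.0, 9.0, 49.0, 52.0]])
var = torch.tensor([[1.0, 1.0, 4.0, 4.0]])  # noisier width/height labels
print(label_uncertainty_nll(pred, gt, var))
```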
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences arising from its use.