Bayesian autoencoders with uncertainty quantification: Towards
trustworthy anomaly detection
- URL: http://arxiv.org/abs/2202.12653v1
- Date: Fri, 25 Feb 2022 12:20:04 GMT
- Title: Bayesian autoencoders with uncertainty quantification: Towards
trustworthy anomaly detection
- Authors: Bang Xiang Yong, Alexandra Brintrup
- Abstract summary: In this work, the formulation of Bayesian autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty.
To evaluate the quality of uncertainty, we consider the task of classifying anomalies with the additional option of rejecting predictions of high uncertainty.
Our experiments demonstrate the effectiveness of the BAE and total anomaly uncertainty on a set of benchmark datasets and two real datasets for manufacturing.
- Score: 78.24964622317634
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Despite numerous studies of deep autoencoders (AEs) for unsupervised anomaly
detection, AEs still lack a way to express uncertainty in their predictions,
crucial for ensuring safe and trustworthy machine learning systems in
high-stake applications. Therefore, in this work, the formulation of Bayesian
autoencoders (BAEs) is adopted to quantify the total anomaly uncertainty,
comprising epistemic and aleatoric uncertainties. To evaluate the quality of
uncertainty, we consider the task of classifying anomalies with the additional
option of rejecting predictions of high uncertainty. In addition, we use the
accuracy-rejection curve and propose the weighted average accuracy as a
performance metric. Our experiments demonstrate the effectiveness of the BAE
and total anomaly uncertainty on a set of benchmark datasets and two real
datasets for manufacturing: one for condition monitoring, the other for quality
inspection.
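The decomposition above (total anomaly uncertainty = epistemic + aleatoric, evaluated via an accuracy-rejection curve) can be sketched with a toy ensemble. Everything here is illustrative, not the authors' code: the "posterior samples" are hand-made linear autoencoders, the aleatoric proxy is a simple within-member error spread, and the paper's weighted average accuracy (whose exact weighting is not given in this summary) is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a trained BAE: M posterior samples of a linear
# autoencoder, each reconstructing inputs slightly differently.
M, d, n = 5, 8, 100
X = rng.normal(size=(n, d))
W = [np.eye(d) + 0.05 * rng.normal(size=(d, d)) for _ in range(M)]

# Per-member anomaly score: mean squared reconstruction error.
err = np.stack([(X - X @ Wi) ** 2 for Wi in W])   # (M, n, d)
scores = err.mean(axis=2)                          # (M, n)

anomaly_score = scores.mean(axis=0)                # ensemble anomaly score
epistemic = scores.var(axis=0)                     # disagreement across posterior samples
aleatoric = err.var(axis=2).mean(axis=0)           # within-member spread (illustrative proxy)
total_uncertainty = epistemic + aleatoric          # "total anomaly uncertainty"

def accuracy_rejection_curve(correct, uncertainty, fractions):
    """Accuracy on the predictions kept after rejecting the most
    uncertain fraction of them."""
    order = np.argsort(uncertainty)                # most certain first
    accs = []
    for f in fractions:
        keep = order[: max(1, int(round((1 - f) * len(order))))]
        accs.append(correct[keep].mean())
    return np.array(accs)

# Dummy correctness labels, just to exercise the curve.
correct = (anomaly_score < np.median(anomaly_score)).astype(float)
arc = accuracy_rejection_curve(correct, total_uncertainty, [0.0, 0.25, 0.5])
```

If uncertainty is well calibrated, the curve should rise as more high-uncertainty predictions are rejected; a single scalar summary of it (such as the paper's weighted average accuracy) then compares models.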
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Predicting Safety Misbehaviours in Autonomous Driving Systems using Uncertainty Quantification [8.213390074932132]
This paper evaluates different uncertainty quantification methods from the deep learning domain for the anticipatory testing of safety-critical misbehaviours.
We compute uncertainty scores as the vehicle executes, following the intuition that high uncertainty scores are indicative of unsupported runtime conditions.
We evaluate the effectiveness and computational overhead of two uncertainty quantification methods, MC-Dropout and Deep Ensembles, for misbehaviour avoidance.
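The runtime scoring idea above can be sketched as follows. This is a hypothetical stand-in, not the paper's implementation: dropout at inference is emulated by noisy logits (Deep Ensembles would instead average separately trained models), and the threshold is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

# Hypothetical stochastic model: MC-Dropout's random forward passes
# are emulated here by adding noise to fixed logits.
def noisy_logits(x):
    return x + rng.normal(scale=0.5, size=x.shape)

def predictive_entropy(prob_samples):
    """Uncertainty score from T stochastic passes: entropy of the
    averaged class distribution."""
    mean_p = prob_samples.mean(axis=0)
    return -(mean_p * np.log(mean_p + 1e-12)).sum(axis=-1)

x = rng.normal(size=(4, 3))                  # 4 driving frames, 3 classes
samples = np.stack([softmax(noisy_logits(x)) for _ in range(20)])
scores = predictive_entropy(samples)         # high => unsupported runtime conditions
alarms = scores > 0.8 * np.log(3)            # illustrative alarm threshold
```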
arXiv Detail & Related papers (2024-04-29T10:28:28Z)
- Cost-Sensitive Uncertainty-Based Failure Recognition for Object Detection [1.8990839669542954]
We propose a cost-sensitive framework for object detection tailored to user-defined budgets.
We derive minimum thresholding requirements to prevent performance degradation.
We automate and optimize the thresholding process to maximize the failure recognition rate.
arXiv Detail & Related papers (2024-04-26T14:03:55Z)
- Decomposing Uncertainty for Large Language Models through Input Clarification Ensembling [69.83976050879318]
In large language models (LLMs), identifying sources of uncertainty is an important step toward improving reliability, trustworthiness, and interpretability.
In this paper, we introduce an uncertainty decomposition framework for LLMs, called input clarification ensembling.
Our approach generates a set of clarifications for the input, feeds them into an LLM, and ensembles the corresponding predictions.
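The clarify-then-ensemble loop can be sketched with stubs. The "LLM" and both clarifications are hand-written placeholders, and the decomposition shown is the standard mutual-information split, which may differ from the paper's exact formulation.

```python
import numpy as np

# Stub "LLM" returning a class distribution per prompt; purely
# illustrative — a real system would call an actual model.
def llm_predict(prompt):
    return np.array([0.6, 0.4]) if "short" in prompt else np.array([0.9, 0.1])

def clarify(prompt):
    # A real system would ask a model to rewrite the ambiguous input
    # into several fully specified variants.
    return [prompt + " (assume a short text)", prompt + " (assume a long text)"]

def entropy(p):
    return -(p * np.log(p + 1e-12)).sum(axis=-1)

probs = np.stack([llm_predict(c) for c in clarify("Classify this review")])
ensemble = probs.mean(axis=0)                # ensembled prediction

# Entropy remaining after clarification ~ model-side uncertainty;
# the extra entropy of the ensemble ~ uncertainty from input ambiguity.
model_uncertainty = entropy(probs).mean()
input_ambiguity = entropy(ensemble) - model_uncertainty
```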
arXiv Detail & Related papers (2023-11-15T05:58:35Z)
- Adaptive Uncertainty Estimation via High-Dimensional Testing on Latent Representations [28.875819909902244]
Uncertainty estimation aims to evaluate the confidence of a trained deep neural network.
Existing uncertainty estimation approaches rely on low-dimensional distributional assumptions.
We propose a new framework using data-adaptive high-dimensional hypothesis testing for uncertainty estimation.
arXiv Detail & Related papers (2023-10-25T12:22:18Z)
- Discretization-Induced Dirichlet Posterior for Robust Uncertainty Quantification on Regression [17.49026509916207]
Uncertainty quantification is critical for deploying deep neural networks (DNNs) in real-world applications.
For vision regression tasks, current AuxUE designs are mainly adopted for aleatoric uncertainty estimates.
We propose a generalized AuxUE scheme for more robust uncertainty quantification on regression tasks.
arXiv Detail & Related papers (2023-08-17T15:54:11Z)
- Towards Reliable Medical Image Segmentation by utilizing Evidential Calibrated Uncertainty [52.03490691733464]
We introduce DEviS, an easily implementable foundational model that seamlessly integrates into various medical image segmentation networks.
By leveraging subjective logic theory, we explicitly model probability and uncertainty for the problem of medical image segmentation.
DEviS incorporates an uncertainty-aware filtering module, which uses the metric of uncertainty-calibrated error to filter reliable data.
arXiv Detail & Related papers (2023-01-01T05:02:46Z)
- A Review of Uncertainty Calibration in Pretrained Object Detectors [5.440028715314566]
We investigate the uncertainty calibration properties of different pretrained object detection architectures in a multi-class setting.
We propose a framework to ensure a fair, unbiased, and repeatable evaluation.
We deliver novel insights into why poor detector calibration emerges.
arXiv Detail & Related papers (2022-10-06T14:06:36Z)
- Dense Uncertainty Estimation via an Ensemble-based Conditional Latent Variable Model [68.34559610536614]
We argue that the aleatoric uncertainty is an inherent attribute of the data and can only be correctly estimated with an unbiased oracle model.
We propose a new sampling and selection strategy at train time to approximate the oracle model for aleatoric uncertainty estimation.
Our results show that our solution achieves both accurate deterministic results and reliable uncertainty estimation.
arXiv Detail & Related papers (2021-11-22T08:54:10Z)
- CertainNet: Sampling-free Uncertainty Estimation for Object Detection [65.28989536741658]
Estimating the uncertainty of a neural network plays a fundamental role in safety-critical settings.
In this work, we propose a novel sampling-free uncertainty estimation method for object detection.
We call it CertainNet, and it is the first to provide separate uncertainties for each output signal: objectness, class, location and size.
arXiv Detail & Related papers (2021-10-04T17:59:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.