A Benchmark on Uncertainty Quantification for Deep Learning Prognostics
- URL: http://arxiv.org/abs/2302.04730v1
- Date: Thu, 9 Feb 2023 16:12:47 GMT
- Title: A Benchmark on Uncertainty Quantification for Deep Learning Prognostics
- Authors: Luis Basora, Arthur Viens, Manuel Arias Chao, Xavier Olive
- Abstract summary: We assess some of the latest developments in the field of uncertainty quantification for deep learning prognostics.
This includes state-of-the-art variational inference algorithms for Bayesian neural networks (BNN), as well as popular alternatives such as Monte Carlo Dropout (MCD), deep ensembles (DE) and heteroscedastic neural networks (HNN).
The performance of the methods is evaluated on a subset of the large NASA N-CMAPSS dataset for aircraft engines.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Reliable uncertainty quantification on RUL prediction is crucial for
informative decision-making in predictive maintenance. In this context, we
assess some of the latest developments in the field of uncertainty
quantification for deep learning prognostics. This includes the
state-of-the-art variational inference algorithms for Bayesian neural networks
(BNN) as well as popular alternatives such as Monte Carlo Dropout (MCD), deep
ensembles (DE) and heteroscedastic neural networks (HNN). All the inference
techniques share the same inception deep learning architecture as a functional
model. We performed a hyperparameter search to optimize the main variational
and learning parameters of the algorithms. The performance of the methods is
evaluated on a subset of the large NASA N-CMAPSS dataset for aircraft engines.
The assessment includes RUL prediction accuracy, the quality of the predictive
uncertainty, and the possibility of breaking down the total predictive
uncertainty into its aleatoric and epistemic parts. The results show that no
method clearly outperforms the others in all situations. Although all methods
are close in terms of accuracy, we find differences in the way they estimate
uncertainty. In particular, DE and MCD generally provide more conservative
predictive uncertainty
than BNN. Surprisingly, HNN can achieve strong results without the added
training complexity and extra parameters of the BNN. For tasks like active
learning where a separation of epistemic and aleatoric uncertainty is required,
radial BNN and MCD seem the best options.
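To make the uncertainty decomposition concrete, here is a minimal, hedged sketch of Monte Carlo Dropout combined with a heteroscedastic output head: the total predictive variance splits into an aleatoric part (the average predicted noise variance) and an epistemic part (the variance of the predicted means across stochastic forward passes). The network, layer sizes and names (`RULNet`, `mc_dropout_predict`) are illustrative assumptions, not the paper's Inception architecture.

```python
# A minimal sketch (not the paper's Inception model) of Monte Carlo Dropout
# with a heteroscedastic head, plus the standard decomposition of predictive
# variance into aleatoric and epistemic parts. Names and sizes are assumptions.
import torch
import torch.nn as nn

class RULNet(nn.Module):
    """Small RUL regressor with dropout and a (mean, log-variance) output head."""
    def __init__(self, n_features: int, hidden: int = 64, p_drop: float = 0.1):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(), nn.Dropout(p_drop),
            nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p_drop),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # log of the aleatoric variance

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def mc_dropout_predict(model: nn.Module, x: torch.Tensor, n_samples: int = 50):
    """Run T stochastic forward passes with dropout left on, then decompose."""
    model.train()  # keep dropout active at prediction time (the MCD trick)
    with torch.no_grad():
        mus, sig2s = [], []
        for _ in range(n_samples):
            mu, logvar = model(x)
            mus.append(mu)
            sig2s.append(logvar.exp())
        mu = torch.stack(mus)        # (T, N, 1) predicted means
        sig2 = torch.stack(sig2s)    # (T, N, 1) predicted noise variances
    aleatoric = sig2.mean(dim=0)                 # E_T[sigma^2(x)]
    epistemic = mu.var(dim=0, unbiased=False)    # Var_T[mu(x)]
    return mu.mean(dim=0), aleatoric, epistemic  # total = aleatoric + epistemic
```

Training such a head would minimize the Gaussian negative log-likelihood, e.g. `(0.5 * (logvar + (y - mu).pow(2) / logvar.exp())).mean()`. With dropout disabled and a single forward pass, the same network reduces to a plain heteroscedastic model (HNN); replacing the dropout sampling with independently trained copies gives a deep ensemble (DE).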
Related papers
- Learning Solutions of Stochastic Optimization Problems with Bayesian Neural Networks [4.202961704179733]
In many real-world settings, some of the problem parameters are unknown or uncertain.
Recent research focuses on predicting the value of unknown parameters using available contextual features.
We propose a novel framework that models uncertainty with Bayesian Neural Networks (BNNs) and propagates this uncertainty into the mathematical solver.
arXiv Detail & Related papers (2024-06-05T09:11:46Z)
- Uncertainty Quantification in Multivariable Regression for Material Property Prediction with Bayesian Neural Networks [37.69303106863453]
We introduce an approach for uncertainty quantification (UQ) within physics-informed BNNs.
We present case studies for predicting the creep rupture life of steel alloys.
The most promising framework for creep life prediction is BNNs based on Markov Chain Monte Carlo approximation of the posterior distribution of network parameters.
arXiv Detail & Related papers (2023-11-04T19:40:16Z)
- Density Regression and Uncertainty Quantification with Bayesian Deep Noise Neural Networks [4.376565880192482]
Deep neural network (DNN) models have achieved state-of-the-art predictive accuracy in a wide range of supervised learning applications.
However, accurately quantifying the uncertainty in DNN predictions remains a challenging task.
We propose the Bayesian Deep Noise Neural Network (B-DeepNoise), which generalizes standard Bayesian DNNs by extending the random noise variable to all hidden layers.
We evaluate B-DeepNoise against existing methods on benchmark regression datasets, demonstrating its superior performance in terms of prediction accuracy, uncertainty quantification accuracy, and uncertainty quantification efficiency.
arXiv Detail & Related papers (2022-06-12T02:47:29Z)
- The Unreasonable Effectiveness of Deep Evidential Regression [72.30888739450343]
A new approach with uncertainty-aware regression-based neural networks (NNs) shows promise over traditional deterministic methods and typical Bayesian NNs.
We detail the theoretical shortcomings and analyze the performance on synthetic and real-world data sets, showing that Deep Evidential Regression is a heuristic rather than an exact uncertainty quantification.
arXiv Detail & Related papers (2022-05-20T10:10:32Z)
- NUQ: Nonparametric Uncertainty Quantification for Deterministic Neural Networks [151.03112356092575]
We show a principled way to measure the uncertainty of predictions for a classifier, based on Nadaraya-Watson's nonparametric estimate of the conditional label distribution (see the sketch after this list).
We demonstrate the strong performance of the method in uncertainty estimation tasks on a variety of real-world image datasets.
arXiv Detail & Related papers (2022-02-07T12:30:45Z)
- Robustification of Online Graph Exploration Methods [59.50307752165016]
We study a learning-augmented variant of the classical, notoriously hard online graph exploration problem.
We propose an algorithm that naturally integrates predictions into the well-known Nearest Neighbor (NN) algorithm.
arXiv Detail & Related papers (2021-12-10T10:02:31Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when using them in fully-, semi- and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Probabilistic Neighbourhood Component Analysis: Sample Efficient Uncertainty Estimation in Deep Learning [25.8227937350516]
We show that the uncertainty estimation capability of state-of-the-art BNNs and Deep Ensemble models degrades significantly when the amount of training data is small.
We propose a probabilistic generalization of the popular sample-efficient non-parametric kNN approach.
Our approach enables deep kNN to accurately quantify underlying uncertainties in its prediction.
arXiv Detail & Related papers (2020-07-18T21:36:31Z)
- Revisiting One-vs-All Classifiers for Predictive Uncertainty and Out-of-Distribution Detection in Neural Networks [22.34227625637843]
We investigate how the parametrization of the probabilities in discriminative classifiers affects the uncertainty estimates.
We show that one-vs-all formulations can improve calibration on image classification tasks.
arXiv Detail & Related papers (2020-07-10T01:55:02Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Frequentist Uncertainty in Recurrent Neural Networks via Blockwise Influence Functions [121.10450359856242]
Recurrent neural networks (RNNs) are instrumental in modelling sequential and time-series data.
Existing approaches for uncertainty quantification in RNNs are based predominantly on Bayesian methods.
We develop a frequentist alternative that: (a) does not interfere with model training or compromise its accuracy, (b) applies to any RNN architecture, and (c) provides theoretical coverage guarantees on the estimated uncertainty intervals.
arXiv Detail & Related papers (2020-06-20T22:45:32Z)
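As referenced in the NUQ entry above, here is a minimal sketch of the underlying idea: estimate the conditional label distribution with a Nadaraya-Watson kernel estimator over feature embeddings, then read an uncertainty score off it. The RBF kernel, fixed bandwidth, entropy-based score and function names are illustrative assumptions, not the paper's exact estimator or its corrections.

```python
# Hedged sketch of a Nadaraya-Watson estimate of p(y = c | z) over embeddings z,
# with entropy as an uncertainty score. Kernel choice, bandwidth and names are
# assumptions for illustration; the NUQ paper refines this basic estimator.
import numpy as np

def nadaraya_watson_probs(z_query, z_train, y_train, n_classes, bandwidth=1.0):
    """p(y=c|z) ~= sum_i K(z, z_i) * 1[y_i = c] / sum_i K(z, z_i)."""
    d2 = ((z_query[:, None, :] - z_train[None, :, :]) ** 2).sum(-1)  # (Q, N)
    k = np.exp(-d2 / (2.0 * bandwidth ** 2))                         # RBF kernel
    onehot = np.eye(n_classes)[y_train]                              # (N, C)
    weighted = k @ onehot                                            # (Q, C)
    return weighted / weighted.sum(axis=1, keepdims=True)

def entropy_uncertainty(probs):
    """Entropy of the estimated conditional label distribution."""
    return -(probs * np.log(probs + 1e-12)).sum(axis=1)
```

For example, applying `nadaraya_watson_probs` to penultimate-layer embeddings of a trained classifier gives smoothed class probabilities whose entropy can flag ambiguous or out-of-distribution inputs.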
This list is automatically generated from the titles and abstracts of the papers in this site.