Bayesian Deep Learning for Remaining Useful Life Estimation via Stein
Variational Gradient Descent
- URL: http://arxiv.org/abs/2402.01098v1
- Date: Fri, 2 Feb 2024 02:21:06 GMT
- Title: Bayesian Deep Learning for Remaining Useful Life Estimation via Stein
Variational Gradient Descent
- Authors: Luca Della Libera, Jacopo Andreoli, Davide Dalle Pezze, Mirco
Ravanelli, Gian Antonio Susto
- Abstract summary: We show that Bayesian deep learning models trained via Stein variational gradient descent consistently outperform, in both convergence speed and predictive performance, the same models trained via parametric variational inference and their frequentist counterparts.
We propose a method to enhance performance based on the uncertainty information provided by the Bayesian models.
- Score: 14.784809634505903
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A crucial task in predictive maintenance is estimating the remaining useful
life of physical systems. In the last decade, deep learning has improved
considerably upon traditional model-based and statistical approaches in terms
of predictive performance. However, in order to optimally plan maintenance
operations, it is also important to quantify the uncertainty inherent to the
predictions. This issue can be addressed by turning standard frequentist neural
networks into Bayesian neural networks, which are naturally capable of
providing confidence intervals around the estimates. Several methods exist for
training those models. Researchers have focused mostly on parametric
variational inference and sampling-based techniques, which notoriously suffer
from limited approximation power and large computational burden, respectively.
In this work, we use Stein variational gradient descent, a recently proposed
algorithm for approximating intractable distributions that overcomes the
drawbacks of the aforementioned techniques. In particular, we show through
experimental studies on simulated run-to-failure turbofan engine degradation
data that Bayesian deep learning models trained via Stein variational gradient
descent consistently outperform, in both convergence speed and predictive
performance, the same models trained via parametric variational inference as
well as their frequentist counterparts trained via backpropagation.
Furthermore, we propose a method to enhance performance based on the
uncertainty information provided by the Bayesian models. We release the source
code at https://github.com/lucadellalib/bdl-rul-svgd.
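The repository linked above contains the authors' implementation. As background only, the generic SVGD particle update the abstract refers to can be sketched in NumPy; the toy standard-normal target, particle count, and step size below are illustrative assumptions, not details from the paper:

```python
import numpy as np

def svgd_step(particles, grad_log_p, stepsize):
    """One SVGD update: move particles along the kernelized Stein direction."""
    n, d = particles.shape
    diffs = particles[:, None, :] - particles[None, :, :]  # (n, n, d), x_j - x_i at [j, i]
    sq_dists = np.sum(diffs ** 2, axis=-1)                 # (n, n)
    # RBF kernel with the median heuristic bandwidth
    h = np.median(sq_dists) / np.log(n + 1) + 1e-8
    K = np.exp(-sq_dists / h)                              # k(x_j, x_i)
    grad_K = (-2.0 / h) * diffs * K[:, :, None]            # grad of k(x_j, x_i) w.r.t. x_j
    scores = grad_log_p(particles)                         # (n, d)
    # phi(x_i) = (1/n) sum_j [ k(x_j, x_i) * grad log p(x_j) + grad_{x_j} k(x_j, x_i) ]
    # first term pulls particles toward high density, second term repels them apart
    phi = (K.T @ scores + grad_K.sum(axis=0)) / n
    return particles + stepsize * phi

# Toy target: standard normal posterior, so the score is grad log p(x) = -x
rng = np.random.default_rng(0)
particles = rng.normal(loc=-5.0, scale=0.5, size=(50, 1))  # start far from the mode
for _ in range(1000):
    particles = svgd_step(particles, lambda x: -x, stepsize=0.2)

# The converged particle cloud approximates the posterior; its spread is the
# uncertainty a Bayesian model would use for intervals around its estimates.
mean, std = particles.mean(), particles.std()
```

In the paper's setting the particles are copies of the network's weights and the score is the gradient of the log posterior, but the update rule has the same shape: an attractive kernel-weighted score term plus a repulsive kernel-gradient term that keeps the particles from collapsing to a single mode.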
Related papers
- Learning Sample Difficulty from Pre-trained Models for Reliable Prediction [55.77136037458667]
We propose to utilize large-scale pre-trained models to guide downstream model training with sample difficulty-aware entropy regularization.
We simultaneously improve accuracy and uncertainty calibration across challenging benchmarks.
arXiv Detail & Related papers (2023-04-20T07:29:23Z)
- Surrogate uncertainty estimation for your time series forecasting black-box: learn when to trust [2.0393477576774752]
Our research introduces a surrogate-based method that equips any base regression model with reasonable uncertainty estimates.
Using various time-series forecasting data, we found that our surrogate model-based technique delivers significantly more accurate confidence intervals.
arXiv Detail & Related papers (2023-02-06T14:52:56Z)
- Training Discrete Deep Generative Models via Gapped Straight-Through Estimator [72.71398034617607]
We propose a Gapped Straight-Through (GST) estimator to reduce the variance without incurring resampling overhead.
This estimator is inspired by the essential properties of Straight-Through Gumbel-Softmax.
Experiments demonstrate that the proposed GST estimator enjoys better performance compared to strong baselines on two discrete deep generative modeling tasks.
arXiv Detail & Related papers (2022-06-15T01:46:05Z)
- Rethinking Bayesian Learning for Data Analysis: The Art of Prior and Inference in Sparsity-Aware Modeling [20.296566563098057]
Sparse modeling for signal processing and machine learning has been the focus of scientific research for over two decades.
This article reviews some recent advances in incorporating sparsity-promoting priors into three popular data modeling tools.
arXiv Detail & Related papers (2022-05-28T00:43:52Z)
- Deep Active Learning with Noise Stability [24.54974925491753]
Uncertainty estimation for unlabeled data is crucial to active learning.
We propose a novel algorithm that leverages noise stability to estimate data uncertainty.
Our method is generally applicable in various tasks, including computer vision, natural language processing, and structural data analysis.
arXiv Detail & Related papers (2022-05-26T13:21:01Z)
- DeepBayes -- an estimator for parameter estimation in stochastic nonlinear dynamical models [11.917949887615567]
We propose DeepBayes estimators that leverage the power of deep recurrent neural networks in learning an estimator.
The deep recurrent neural network architectures can be trained offline and ensure significant time savings during inference.
We demonstrate the applicability of our proposed method on different example models and perform detailed comparisons with state-of-the-art approaches.
arXiv Detail & Related papers (2022-05-04T18:12:17Z)
- Uncertainty-Aware Time-to-Event Prediction using Deep Kernel Accelerated Failure Time Models [11.171712535005357]
We propose Deep Kernel Accelerated Failure Time models for the time-to-event prediction task.
Our model shows better point estimate performance than recurrent neural network based baselines in experiments on two real-world datasets.
arXiv Detail & Related papers (2021-07-26T14:55:02Z)
- Quantifying Uncertainty in Deep Spatiotemporal Forecasting [67.77102283276409]
We describe two types of forecasting problems: regular grid-based and graph-based.
We analyze UQ methods from both the Bayesian and the frequentist points of view, casting them into a unified framework via statistical decision theory.
Through extensive experiments on real-world road network traffic, epidemics, and air quality forecasting tasks, we reveal the statistical-computational trade-offs of different UQ methods.
arXiv Detail & Related papers (2021-05-25T14:35:46Z)
- A Bayesian Perspective on Training Speed and Model Selection [51.15664724311443]
We show that a measure of a model's training speed can be used to estimate its marginal likelihood.
We verify our results in model selection tasks for linear models and for the infinite-width limit of deep neural networks.
Our results suggest a promising new direction towards explaining why neural networks trained with gradient descent are biased towards functions that generalize well.
arXiv Detail & Related papers (2020-10-27T17:56:14Z)
- Unlabelled Data Improves Bayesian Uncertainty Calibration under Covariate Shift [100.52588638477862]
We develop an approximate Bayesian inference scheme based on posterior regularisation.
We demonstrate the utility of our method in the context of transferring prognostic models of prostate cancer across globally diverse populations.
arXiv Detail & Related papers (2020-06-26T13:50:19Z)
- Extrapolation for Large-batch Training in Deep Learning [72.61259487233214]
We show that a host of variations can be covered in a unified framework that we propose.
We prove the convergence of this novel scheme and rigorously evaluate its empirical performance on ResNet, LSTM, and Transformer.
arXiv Detail & Related papers (2020-06-10T08:22:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.