Learning Uncertainty with Artificial Neural Networks for Improved Predictive Process Monitoring
- URL: http://arxiv.org/abs/2206.06317v1
- Date: Mon, 13 Jun 2022 17:05:27 GMT
- Title: Learning Uncertainty with Artificial Neural Networks for Improved Predictive Process Monitoring
- Authors: Hans Weytjens and Jochen De Weerdt
- Abstract summary: We distinguish two types of learnable uncertainty: model uncertainty due to a lack of training data and noise-induced observational uncertainty.
Our contribution is to apply these uncertainty concepts to predictive process monitoring, training uncertainty-based models to predict remaining time and outcomes.
- Score: 0.114219428942199
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The inability of artificial neural networks to assess the uncertainty of
their predictions is an impediment to their widespread use. We distinguish two
types of learnable uncertainty: model uncertainty due to a lack of training
data and noise-induced observational uncertainty. Bayesian neural networks use
solid mathematical foundations to learn the model uncertainties of their
predictions. The observational uncertainty can be calculated by adding one
layer to these networks and augmenting their loss functions. Our contribution
is to apply these uncertainty concepts to predictive process monitoring,
training uncertainty-based models to predict remaining time and outcomes. Our
experiments show that uncertainty estimates make it possible to distinguish
more accurate from less accurate predictions and to construct confidence
intervals in both regression and classification tasks. These conclusions hold
even in the early stages of running processes. Moreover, the deployed
techniques are fast
and produce more accurate predictions. The learned uncertainty could increase
users' confidence in their process prediction systems, promote better
cooperation between humans and these systems, and enable earlier
implementations with smaller datasets.
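The abstract's recipe for observational uncertainty (one extra output layer plus an augmented loss function) is commonly realized as a heteroscedastic Gaussian negative log-likelihood. The following minimal PyTorch sketch illustrates that idea; the layer sizes and names are assumptions, not the authors' architecture. Model uncertainty would additionally require Bayesian weights, e.g. via variational layers or Monte Carlo dropout over the dropout layer included below.

```python
import torch
import torch.nn as nn

class HeteroscedasticRegressor(nn.Module):
    """Regression net that outputs a mean and a log-variance per input.
    The variance head is the 'extra layer' capturing observational noise."""
    def __init__(self, n_features: int, hidden: int = 64):
        super().__init__()
        self.body = nn.Sequential(
            nn.Linear(n_features, hidden), nn.ReLU(),
            nn.Dropout(p=0.1),  # enables MC dropout for model uncertainty
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.mean_head = nn.Linear(hidden, 1)
        self.logvar_head = nn.Linear(hidden, 1)  # added layer for noise

    def forward(self, x):
        h = self.body(x)
        return self.mean_head(h), self.logvar_head(h)

def gaussian_nll(mean, logvar, target):
    """Augmented loss: negative log-likelihood of a Gaussian whose
    variance is predicted alongside the mean."""
    return (0.5 * (logvar + (target - mean) ** 2 / logvar.exp())).mean()

# toy usage on random data (stand-ins for process features / remaining time)
x = torch.randn(128, 10)
y = torch.randn(128, 1)
model = HeteroscedasticRegressor(n_features=10)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
opt.zero_grad()
mean, logvar = model(x)
loss = gaussian_nll(mean, logvar, y)
loss.backward()
opt.step()
```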
Related papers
- Tractable Function-Space Variational Inference in Bayesian Neural Networks [72.97620734290139]
A popular approach for estimating the predictive uncertainty of neural networks is to define a prior distribution over the network parameters.
We propose a scalable function-space variational inference method that allows incorporating prior information.
We show that the proposed method leads to state-of-the-art uncertainty estimation and predictive performance on a range of prediction tasks.
arXiv Detail & Related papers (2023-12-28T18:33:26Z)
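The approach this entry starts from, a prior distribution over the network parameters, can be made concrete by sampling weights from that prior and inspecting the induced distribution over functions; the paper's function-space method places the prior on these outputs directly instead. A purely illustrative sketch (all names and sizes are assumptions):

```python
import torch
import torch.nn as nn

def sample_prior_predictive(x, n_samples=100, hidden=32, prior_std=1.0):
    """Draw networks with weights ~ N(0, prior_std^2) and evaluate them,
    giving an empirical prior over functions at the inputs x."""
    outs = []
    with torch.no_grad():
        for _ in range(n_samples):
            net = nn.Sequential(nn.Linear(1, hidden), nn.Tanh(),
                                nn.Linear(hidden, 1))
            for p in net.parameters():
                p.normal_(0.0, prior_std)  # weight-space prior
            outs.append(net(x))
    outs = torch.stack(outs)          # (n_samples, n_points, 1)
    return outs.mean(0), outs.std(0)  # prior predictive mean / spread

x = torch.linspace(-3, 3, 50).unsqueeze(1)
mean, std = sample_prior_predictive(x)
```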
- Evidential Deep Learning: Enhancing Predictive Uncertainty Estimation for Earth System Science Applications [0.32302664881848275]
Evidential deep learning is a technique that extends parametric deep learning to higher-order distributions.
This study compares the uncertainty estimates derived from evidential neural networks to those obtained from ensembles.
We show that evidential deep learning models attain predictive accuracy rivaling standard methods while robustly quantifying both sources of uncertainty.
arXiv Detail & Related papers (2023-09-22T23:04:51Z)
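Evidential regression in the style this entry builds on typically has the network emit the four parameters of a Normal-Inverse-Gamma distribution, from which both uncertainty types follow in closed form. A hedged sketch, not the study's exact model:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class EvidentialHead(nn.Module):
    """Outputs Normal-Inverse-Gamma parameters (gamma, nu, alpha, beta);
    a single pass yields both kinds of uncertainty, no ensemble needed."""
    def __init__(self, hidden: int):
        super().__init__()
        self.out = nn.Linear(hidden, 4)

    def forward(self, h):
        gamma, nu, alpha, beta = self.out(h).chunk(4, dim=-1)
        nu = F.softplus(nu)              # > 0
        alpha = F.softplus(alpha) + 1.0  # > 1 so variances are finite
        beta = F.softplus(beta)          # > 0
        return gamma, nu, alpha, beta

def uncertainties(gamma, nu, alpha, beta):
    aleatoric = beta / (alpha - 1.0)         # E[sigma^2], data noise
    epistemic = beta / (nu * (alpha - 1.0))  # Var[mu], model uncertainty
    return aleatoric, epistemic
```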
- Quantification of Predictive Uncertainty via Inference-Time Sampling [57.749601811982096]
We propose a post-hoc sampling strategy for estimating predictive uncertainty that accounts for data ambiguity.
The method can generate different plausible outputs for a given input and does not assume parametric forms of predictive distributions.
arXiv Detail & Related papers (2023-08-03T12:43:21Z)
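The summary does not specify the sampling mechanism, so the sketch below uses Monte Carlo dropout purely as a stand-in example of inference-time sampling: repeated stochastic forward passes yield multiple plausible outputs without assuming a parametric predictive distribution.

```python
import torch

def mc_samples(model, x, n_samples: int = 50):
    """Keep dropout active at inference and repeat forward passes;
    the empirical spread serves as a non-parametric uncertainty proxy."""
    model.train()  # train mode keeps dropout layers stochastic
    with torch.no_grad():
        preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds  # (n_samples, batch, ...); e.g. preds.std(0) as a score
```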
- Fast Uncertainty Estimates in Deep Learning Interatomic Potentials [0.0]
We propose a method to estimate the predictive uncertainty based on a single neural network without the need for an ensemble.
We demonstrate that the quality of the uncertainty estimates matches that of estimates obtained from deep ensembles.
arXiv Detail & Related papers (2022-11-17T20:13:39Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Training Uncertainty-Aware Classifiers with Conformalized Deep Learning [7.837881800517111]
Deep neural networks are powerful tools for detecting hidden patterns in data and leveraging them to make predictions, but they are not designed to quantify the uncertainty of those predictions.
We develop a novel training algorithm that can lead to more dependable uncertainty estimates, without sacrificing predictive power.
arXiv Detail & Related papers (2022-05-12T05:08:10Z)
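The conformal machinery behind entries like this one can be illustrated with the basic split-conformal recipe (generic, not the paper's specific training algorithm): hold out a calibration set, compute residual scores, and use their quantile as a finite-sample-valid interval half-width.

```python
import numpy as np

def split_conformal_interval(residuals_cal, alpha: float = 0.1):
    """Given |y - y_hat| on a held-out calibration set, return the
    half-width q such that [y_hat - q, y_hat + q] covers new targets
    with probability >= 1 - alpha (under exchangeability)."""
    n = len(residuals_cal)
    rank = int(np.ceil((n + 1) * (1 - alpha)))  # conformal quantile index
    return np.sort(residuals_cal)[min(rank, n) - 1]

# usage with any point predictor's calibration residuals
residuals = np.abs(np.random.randn(500))  # stand-in for |y - y_hat|
q = split_conformal_interval(residuals, alpha=0.1)
# prediction interval for a new x: [model(x) - q, model(x) + q]
```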
- Robust uncertainty estimates with out-of-distribution pseudo-inputs training [0.0]
We propose to explicitly train the uncertainty predictor in regions where no data are available, in order to make it reliable.
As one cannot train without data, we provide mechanisms for generating pseudo-inputs in informative low-density regions of the input space.
With a holistic evaluation, we demonstrate that this yields robust and interpretable predictions of uncertainty while retaining state-of-the-art performance on diverse tasks.
arXiv Detail & Related papers (2022-01-15T17:15:07Z)
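How the pseudo-inputs are generated is exactly this paper's contribution and is not described in the summary; the sketch below only conveys the general idea with a deliberately crude generator (uniform samples in an inflated box around the training data) and a penalty that raises predicted uncertainty there. Every choice here is an assumption.

```python
import torch

def pseudo_inputs_beyond_range(x_train, n: int, margin: float = 2.0):
    """Crude stand-in generator: uniform samples in an inflated box
    around the training data, i.e. mostly low-density regions."""
    lo, hi = x_train.min(0).values, x_train.max(0).values
    span = hi - lo
    return (lo - margin * span) + torch.rand(n, x_train.shape[1]) * (
        (1 + 2 * margin) * span)

def ood_uncertainty_penalty(logvar_on_pseudo, target_logvar: float = 2.0):
    """Encourage large predicted variance on pseudo-inputs so the
    uncertainty head is explicitly trained where no data exist."""
    return (target_logvar - logvar_on_pseudo).clamp(min=0).mean()
```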
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic predictions and reliable uncertainty estimates.
We examine two types of uncertainty estimation solutions, namely ensemble-based and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
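The ensemble-based branch mentioned above is most often a deep ensemble: several identically structured networks trained from different random initializations, with uncertainty read off their disagreement. A generic sketch (the architecture and sizes are placeholders):

```python
import torch
import torch.nn as nn

def make_model():  # placeholder architecture
    return nn.Sequential(nn.Linear(10, 64), nn.ReLU(), nn.Linear(64, 1))

def train_ensemble(x, y, n_members: int = 5, epochs: int = 100):
    """Each member starts from a different random init, so trained
    members disagree most where the data constrain them least."""
    members = []
    for _ in range(n_members):
        model = make_model()
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        for _ in range(epochs):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(x), y)
            loss.backward()
            opt.step()
        members.append(model)
    return members

def ensemble_predict(members, x):
    with torch.no_grad():
        preds = torch.stack([m(x) for m in members])
    return preds.mean(0), preds.std(0)  # prediction and disagreement
```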
- Learning Uncertainty with Artificial Neural Networks for Improved Remaining Time Prediction of Business Processes [0.15229257192293202]
This paper is the first to apply these techniques to predictive process monitoring.
We found that they contribute to more accurate predictions and are fast to compute.
This enables many interesting applications, allows earlier adoption of prediction systems with smaller datasets, and fosters better cooperation between these systems and humans.
arXiv Detail & Related papers (2021-05-12T10:18:57Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
However, they are often overconfident, which results in inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
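The described fix, raising the entropy of overconfident predictions toward the label prior, can be sketched as a KL penalty between the prior class distribution and the model's output on suspect inputs; how those regions of feature space are found is the paper's contribution and is abstracted away here.

```python
import torch
import torch.nn.functional as F

def entropy_raising_penalty(logits_off_manifold, label_prior):
    """KL(prior || p_model) on suspect inputs: minimizing it pulls the
    predicted distribution toward the label prior, raising its entropy."""
    log_probs = F.log_softmax(logits_off_manifold, dim=-1)
    return F.kl_div(log_probs, label_prior.expand_as(log_probs),
                    reduction="batchmean")

# e.g. a uniform prior: label_prior = torch.full((n_classes,), 1.0 / n_classes)
```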
- Learning to Predict Error for MRI Reconstruction [67.76632988696943]
We demonstrate that the predictive uncertainty estimated by current methods does not correlate strongly with the actual prediction error.
We propose a novel method that estimates the target labels and magnitude of the prediction error in two steps.
arXiv Detail & Related papers (2020-02-13T15:55:32Z)
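The two-step scheme this entry describes, first predicting the target and then predicting the magnitude of the first model's error, can be sketched generically; both networks and the training loop below are placeholders, not the paper's MRI-specific models.

```python
import torch
import torch.nn as nn

def fit_error_predictor(predictor, x, y, make_error_net):
    """Step 2 of the scheme: regress |y - predictor(x)| with a second
    network, yielding a per-input estimate of the prediction error."""
    with torch.no_grad():
        err = (y - predictor(x)).abs()  # step-1 model's actual errors
    error_net = make_error_net()
    opt = torch.optim.Adam(error_net.parameters(), lr=1e-3)
    for _ in range(200):
        opt.zero_grad()
        loss = nn.functional.mse_loss(error_net(x), err)
        loss.backward()
        opt.step()
    return error_net  # error_net(x_new) approximates the expected |error|
```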