Neural Predictive Monitoring under Partial Observability
- URL: http://arxiv.org/abs/2108.07134v2
- Date: Tue, 17 Aug 2021 12:14:30 GMT
- Title: Neural Predictive Monitoring under Partial Observability
- Authors: Francesca Cairoli, Luca Bortolussi, Nicola Paoletti
- Abstract summary: We present a learning-based method for predictive monitoring (PM) that produces accurate and reliable reachability predictions despite partial observability (PO).
Our method results in highly accurate reachability predictions and error detection, as well as tight prediction regions with guaranteed coverage.
- Score: 4.1316328854247155
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of predictive monitoring (PM), i.e., predicting at
runtime future violations of a system from the current state. We work under the
most realistic settings where only partial and noisy observations of the state
are available at runtime. Such settings directly affect the accuracy and
reliability of the reachability predictions, jeopardizing the safety of the
system. In this work, we present a learning-based method for PM that produces
accurate and reliable reachability predictions despite partial observability
(PO). We build on Neural Predictive Monitoring (NPM), a PM method that uses
deep neural networks for approximating hybrid systems reachability, and extend
it to the PO case. We propose and compare two solutions, an end-to-end
approach, which directly operates on the raw observations, and a two-step
approach, which introduces an intermediate state estimation step. Both
solutions rely on conformal prediction to provide 1) probabilistic guarantees
in the form of prediction regions and 2) sound estimates of predictive
uncertainty. We use the latter to identify unreliable (and likely erroneous)
predictions and to retrain and improve the monitors on these uncertain inputs
(i.e., active learning). Our method results in highly accurate reachability
predictions and error detection, as well as tight prediction regions with
guaranteed coverage.
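To make the role of conformal prediction concrete, below is a minimal split-conformal sketch in Python for a generic reachability classifier; the score function, classifier outputs, and coverage level are illustrative placeholders rather than the paper's exact construction, and the uncertainty measures used for error detection and active learning are not shown.

```python
import numpy as np

def conformal_prediction_regions(cal_probs, cal_labels, test_probs, epsilon=0.05):
    """Split conformal prediction for a (binary) reachability classifier.

    cal_probs:  (n_cal, n_classes) softmax outputs on a held-out calibration set
    cal_labels: (n_cal,) true class indices on the calibration set
    test_probs: (n_test, n_classes) softmax outputs on new inputs
    epsilon:    target miscoverage; regions contain the true label w.p. >= 1 - epsilon
    """
    n_cal = len(cal_labels)
    # Nonconformity score: one minus the probability assigned to the true class.
    scores = 1.0 - cal_probs[np.arange(n_cal), cal_labels]
    # Conformal quantile with the finite-sample correction.
    q_level = np.ceil((n_cal + 1) * (1 - epsilon)) / n_cal
    q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")
    # A label enters the region whenever its nonconformity score is below the threshold.
    return (1.0 - test_probs) <= q_hat  # boolean mask of shape (n_test, n_classes)

# Toy usage with random "softmax" outputs standing in for a trained monitor.
rng = np.random.default_rng(0)
cal_probs = rng.dirichlet([2, 2], size=200)
cal_labels = rng.integers(0, 2, size=200)
test_probs = rng.dirichlet([2, 2], size=5)
print(conformal_prediction_regions(cal_probs, cal_labels, test_probs))
```

With n_cal calibration points, the returned regions contain the true label with probability at least 1 - epsilon, which is the kind of guaranteed coverage the abstract refers to.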
Related papers
- Learning-Based Approaches to Predictive Monitoring with Conformal Statistical Guarantees [2.1684857243537334]
This tutorial focuses on efficient methods for predictive monitoring (PM).
PM is the problem of detecting future violations of a given requirement from the current state of a system.
We present a general and comprehensive framework summarizing our approach to the predictive monitoring of CPSs.
arXiv Detail & Related papers (2023-12-04T15:16:42Z)
- Ensemble Neural Networks for Remaining Useful Life (RUL) Prediction [0.39287497907611874]
A core part of maintenance planning is a monitoring system that provides a good prognosis on health and degradation.
Here, we propose ensemble neural networks for probabilistic RUL prediction that capture both aleatoric and epistemic uncertainty and decouple the two sources.
This method is tested on NASA's turbofan jet engine CMAPSS dataset.
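As a hedged illustration of such decoupling (a generic deep-ensemble decomposition, not necessarily the construction used in that paper), each ensemble member can predict a Gaussian over the RUL; the average predicted variance is read as aleatoric uncertainty and the spread of the predicted means as epistemic uncertainty:

```python
import numpy as np

def decompose_ensemble_uncertainty(means, variances):
    """Decouple uncertainty from an ensemble of probabilistic regressors.

    means, variances: (n_members, n_samples) arrays, each member predicting a
    Gaussian N(mean, variance) over the target (e.g. remaining useful life).
    """
    aleatoric = variances.mean(axis=0)   # data noise: average predicted variance
    epistemic = means.var(axis=0)        # model disagreement: variance of the means
    prediction = means.mean(axis=0)      # ensemble point prediction
    total = aleatoric + epistemic        # total predictive variance
    return prediction, aleatoric, epistemic, total

# Toy usage: 5 ensemble members, 3 test engines.
rng = np.random.default_rng(1)
means = 100 + rng.normal(0, 5, size=(5, 3))
variances = rng.uniform(1, 4, size=(5, 3))
print(decompose_ensemble_uncertainty(means, variances))
```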
arXiv Detail & Related papers (2023-09-21T19:38:44Z)
- Towards Motion Forecasting with Real-World Perception Inputs: Are End-to-End Approaches Competitive? [93.10694819127608]
We propose a unified evaluation pipeline for forecasting methods with real-world perception inputs.
Our in-depth study uncovers a substantial performance gap when transitioning from curated to perception-based data.
arXiv Detail & Related papers (2023-06-15T17:03:14Z)
- Pedestrian Trajectory Forecasting Using Deep Ensembles Under Sensing Uncertainty [125.41260574344933]
We consider an encoder-decoder based deep ensemble network for capturing both perception and predictive uncertainty simultaneously.
Overall, deep ensembles provided more robust predictions and the consideration of upstream uncertainty further increased the estimation accuracy for the model.
arXiv Detail & Related papers (2023-05-26T04:27:48Z)
- Uncertainty estimation of pedestrian future trajectory using Bayesian approximation [137.00426219455116]
Under dynamic traffic scenarios, planning based on deterministic predictions is not trustworthy.
The authors propose to quantify, via Bayesian approximation, the forecasting uncertainty that deterministic approaches fail to capture.
The effect of dropout weights and long-term prediction on future state uncertainty has been studied.
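A generic Monte Carlo dropout sketch of this kind of Bayesian approximation is shown below; the network, dropout rate, and sample count are placeholders rather than the authors' model. Dropout is kept active at inference time and the spread over repeated stochastic forward passes is read as predictive uncertainty.

```python
import torch
import torch.nn as nn

# Placeholder network: 2D current state in, 2D future position out.
model = nn.Sequential(
    nn.Linear(2, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.2),
    nn.Linear(64, 2),
)

def mc_dropout_predict(model, x, n_samples=50):
    """Monte Carlo dropout: keep dropout stochastic and average repeated forward passes."""
    model.train()  # keeps nn.Dropout layers active at inference time
    with torch.no_grad():
        samples = torch.stack([model(x) for _ in range(n_samples)])
    # Mean as the point prediction, standard deviation as the predictive uncertainty.
    return samples.mean(dim=0), samples.std(dim=0)

mean, std = mc_dropout_predict(model, torch.randn(4, 2))
print(mean.shape, std.shape)  # torch.Size([4, 2]) torch.Size([4, 2])
```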
arXiv Detail & Related papers (2022-05-04T04:23:38Z)
- Evaluation of Machine Learning Techniques for Forecast Uncertainty Quantification [0.13999481573773068]
Ensemble forecasting is, so far, the most successful approach to produce relevant forecasts along with an estimation of their uncertainty.
The main limitations of ensemble forecasting are its high computational cost and the difficulty of capturing and quantifying different sources of uncertainty.
In this work, proof-of-concept model experiments are conducted to examine the performance of ANNs trained to predict a corrected state of the system and the state uncertainty using only a single deterministic forecast as input.
arXiv Detail & Related papers (2021-11-29T16:52:17Z)
- Dense Uncertainty Estimation [62.23555922631451]
In this paper, we investigate neural networks and uncertainty estimation techniques to achieve both accurate deterministic prediction and reliable uncertainty estimation.
We work on two types of uncertainty estimation solutions, namely ensemble-based methods and generative-model-based methods, and explain their pros and cons when used in fully-, semi-, and weakly-supervised frameworks.
arXiv Detail & Related papers (2021-10-13T01:23:48Z)
- Propagating State Uncertainty Through Trajectory Forecasting [34.53847097769489]
Trajectory forecasting is surrounded by uncertainty as its inputs are produced by (noisy) upstream perception.
Most trajectory forecasting methods do not account for upstream uncertainty, instead taking only the most-likely values.
We present a novel method for incorporating perceptual state uncertainty in trajectory forecasting, a key component of which is a new statistical distance-based loss function.
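As a hedged stand-in for such a statistical distance-based loss (not necessarily the distance used in that paper), the closed-form KL divergence between diagonal Gaussians compares a predicted state distribution against a target distribution and is differentiable, so it can be minimized directly during training:

```python
import torch

def diagonal_gaussian_kl(mu_p, var_p, mu_q, var_q):
    """KL( N(mu_p, var_p) || N(mu_q, var_q) ) for diagonal Gaussians, summed over dims.

    A statistical distance between a predicted state distribution and a target
    distribution; usable as a training loss when both are parameterized.
    """
    kl = 0.5 * (torch.log(var_q / var_p) + (var_p + (mu_p - mu_q) ** 2) / var_q - 1.0)
    return kl.sum(dim=-1)

# Toy usage: predicted vs. target 2D position distributions for a batch of 3 agents.
mu_pred, var_pred = torch.zeros(3, 2), torch.ones(3, 2)
mu_tgt, var_tgt = torch.full((3, 2), 0.5), torch.full((3, 2), 2.0)
print(diagonal_gaussian_kl(mu_pred, var_pred, mu_tgt, var_tgt))
```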
arXiv Detail & Related papers (2021-10-07T08:51:16Z)
- Improving Uncertainty Calibration via Prior Augmented Data [56.88185136509654]
Neural networks have proven successful at learning from complex data distributions by acting as universal function approximators.
They are often overconfident in their predictions, which leads to inaccurate and miscalibrated probabilistic predictions.
We propose a solution by seeking out regions of feature space where the model is unjustifiably overconfident, and conditionally raising the entropy of those predictions towards that of the prior distribution of the labels.
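A toy sketch of that idea follows; it is a simplification in which overconfidence is flagged by a probability threshold rather than by searching feature space, and the threshold and mixing weight are hypothetical parameters. Suspiciously confident predictions are interpolated toward the label prior, which raises their entropy.

```python
import numpy as np

def soften_overconfident(probs, prior, confidence_threshold=0.95, alpha=0.5):
    """Raise the entropy of suspiciously confident predictions toward the label prior.

    probs: (n, n_classes) predicted class probabilities
    prior: (n_classes,) prior label distribution
    Predictions whose max probability exceeds the threshold are mixed with the prior.
    """
    overconfident = probs.max(axis=1) > confidence_threshold
    softened = probs.copy()
    softened[overconfident] = (1 - alpha) * probs[overconfident] + alpha * prior
    return softened

# Toy usage: the second prediction is very confident and gets pulled toward the prior.
probs = np.array([[0.6, 0.4], [0.99, 0.01]])
prior = np.array([0.5, 0.5])
print(soften_overconfident(probs, prior))
```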
arXiv Detail & Related papers (2021-02-22T07:02:37Z)
- Private Prediction Sets [72.75711776601973]
Machine learning systems need reliable uncertainty quantification and protection of individuals' privacy.
We present a framework that treats these two desiderata jointly.
We evaluate the method on large-scale computer vision datasets.
arXiv Detail & Related papers (2021-02-11T18:59:11Z)
- Predictive Business Process Monitoring via Generative Adversarial Nets: The Case of Next Event Prediction [0.026249027950824504]
This paper proposes a novel adversarial training framework to address the problem of next event prediction.
It works by putting one neural network against the other in a two-player game which leads to predictions that are indistinguishable from the ground truth.
It systematically outperforms all baselines both in terms of accuracy and earliness of the prediction, despite using a simple network architecture and a naive feature encoding.
arXiv Detail & Related papers (2020-03-25T08:31:28Z)