Forking Uncertainties: Reliable Prediction and Model Predictive Control
with Sequence Models via Conformal Risk Control
- URL: http://arxiv.org/abs/2310.10299v1
- Date: Mon, 16 Oct 2023 11:35:41 GMT
- Title: Forking Uncertainties: Reliable Prediction and Model Predictive Control
with Sequence Models via Conformal Risk Control
- Authors: Matteo Zecchin, Sangwoo Park, Osvaldo Simeone
- Abstract summary: We introduce probabilistic time series-conformal risk prediction (PTS-CRC), a novel post-hoc calibration procedure that operates on the predictions produced by any pre-designed probabilistic forecaster to yield reliable error bars.
Unlike the state of the art, PTS-CRC can satisfy reliability definitions beyond coverage.
We experimentally validate the performance of PTS-CRC prediction and control on a number of use cases in the context of wireless networking.
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In many real-world problems, predictions are leveraged to monitor and control
cyber-physical systems, demanding guarantees on the satisfaction of reliability
and safety requirements. However, predictions are inherently uncertain, and
managing prediction uncertainty presents significant challenges in environments
characterized by complex dynamics and forking trajectories. In this work, we
assume access to a pre-designed probabilistic implicit or explicit sequence
model, which may have been obtained using model-based or model-free methods. We
introduce probabilistic time series-conformal risk prediction (PTS-CRC), a
novel post-hoc calibration procedure that operates on the predictions produced
by any pre-designed probabilistic forecaster to yield reliable error bars. In
contrast to existing art, PTS-CRC produces predictive sets based on an ensemble
of multiple prototype trajectories sampled from the sequence model, supporting
the efficient representation of forking uncertainties. Furthermore, unlike the
state of the art, PTS-CRC can satisfy reliability definitions beyond coverage.
This property is leveraged to devise a novel model predictive control (MPC)
framework that addresses open-loop and closed-loop control problems under
general average constraints on the quality or safety of the control policy. We
experimentally validate the performance of PTS-CRC prediction and control by
studying a number of use cases in the context of wireless networking. Across
all the considered tasks, PTS-CRC predictors are shown to provide more
informative predictive sets, as well as safe control policies with larger
returns.
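To make the abstract's calibration step concrete, here is a minimal sketch of the general pattern behind PTS-CRC as described above: sample several prototype trajectories from the sequence model, define the predictive set as a union of tubes around those prototypes, and choose the tube radius on held-out calibration data so that an average risk (here, per-step miscoverage) stays below a target level. The `sample_fn` interface, the union-of-balls set shape, and the bisection search are illustrative assumptions, not the paper's exact construction.

```python
import numpy as np

def pts_crc_radius(sample_fn, cal_series, horizon,
                   n_prototypes=8, alpha=0.1, tol=1e-3):
    """Calibrate the radius of union-of-balls predictive sets so that the
    empirical risk (here: per-step miscoverage) is at most alpha.

    sample_fn(context, horizon, k) -> (k, horizon) array is an assumed
    interface to any probabilistic sequence model; cal_series is a list
    of (context, future) pairs held out for calibration."""
    protos = [sample_fn(ctx, horizon, n_prototypes) for ctx, _ in cal_series]

    def risk(radius):
        losses = []
        for (_, future), p in zip(cal_series, protos):
            # A step is covered if it lies within `radius` of any prototype.
            dist = np.min(np.abs(p - future[None, :]), axis=0)
            losses.append(np.mean(dist > radius))  # fraction of missed steps
        return float(np.mean(losses))

    lo, hi = 0.0, 1.0
    while risk(hi) > alpha:          # grow the bracket until the target is met
        hi *= 2.0
    while hi - lo > tol:             # bisect for the smallest adequate radius
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if risk(mid) > alpha else (lo, mid)
    return hi
```

At test time, the predictive set would be the union of calibrated-radius tubes around freshly sampled prototypes, so forking futures are covered by separate narrow tubes rather than a single wide interval; the same calibrated threshold can then back risk-aware constraints inside an MPC loop.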
Related papers
- Conformal Generative Modeling with Improved Sample Efficiency through Sequential Greedy Filtering (2024-10-02)
Generative models lack rigorous statistical guarantees for their outputs.
We propose a sequential conformal prediction method producing prediction sets that satisfy a rigorous statistical guarantee.
This guarantee states that with high probability, the prediction sets contain at least one admissible (or valid) example.
- From Conformal Predictions to Confidence Regions (2024-05-28)
We introduce CCR, which employs a combination of conformal prediction intervals for the model outputs to establish confidence regions for model parameters.
We present coverage guarantees that hold under minimal assumptions on the noise and remain valid in the finite-sample regime.
Our approach is applicable both to split conformal prediction and to black-box methodologies, including full and cross-conformal approaches.
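As a loose illustration of the idea in this entry (the actual CCR construction and its guarantee are more involved), a parameter value can be accepted into the confidence region when the model's predictions fall inside the conformal output intervals at the evaluation points; `model_fn`, `min_hits`, and the acceptance rule below are hypothetical.

```python
import numpy as np

def in_confidence_region(theta, model_fn, xs, intervals, min_hits=None):
    """Illustrative membership test: accept `theta` if model predictions
    land inside the conformal output intervals at (all or most) points.
    `intervals` has shape (n, 2) with per-point lower/upper bounds."""
    preds = np.array([model_fn(x, theta) for x in xs])
    hits = np.sum((preds >= intervals[:, 0]) & (preds <= intervals[:, 1]))
    return hits >= (len(xs) if min_hits is None else min_hits)
```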
- Practical Probabilistic Model-based Deep Reinforcement Learning by Integrating Dropout Uncertainty and Trajectory Sampling (2023-09-20)
This paper addresses the prediction stability, prediction accuracy, and control capability of current probabilistic model-based reinforcement learning (MBRL) methods built on neural networks.
A novel approach, dropout-based probabilistic ensembles with trajectory sampling (DPETS), is proposed.
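As a rough sketch of the MC-dropout trajectory-sampling idea behind this entry (not the authors' implementation), dropout can be kept active at prediction time so that repeated rollouts through the learned dynamics yield an ensemble of plausible trajectories; the toy dynamics model below is an assumption for illustration.

```python
import torch
import torch.nn as nn

class DropoutDynamics(nn.Module):
    """Toy one-step dynamics model; keeping dropout active at prediction
    time turns repeated forward passes into an implicit model ensemble."""
    def __init__(self, state_dim, action_dim, hidden=64, p=0.1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim + action_dim, hidden), nn.ReLU(),
            nn.Dropout(p),  # left on via model.train() during rollouts
            nn.Linear(hidden, state_dim),
        )

    def forward(self, state, action):
        return state + self.net(torch.cat([state, action], dim=-1))

def sample_trajectories(model, state, actions, n_samples=16):
    """Roll out `actions` of shape (horizon, action_dim) from `state`
    under n_samples dropout realizations -> (n_samples, horizon, state_dim).
    For brevity a fresh mask is drawn at every step, a simplification of
    the mask handling used in trajectory-sampling schemes."""
    model.train()  # keep dropout stochastic (MC dropout)
    trajs = []
    with torch.no_grad():
        for _ in range(n_samples):
            s, path = state.clone(), []
            for a in actions:
                s = model(s, a)
                path.append(s)
            trajs.append(torch.stack(path))
    return torch.stack(trajs)
```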
- Model Predictive Control with Gaussian-Process-Supported Dynamical Constraints for Autonomous Vehicles (2023-03-08)
We propose a model predictive control approach for autonomous vehicles that exploits learned Gaussian processes for predicting human driving behavior.
A multi-mode predictive control approach considers the possible intentions of the human drivers.
- Probabilities Are Not Enough: Formal Controller Synthesis for Stochastic Dynamical Models with Epistemic Uncertainty (2022-10-12)
Capturing uncertainty in models of complex dynamical systems is crucial to designing safe controllers.
Several approaches use formal abstractions to synthesize policies that satisfy temporal specifications related to safety and reachability.
Our contribution is a novel abstraction-based controller method for continuous-state models with noise, uncertain parameters, and external disturbances.
- Pointwise Feasibility of Gaussian Process-based Safety-Critical Control under Model Uncertainty (2021-06-13)
Control Barrier Functions (CBFs) and Control Lyapunov Functions (CLFs) are popular tools for enforcing safety and stability of a controlled system, respectively.
We present a Gaussian Process (GP)-based approach to tackle the problem of model uncertainty in safety-critical controllers that use CBFs and CLFs.
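For readers unfamiliar with the terminology in this entry, a minimal control barrier function (CBF) safety filter is sketched below; the Gaussian-process machinery of the cited paper is abstracted into a single robustness margin `sigma`, which is an assumption for illustration.

```python
import numpy as np

def cbf_safety_filter(u_des, Lf_h, Lg_h, h, alpha=1.0, sigma=0.0):
    """Minimally modify a desired control so the CBF condition
        Lf_h + Lg_h @ u >= -alpha * h + sigma
    holds, where sigma is a robustness margin (e.g. a GP uncertainty
    bound). With a single affine constraint the safety-filter QP has a
    closed-form solution: project u_des onto the constraint half-space.
    Assumes Lg_h is a nonzero 1-D array (control-affine dynamics)."""
    slack = Lf_h + Lg_h @ u_des + alpha * h - sigma
    if slack >= 0:          # desired input already satisfies the barrier
        return u_des
    return u_des - slack * Lg_h / (Lg_h @ Lg_h)  # minimal correction
```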
- Safe Chance Constrained Reinforcement Learning for Batch Process Control (2021-04-23)
Reinforcement Learning (RL) controllers have generated excitement within the control community.
Recent work on engineering applications has focused on the development of safe RL controllers.
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning (2020-11-24)
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses credibility as a risk-fit trade-off.
- Adversarial Attacks on Probabilistic Autoregressive Forecasting Models (2020-03-08)
We develop an effective method for generating adversarial attacks on neural models that output a sequence of probability distributions rather than a sequence of single values.
We demonstrate that our approach can successfully generate attacks with small input perturbations in two challenging tasks.
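For intuition only, a generic projected-gradient attack on a probabilistic forecaster might look as follows; the objective (pushing the predicted mean away from the clean forecast) and the epsilon-ball projection are standard PGD ingredients assumed here, not necessarily the paper's exact attack.

```python
import torch

def pgd_attack_forecaster(forecaster, series, eps=0.05, steps=20, lr=0.01):
    """Perturb `series` within an L-inf ball of radius eps so that the
    forecaster's predicted mean moves as far as possible from the clean
    prediction. `forecaster(x)` is assumed to return a differentiable
    torch.distributions.Distribution over future values."""
    clean_mean = forecaster(series).mean.detach()
    delta = torch.zeros_like(series, requires_grad=True)
    for _ in range(steps):
        adv_mean = forecaster(series + delta).mean
        loss = -torch.mean((adv_mean - clean_mean) ** 2)  # maximize the shift
        loss.backward()
        with torch.no_grad():
            delta -= lr * delta.grad.sign()  # signed gradient step
            delta.clamp_(-eps, eps)          # project onto the eps-ball
            delta.grad.zero_()
    return (series + delta).detach()
```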
This list is automatically generated from the titles and abstracts of the papers on this site.