Confidence Composition for Monitors of Verification Assumptions
- URL: http://arxiv.org/abs/2111.03782v1
- Date: Wed, 3 Nov 2021 18:14:35 GMT
- Title: Confidence Composition for Monitors of Verification Assumptions
- Authors: Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor Carpenter, Oleg Sokolsky, Insup Lee
- Abstract summary: We propose a three-step framework for monitoring the confidence in verification assumptions.
In two case studies, we demonstrate that the composed monitors improve over their constituents and successfully predict safety violations.
- Score: 3.500426151907193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Closed-loop verification of cyber-physical systems with neural network
controllers offers strong safety guarantees under certain assumptions. It is,
however, difficult to determine whether these guarantees apply at run time
because verification assumptions may be violated. To predict safety violations
in a verified system, we propose a three-step framework for monitoring the
confidence in verification assumptions. First, we represent the sufficient
condition for verified safety with a propositional logical formula over
assumptions. Second, we build calibrated confidence monitors that evaluate the
probability that each assumption holds. Third, we obtain the confidence in the
verification guarantees by composing the assumption monitors using a
composition function suitable for the logical formula. Our framework provides
theoretical bounds on the calibration and conservatism of compositional
monitors. In two case studies, we demonstrate that the composed monitors
improve over their constituents and successfully predict safety violations.
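To make the three-step framework concrete, here is a minimal, hypothetical sketch (not taken from the paper): two calibrated assumption monitors each return the probability that their assumption holds, and their outputs are combined with a composition function chosen for a conjunctive assumption formula. The monitor values, function names, and the particular composition functions (product under an assumed independence, and the Fréchet lower bound as a conservative alternative) are illustrative assumptions, not the paper's actual constructions or bounds.

```python
# Illustrative sketch only: composing per-assumption confidences for a
# conjunctive assumption formula A1 AND A2. The composition functions here
# (product under an independence assumption; Frechet lower bound as a
# conservative choice) are standard probability rules used as stand-ins
# for the paper's composition functions.
from typing import Callable, Sequence

Monitor = Callable[[dict], float]  # observation -> P(assumption holds)

def product_composition(confidences: Sequence[float]) -> float:
    """Confidence in the conjunction if the assumptions were independent."""
    out = 1.0
    for c in confidences:
        out *= c
    return out

def frechet_lower_bound(confidences: Sequence[float]) -> float:
    """Worst-case bound: P(A1 and ... and An) >= max(0, sum(c_i) - (n - 1))."""
    return max(0.0, sum(confidences) - (len(confidences) - 1))

def compose_monitors(monitors: Sequence[Monitor],
                     compose: Callable[[Sequence[float]], float]) -> Monitor:
    """Monitor for the whole assumption formula, built from per-assumption monitors."""
    return lambda obs: compose([m(obs) for m in monitors])

# Hypothetical calibrated monitors for two verification assumptions
# (e.g., an obstacle-distance bound and a sensor-noise bound); the
# returned probabilities are placeholders.
monitor_a1: Monitor = lambda obs: 0.95
monitor_a2: Monitor = lambda obs: 0.90

safety_confidence = compose_monitors([monitor_a1, monitor_a2], frechet_lower_bound)
print(safety_confidence({}))  # 0.85: conservative confidence that the guarantee applies
```

A threshold on the composed confidence could then be used to predict that the verified guarantee may no longer apply; how calibrated or conservative such a composition is, given the logical formula over assumptions, is what the paper's theoretical bounds characterize.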
Related papers
- Know Where You're Uncertain When Planning with Multimodal Foundation Models: A Formal Framework [54.40508478482667]
We present a comprehensive framework to disentangle, quantify, and mitigate uncertainty in perception and plan generation.
We propose methods tailored to the unique properties of perception and decision-making.
We show that our uncertainty disentanglement framework reduces variability by up to 40% and enhances task success rates by 5% compared to baselines.
arXiv Detail & Related papers (2024-11-03T17:32:00Z)
- Provably Reliable Conformal Prediction Sets in the Presence of Data Poisoning [53.42244686183879]
Conformal prediction provides model-agnostic and distribution-free uncertainty quantification.
Yet, conformal prediction is not reliable under poisoning attacks where adversaries manipulate both training and calibration data.
We propose reliable prediction sets (RPS): the first efficient method for constructing conformal prediction sets with provable reliability guarantees under poisoning.
arXiv Detail & Related papers (2024-10-13T15:37:11Z) - Safety Margins for Reinforcement Learning [53.10194953873209]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z) - Confident Object Detection via Conformal Prediction and Conformal Risk
Control: an Application to Railway Signaling [0.0]
We demonstrate the use of the conformal prediction framework to construct reliable predictors for detecting railway signals.
Our approach is based on a novel dataset that includes images taken from the perspective of a train operator and state-of-the-art object detectors.
arXiv Detail & Related papers (2023-04-12T08:10:13Z) - Unifying Evaluation of Machine Learning Safety Monitors [0.0]
Runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operations.
This paper introduces three unified safety-oriented metrics, including the safety benefits of the monitor (Safety Gain) and the remaining safety gaps after using it (Residual Hazard).
Three use-cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics.
arXiv Detail & Related papers (2022-08-31T07:17:42Z) - Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z) - Demonstrating Software Reliability using Possibly Correlated Tests:
Insights from a Conservative Bayesian Approach [2.152298082788376]
We formalise informal notions of "doubting" that the executions are independent.
We develop techniques that reveal the extent to which independence assumptions can undermine conservatism in assessments.
arXiv Detail & Related papers (2022-08-16T20:27:47Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Trust but Verify: Assigning Prediction Credibility by Counterfactual
Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z) - Synthesizing Safe Policies under Probabilistic Constraints with
Reinforcement Learning and Bayesian Model Checking [4.797216015572358]
We introduce a framework for specification of requirements for reinforcement learners in constrained settings.
We show that an agent's confidence in constraint satisfaction provides a useful signal for balancing optimization and safety in the learning process.