Confidence Composition for Monitors of Verification Assumptions
- URL: http://arxiv.org/abs/2111.03782v1
- Date: Wed, 3 Nov 2021 18:14:35 GMT
- Title: Confidence Composition for Monitors of Verification Assumptions
- Authors: Ivan Ruchkin, Matthew Cleaveland, Radoslav Ivanov, Pengyuan Lu, Taylor
Carpenter, Oleg Sokolsky, Insup Lee
- Abstract summary: We propose a three-step framework for monitoring the confidence in verification assumptions.
In two case studies, we demonstrate that the composed monitors improve over their constituents and successfully predict safety violations.
- Score: 3.500426151907193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Closed-loop verification of cyber-physical systems with neural network
controllers offers strong safety guarantees under certain assumptions. It is,
however, difficult to determine whether these guarantees apply at run time
because verification assumptions may be violated. To predict safety violations
in a verified system, we propose a three-step framework for monitoring the
confidence in verification assumptions. First, we represent the sufficient
condition for verified safety with a propositional logical formula over
assumptions. Second, we build calibrated confidence monitors that evaluate the
probability that each assumption holds. Third, we obtain the confidence in the
verification guarantees by composing the assumption monitors using a
composition function suitable for the logical formula. Our framework provides
theoretical bounds on the calibration and conservatism of compositional
monitors. In two case studies, we demonstrate that the composed monitors
improve over their constituents and successfully predict safety violations.
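The composition step can be illustrated with a minimal sketch. This is not the authors' implementation: it assumes two hypothetical assumption monitors whose calibrated outputs are combined by conjunction, and it uses two standard composition functions — the product (valid when the monitors are independent) and the Fréchet lower bound (conservative without an independence assumption).

```python
def compose_and_independent(confidences):
    """Conjunction under an independence assumption: product of confidences."""
    result = 1.0
    for p in confidences:
        result *= p
    return result

def compose_and_conservative(confidences):
    """Conjunction without independence: Fréchet lower bound
    P(A1 and ... and An) >= max(0, p1 + ... + pn - (n - 1))."""
    return max(0.0, sum(confidences) - (len(confidences) - 1))

# Hypothetical calibrated monitor outputs for two verification assumptions,
# e.g. "sensor noise is within bounds" and "the dynamics model holds".
monitor_outputs = [0.95, 0.90]

independent_confidence = compose_and_independent(monitor_outputs)    # ~0.855
conservative_confidence = compose_and_conservative(monitor_outputs)  # ~0.85
```

The conservative bound never exceeds the independent estimate here and holds regardless of how the assumption violations are correlated, which is the trade-off the paper's calibration-versus-conservatism bounds formalize.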
Related papers
- ConU: Conformal Uncertainty in Large Language Models with Correctness Coverage Guarantees [68.33498595506941]
Uncertainty quantification in natural language generation (NLG) tasks remains an open challenge.
This study investigates adapting conformal prediction (CP), which can convert any measure of uncertainty into rigorous theoretical guarantees.
We propose a sampling-based uncertainty measure leveraging self-consistency and develop a conformal uncertainty criterion.
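As a hedged illustration of the conformal prediction machinery the study builds on (not the study's own method), split conformal prediction turns any nonconformity score into prediction sets with coverage at least 1 - alpha; the calibration scores and alpha below are made-up example values.

```python
import math

def conformal_threshold(calibration_scores, alpha):
    """Finite-sample-corrected (1 - alpha) quantile of calibration scores."""
    n = len(calibration_scores)
    rank = math.ceil((n + 1) * (1 - alpha))  # rank among the n sorted scores
    return sorted(calibration_scores)[min(rank, n) - 1]

def prediction_set(scored_candidates, threshold):
    """Keep every candidate whose nonconformity score is within the threshold."""
    return [label for label, score in scored_candidates if score <= threshold]

# Hypothetical nonconformity scores from a held-out calibration set.
cal_scores = [0.12, 0.35, 0.08, 0.51, 0.22, 0.44, 0.19, 0.30, 0.27]
qhat = conformal_threshold(cal_scores, alpha=0.1)  # with n=9, this is the max score
candidates = [("yes", 0.20), ("no", 0.60)]
print(prediction_set(candidates, qhat))  # only "yes" falls under the threshold
```

The `(n + 1)` correction is what gives the finite-sample guarantee; with larger calibration sets the threshold approaches the plain empirical quantile.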
arXiv Detail & Related papers (2024-06-29T17:33:07Z)
- Safety Margins for Reinforcement Learning [74.13100479426424]
We show how to leverage proxy criticality metrics to generate safety margins.
We evaluate our approach on learned policies from APE-X and A3C within an Atari environment.
arXiv Detail & Related papers (2023-07-25T16:49:54Z)
- Confident Object Detection via Conformal Prediction and Conformal Risk Control: an Application to Railway Signaling [0.0]
We demonstrate the use of the conformal prediction framework to construct reliable predictors for detecting railway signals.
Our approach is based on a novel dataset that includes images taken from the perspective of a train operator and state-of-the-art object detectors.
arXiv Detail & Related papers (2023-04-12T08:10:13Z)
- Safe Perception-Based Control under Stochastic Sensor Uncertainty using Conformal Prediction [27.515056747751053]
We propose a perception-based control framework that quantifies estimation uncertainty of perception maps.
We also integrate these uncertainty representations into the control design.
We demonstrate the effectiveness of our proposed perception-based controller for a LiDAR-enabled F1/10th car.
arXiv Detail & Related papers (2023-04-01T01:45:53Z)
- Unifying Evaluation of Machine Learning Safety Monitors [0.0]
Runtime monitors have been developed to detect prediction errors and keep the system in a safe state during operations.
This paper introduces three unified safety-oriented metrics, including the safety benefits of the monitor (Safety Gain) and the remaining safety gaps after using it (Residual Hazard).
Three use-cases (classification, drone landing, and autonomous driving) are used to demonstrate how metrics from the literature can be expressed in terms of the proposed metrics.
arXiv Detail & Related papers (2022-08-31T07:17:42Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [63.18590014127461]
This paper introduces a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We study the feasibility of the resulting robust safety-critical controller.
We then use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Demonstrating Software Reliability using Possibly Correlated Tests: Insights from a Conservative Bayesian Approach [2.152298082788376]
We formalise informal notions of "doubting" that the executions are independent.
We develop techniques that reveal the extent to which independence assumptions can undermine conservatism in assessments.
arXiv Detail & Related papers (2022-08-16T20:27:47Z)
- Safe Reinforcement Learning via Confidence-Based Filters [78.39359694273575]
We develop a control-theoretic approach for certifying state safety constraints for nominal policies learned via standard reinforcement learning techniques.
We provide formal safety guarantees, and empirically demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2022-07-04T11:43:23Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Trust but Verify: Assigning Prediction Credibility by Counterfactual Constrained Learning [123.3472310767721]
Prediction credibility measures are fundamental in statistics and machine learning.
These measures should account for the wide variety of models used in practice.
The framework developed in this work expresses the credibility as a risk-fit trade-off.
arXiv Detail & Related papers (2020-11-24T19:52:38Z)
- Synthesizing Safe Policies under Probabilistic Constraints with Reinforcement Learning and Bayesian Model Checking [4.797216015572358]
We introduce a framework for specification of requirements for reinforcement learners in constrained settings.
We show that an agent's confidence in constraint satisfaction provides a useful signal for balancing optimization and safety in the learning process.
arXiv Detail & Related papers (2020-05-08T08:11:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.