Testing Rare Downstream Safety Violations via Upstream Adaptive Sampling of Perception Error Models
- URL: http://arxiv.org/abs/2209.09674v1
- Date: Tue, 20 Sep 2022 12:26:06 GMT
- Title: Testing Rare Downstream Safety Violations via Upstream Adaptive Sampling of Perception Error Models
- Authors: Craig Innes and Subramanian Ramamoorthy
- Abstract summary: This paper combines perception error models -- surrogates for a sensor-based detection system -- with state-dependent adaptive importance sampling.
Experiments with an autonomous braking system equipped with an RGB obstacle-detector show that our method can calculate accurate failure probabilities.
- Score: 20.815131169609316
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Testing black-box perceptual-control systems in simulation faces two
difficulties. Firstly, perceptual inputs in simulation lack the fidelity of
real-world sensor inputs. Secondly, for a reasonably accurate perception
system, encountering a rare failure trajectory may require running infeasibly
many simulations. This paper combines perception error models -- surrogates for
a sensor-based detection system -- with state-dependent adaptive importance
sampling. This allows us to efficiently assess the rare failure probabilities
for real-world perceptual control systems within simulation. Our experiments
with an autonomous braking system equipped with an RGB obstacle-detector show
that our method can calculate accurate failure probabilities with only a
modest number of simulations. Further, we show how the choice of safety metric
can influence the process of learning proposal distributions capable of
reliably sampling high-probability failures.
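The core idea of adaptive importance sampling for rare failures can be illustrated with a minimal, self-contained sketch. This is not the authors' implementation: it uses a 1-D toy problem (a standard-normal "perception error" whose exceedance of a threshold counts as a safety violation) and a cross-entropy-style update of a Gaussian proposal; all names, thresholds, and parameters here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

THRESHOLD = 4.0  # toy failure level: a standard-normal draw above this is a "violation"

def adaptive_is(n_rounds=8, n_samples=4000, elite_frac=0.1):
    """Cross-entropy-style adaptive importance sampling (1-D toy).

    The Gaussian proposal mean is shifted toward the failure region using
    elite samples; the variance is kept at 1 to avoid premature collapse.
    """
    mu = 0.0  # proposal starts at the nominal (true) distribution N(0, 1)
    for _ in range(n_rounds):
        x = rng.normal(mu, 1.0, n_samples)
        # Elite level: the (1 - elite_frac)-quantile of the current samples.
        gamma = np.quantile(x, 1.0 - elite_frac)
        if gamma >= THRESHOLD:
            break  # proposal already reaches the failure region
        mu = x[x >= gamma].mean()  # move the proposal toward the elites
    # Final estimate: reweight proposal samples by the likelihood ratio p/q.
    x = rng.normal(mu, 1.0, n_samples)
    log_w = -0.5 * x**2 + 0.5 * (x - mu) ** 2  # log N(x; 0, 1) - log N(x; mu, 1)
    return float(np.mean(np.exp(log_w) * (x > THRESHOLD)))

p_hat = adaptive_is()
# The true tail probability P(N(0,1) > 4) is about 3.2e-5; plain Monte Carlo
# would need millions of samples to see even a handful of such failures.
```

In the paper's setting the nominal distribution would be a learned perception error model conditioned on the simulator state, and the "failure" indicator would be a downstream safety metric evaluated on a full closed-loop trajectory, rather than a simple threshold.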
Related papers
- How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple, generic, and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z) - Bayesian Safety Validation for Failure Probability Estimation of Black-Box Systems [34.61865848439637]
Estimating the probability of failure is an important step in the certification of safety-critical systems.
This work frames the problem of black-box safety validation as a Bayesian optimization problem.
The algorithm is designed to search for failures, compute the most-likely failure, and estimate the failure probability over an operating domain.
arXiv Detail & Related papers (2023-05-03T22:22:48Z) - Interactive System-wise Anomaly Detection [66.3766756452743]
Anomaly detection plays a fundamental role in various applications.
It is challenging for existing methods to handle the scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z) - Simulation-to-reality UAV Fault Diagnosis with Deep Learning [20.182411473467656]
We propose a deep learning model that addresses the simulation-to-reality gap in fault diagnosis of quadrotors.
Our proposed approach achieves an accuracy of 96% in detecting propeller faults.
This is the first reliable and efficient method for simulation-to-reality fault diagnosis of quadrotor propellers.
arXiv Detail & Related papers (2023-02-09T02:37:48Z) - Discovering Closed-Loop Failures of Vision-Based Controllers via Reachability Analysis [7.679478106628509]
Machine learning driven image-based controllers allow robotic systems to take intelligent actions based on the visual feedback from their environment.
Existing methods leverage simulation-based testing (or falsification) to find the failures of vision-based controllers.
In this work, we cast the problem of finding closed-loop vision failures as a Hamilton-Jacobi (HJ) reachability problem.
arXiv Detail & Related papers (2022-11-04T20:22:58Z) - Validation of Composite Systems by Discrepancy Propagation [4.588222946914529]
We present a validation method that propagates bounds on distributional discrepancy measures through a composite system.
We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects.
arXiv Detail & Related papers (2022-10-21T15:51:54Z) - Fast and Accurate Error Simulation for CNNs against Soft Errors [64.54260986994163]
We present a framework for the reliability analysis of Convolutional Neural Networks (CNNs) via an error simulation engine.
These error models are defined based on the corruption patterns of the output of the CNN operators induced by faults.
We show that our methodology achieves about 99% accuracy of the fault effects w.r.t. SASSIFI, and a speedup ranging from 44x up to 63x w.r.t. FI, which only implements a limited set of error models.
arXiv Detail & Related papers (2022-06-04T19:45:02Z) - Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations [50.37808220291108]
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z) - Generating and Characterizing Scenarios for Safety Testing of Autonomous Vehicles [86.9067793493874]
We propose efficient mechanisms to characterize and generate testing scenarios using a state-of-the-art driving simulator.
We use our method to characterize real driving data from the Next Generation Simulation (NGSIM) project.
We rank the scenarios by defining metrics based on the complexity of avoiding accidents and provide insights into how the AV could have minimized the probability of incurring an accident.
arXiv Detail & Related papers (2021-03-12T17:00:23Z) - Testing the Safety of Self-driving Vehicles by Simulating Perception and Prediction [88.0416857308144]
We propose an alternative to sensor simulation, as sensor simulation is expensive and has large domain gaps.
We directly simulate the outputs of the self-driving vehicle's perception and prediction system, enabling realistic motion planning testing.
arXiv Detail & Related papers (2020-08-13T17:20:02Z) - DISCO: Double Likelihood-free Inference Stochastic Control [29.84276469617019]
We propose to leverage the power of modern simulators and recent techniques in Bayesian statistics for likelihood-free inference.
The posterior distribution over simulation parameters is propagated through a potentially non-analytical model of the system.
Experiments show that the controller proposed attained superior performance and robustness on classical control and robotics tasks.
arXiv Detail & Related papers (2020-02-18T05:29:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.