Falsification of Cyber-Physical Systems using Bayesian Optimization
- URL: http://arxiv.org/abs/2209.06735v1
- Date: Wed, 14 Sep 2022 15:52:19 GMT
- Title: Falsification of Cyber-Physical Systems using Bayesian Optimization
- Authors: Zahra Ramezani, Kenan Šehic, Luigi Nardi, Knut Åkesson
- Abstract summary: Simulation-based falsification of CPSs is a practical testing method that can be used to raise confidence in the correctness of the system.
As each simulation is typically computationally intensive, an important step is to reduce the number of simulations needed to falsify a specification.
We study Bayesian optimization (BO), a sample-efficient method that learns a surrogate model that describes the relationship between the parametrization of possible input signals and the evaluation of the specification.
- Score: 0.5407319151576264
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Cyber-physical systems (CPSs) are usually complex and safety-critical; hence,
it is difficult and important to guarantee that the system's requirements,
i.e., specifications, are fulfilled. Simulation-based falsification of CPSs is
a practical testing method that can be used to raise confidence in the
correctness of the system by only requiring that the system under test can be
simulated. As each simulation is typically computationally intensive, an
important step is to reduce the number of simulations needed to falsify a
specification. We study Bayesian optimization (BO), a sample-efficient method
that learns a surrogate model that describes the relationship between the
parametrization of possible input signals and the evaluation of the
specification.
In this paper, we improve BO-based falsification in two ways: first, by adopting
two prominent BO methods, one that fits local surrogate models and one that
exploits the user's prior knowledge; second, by addressing the formulation of
acquisition functions for falsification. Benchmark evaluation shows that local
surrogate models significantly improve BO's ability to falsify benchmark
examples that were previously hard to falsify. Using prior knowledge
in the falsification process is shown to be particularly important when the
simulation budget is limited. For some of the benchmark problems, the choice of
acquisition function clearly affects the number of simulations needed for
successful falsification.
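To make the surrogate-plus-acquisition idea concrete, the sketch below shows a generic BO falsification loop: a Gaussian-process surrogate is fit to the observed robustness values, and an expected-improvement acquisition (in minimization form) proposes the next input-signal parametrization to simulate. This is a minimal illustration under assumed settings, not the paper's implementation; `simulate_and_evaluate`, the parametrization dimension, the box bounds, and the simulation budget are hypothetical placeholders.

```python
# Minimal sketch of BO-based falsification (assumptions: the robustness
# oracle simulate_and_evaluate, dimension, bounds, and budget are hypothetical).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)
dim, bounds = 3, (0.0, 1.0)          # input-signal parametrization dimension and box bounds (assumed)

def simulate_and_evaluate(x):
    # Placeholder for the expensive simulation plus quantitative specification
    # evaluation; a negative robustness value means the specification is violated.
    return float(np.sum((x - 0.7) ** 2) - 0.05)

# Initial design: a handful of random parametrizations
X = rng.uniform(*bounds, size=(5, dim))
y = np.array([simulate_and_evaluate(x) for x in X])

for _ in range(45):                   # simulation budget (assumed)
    if y.min() < 0:                   # specification falsified, stop early
        break
    # Fit a GP surrogate mapping parametrizations to robustness values
    gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True).fit(X, y)
    cand = rng.uniform(*bounds, size=(2048, dim))
    mu, sigma = gp.predict(cand, return_std=True)
    # Expected improvement below the current minimum robustness (minimization form)
    imp = y.min() - mu
    z = imp / np.maximum(sigma, 1e-9)
    ei = imp * norm.cdf(z) + sigma * norm.pdf(z)
    x_next = cand[np.argmax(ei)]
    X = np.vstack([X, x_next])
    y = np.append(y, simulate_and_evaluate(x_next))

print("min robustness:", y.min(), "falsified:", y.min() < 0)
```

The loop terminates as soon as the minimum observed robustness drops below zero, i.e., a counterexample has been found; local surrogate models or user priors, as studied in the paper, would replace the global GP fit or bias the candidate sampling, respectively.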
Related papers
- Review, Refine, Repeat: Understanding Iterative Decoding of AI Agents with Dynamic Evaluation and Selection [71.92083784393418]
Inference-time methods such as Best-of-N (BON) sampling offer a simple yet effective alternative to improve performance.
We propose Iterative Agent Decoding (IAD) which combines iterative refinement with dynamic candidate evaluation and selection guided by a verifier.
arXiv Detail & Related papers (2025-04-02T17:40:47Z) - Synergistic Development of Perovskite Memristors and Algorithms for Robust Analog Computing [53.77822620185878]
We propose a synergistic methodology to concurrently optimize perovskite memristor fabrication and develop robust analog DNNs.
We develop "BayesMulti", a training strategy utilizing BO-guided noise injection to improve the resistance of analog DNNs to memristor imperfections.
Our integrated approach enables use of analog computing in much deeper and wider networks, achieving up to 100-fold improvements.
arXiv Detail & Related papers (2024-12-03T19:20:08Z) - Autoformulation of Mathematical Optimization Models Using LLMs [50.030647274271516]
We develop an automated approach to creating optimization models from natural language descriptions for commercial solvers.
We identify the three core challenges of autoformulation: (1) defining the vast, problem-dependent hypothesis space, (2) efficiently searching this space under uncertainty, and (3) evaluating formulation correctness.
arXiv Detail & Related papers (2024-11-03T20:41:38Z) - Optimizing Falsification for Learning-Based Control Systems: A Multi-Fidelity Bayesian Approach [40.58350379106314]
The falsification problem involves the identification of counterexamples that violate system safety requirements.
We propose a multi-fidelity Bayesian optimization falsification framework that harnesses simulators with varying levels of accuracy.
arXiv Detail & Related papers (2024-09-12T14:51:03Z) - Bisimulation Learning [55.859538562698496]
We compute finite bisimulations of state transition systems with large, possibly infinite state space.
Our technique yields faster verification results than alternative state-of-the-art tools in practice.
arXiv Detail & Related papers (2024-05-24T17:11:27Z) - Calibrating Bayesian Learning via Regularization, Confidence Minimization, and Selective Inference [37.82259435084825]
A well-calibrated AI model must correctly report its accuracy on in-distribution (ID) inputs, while also enabling the detection of out-of-distribution (OOD) inputs.
This paper proposes an extension of variational inference (VI)-based Bayesian learning that integrates calibration regularization for improved ID performance.
arXiv Detail & Related papers (2024-04-17T13:08:26Z) - Requirement falsification for cyber-physical systems using generative
models [1.90365714903665]
OGAN can find inputs that are counterexamples for the safety of a system revealing design, software, or hardware defects before the system is taken into operation.
OGAN executes tests atomically and does not require any previous model of the system under test.
OGAN can be applied to new systems with little effort, has few requirements for the system under test, and exhibits state-of-the-art CPS falsification efficiency and effectiveness.
arXiv Detail & Related papers (2023-10-31T14:32:54Z) - Calibrating Neural Simulation-Based Inference with Differentiable
Coverage Probability [50.44439018155837]
We propose to include a calibration term directly into the training objective of the neural model.
By introducing a relaxation of the classical formulation of calibration error we enable end-to-end backpropagation.
It is directly applicable to existing computational pipelines allowing reliable black-box posterior inference.
arXiv Detail & Related papers (2023-10-20T10:20:45Z) - Zero-Shot Sharpness-Aware Quantization for Pre-trained Language Models [88.80146574509195]
Quantization is a promising approach for reducing memory overhead and accelerating inference.
We propose a novel zero-shot sharpness-aware quantization (ZSAQ) framework for the zero-shot quantization of various PLMs.
arXiv Detail & Related papers (2023-10-20T07:09:56Z) - Physics-Driven ML-Based Modelling for Correcting Inverse Estimation [6.018296524383859]
This work focuses on detecting and correcting failed state estimations before adopting them in SAE inverse problems.
We propose a novel approach, GEESE, to correct it through optimization, aiming at delivering both low error and high efficiency.
GEESE is tested on three real-world SAE inverse problems and compared to a number of state-of-the-art optimization/search approaches.
arXiv Detail & Related papers (2023-09-25T09:37:19Z) - Simulation-to-reality UAV Fault Diagnosis with Deep Learning [20.182411473467656]
We propose a deep learning model that addresses the simulation-to-reality gap in fault diagnosis of quadrotors.
Our proposed approach achieves an accuracy of 96% in detecting propeller faults.
This is the first reliable and efficient method for simulation-to-reality fault diagnosis of quadrotor propellers.
arXiv Detail & Related papers (2023-02-09T02:37:48Z) - Falsification of Learning-Based Controllers through Multi-Fidelity
Bayesian Optimization [34.71695000650056]
We propose a multi-fidelity falsification framework using Bayesian optimization.
This method allows us to automatically switch between inexpensive, inaccurate information from a low-fidelity simulator and expensive, accurate information from a high-fidelity simulator.
arXiv Detail & Related papers (2022-12-28T22:48:42Z) - Exploring validation metrics for offline model-based optimisation with
diffusion models [50.404829846182764]
In model-based optimisation (MBO) we are interested in using machine learning to design candidates that maximise some measure of reward with respect to a black box function called the (ground truth) oracle.
While an approximation to the ground-truth oracle can be trained and used in place of it during model validation to measure the mean reward over generated candidates, the evaluation is approximate and vulnerable to adversarial examples.
This is encapsulated under our proposed evaluation framework which is also designed to measure extrapolation.
arXiv Detail & Related papers (2022-11-19T16:57:37Z) - A Stable, Fast, and Fully Automatic Learning Algorithm for Predictive
Coding Networks [65.34977803841007]
Predictive coding networks are neuroscience-inspired models with roots in both Bayesian statistics and neuroscience.
We show how simply changing the temporal scheduling of the update rule for the synaptic weights leads to an algorithm that is much more efficient and stable than the original one.
arXiv Detail & Related papers (2022-11-16T00:11:04Z) - Validation of Composite Systems by Discrepancy Propagation [4.588222946914529]
We present a validation method that propagates bounds on distributional discrepancy measures through a composite system.
We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects.
arXiv Detail & Related papers (2022-10-21T15:51:54Z) - Automatic Extrinsic Calibration Method for LiDAR and Camera Sensor
Setups [68.8204255655161]
We present a method to calibrate the parameters of any pair of sensors involving LiDARs, monocular or stereo cameras.
The proposed approach can handle devices with very different resolutions and poses, as usually found in vehicle setups.
arXiv Detail & Related papers (2021-01-12T12:02:26Z) - Pre-training Is (Almost) All You Need: An Application to Commonsense
Reasoning [61.32992639292889]
Fine-tuning of pre-trained transformer models has become the standard approach for solving common NLP tasks.
We introduce a new scoring method that casts a plausibility ranking task in a full-text format.
We show that our method provides a much more stable training phase across random restarts.
arXiv Detail & Related papers (2020-04-29T10:54:40Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.