Learning test generators for cyber-physical systems
- URL: http://arxiv.org/abs/2410.03202v1
- Date: Fri, 4 Oct 2024 07:34:02 GMT
- Title: Learning test generators for cyber-physical systems
- Authors: Jarkko Peltomäki, Ivan Porres
- Abstract summary: Black-box runtime verification methods for cyber-physical systems can be used to discover errors in systems whose inputs and outputs are expressed as signals over time.
Existing methods, such as requirement falsification, often focus on finding a single input that is a counterexample to system correctness.
We show how to create test generators that can produce multiple and diverse counterexamples for a single requirement.
- Score: 2.4171019220503402
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Black-box runtime verification methods for cyber-physical systems can be used to discover errors in systems whose inputs and outputs are expressed as signals over time and whose correctness requirements are specified in a temporal logic. Existing methods, such as requirement falsification, often focus on finding a single input that is a counterexample to system correctness. In this paper, we study how to create test generators that can produce multiple and diverse counterexamples for a single requirement. Several counterexamples expose system failures under varying input conditions and support the root cause analysis of the faults. We present the WOGAN algorithm to create such test generators automatically. The algorithm works by iteratively training a Wasserstein generative adversarial network whose target distribution is the uniform distribution on the set of counterexamples. The trained generative models act as test generators for runtime verification, and the training is performed online without the need for a previous model or dataset. We also propose criteria to evaluate such test generators. We evaluate the trained generators on several well-known problems, including the ARCH-COMP falsification benchmarks. Our experimental results indicate that generators trained by the WOGAN algorithm are as effective as state-of-the-art requirement falsification algorithms while producing tests that are as diverse as a sample from uniform random sampling. We conclude that WOGAN is a viable method to produce test generators automatically and that these test generators can generate multiple and diverse counterexamples for the runtime verification of cyber-physical systems.
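As a rough illustration of the training loop sketched in the abstract, the following is a minimal, hypothetical PyTorch version of an online WGAN-based test generator: candidate tests are executed against a toy system under test, failing tests are collected, and the generator is trained to approximate the distribution of the counterexamples found so far. The placeholder system, robustness function, network sizes, and sampling strategy are assumptions made for this example, not the paper's actual WOGAN implementation.
```python
# Hypothetical sketch of an online WGAN-based test generator in the spirit of
# WOGAN: the generator is trained during testing to approximate the set of
# counterexamples found so far. The SUT and robustness function are toy
# placeholders, not the benchmarks or the exact architecture of the paper.
import torch
import torch.nn as nn

TEST_DIM = 4        # dimension of an input test (e.g., parameters of a signal)
LATENT_DIM = 8      # generator latent dimension

def run_sut(test: torch.Tensor) -> float:
    """Toy system under test: returns the robustness of a requirement.
    Negative robustness means the test is a counterexample (a failure)."""
    return float(1.0 - test.abs().sum())   # placeholder fitness/robustness

generator = nn.Sequential(nn.Linear(LATENT_DIM, 32), nn.ReLU(),
                          nn.Linear(32, TEST_DIM), nn.Tanh())
critic = nn.Sequential(nn.Linear(TEST_DIM, 32), nn.ReLU(),
                       nn.Linear(32, 1))
g_opt = torch.optim.RMSprop(generator.parameters(), lr=5e-5)
c_opt = torch.optim.RMSprop(critic.parameters(), lr=5e-5)

counterexamples = []                       # online "dataset": failures found so far

for budget in range(200):                  # total test execution budget
    # Propose a test: mostly from the generator, sometimes uniformly at random
    # to keep exploring the input space.
    if counterexamples and torch.rand(1).item() > 0.2:
        with torch.no_grad():
            test = generator(torch.randn(1, LATENT_DIM)).squeeze(0)
    else:
        test = torch.rand(TEST_DIM) * 2 - 1
    if run_sut(test) < 0:                  # execute the SUT; keep failures only
        counterexamples.append(test.detach())

    if len(counterexamples) < 8:
        continue                           # not enough data to train on yet

    real = torch.stack(counterexamples)    # empirical set of counterexamples
    for _ in range(5):                     # critic updates per generator update
        fake = generator(torch.randn(real.size(0), LATENT_DIM)).detach()
        c_loss = critic(fake).mean() - critic(real).mean()
        c_opt.zero_grad(); c_loss.backward(); c_opt.step()
        for p in critic.parameters():      # weight clipping as in the original WGAN
            p.data.clamp_(-0.01, 0.01)
    fake = generator(torch.randn(real.size(0), LATENT_DIM))
    g_loss = -critic(fake).mean()
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()

print(f"found {len(counterexamples)} counterexamples")
```
After the budget is exhausted, sampling the generator repeatedly would be the intended way to obtain multiple, diverse counterexamples for the same requirement.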
Related papers
- GE-AdvGAN: Improving the transferability of adversarial samples by gradient editing-based adversarial generative model [69.71629949747884]
Adversarial generative models, such as Generative Adversarial Networks (GANs), are widely applied for generating various types of data.
In this work, we propose a novel algorithm named GE-AdvGAN to enhance the transferability of adversarial samples.
arXiv Detail & Related papers (2024-01-11T16:43:16Z)
- Requirement falsification for cyber-physical systems using generative models [1.90365714903665]
OGAN can find inputs that are counterexamples for the safety of a system, revealing design, software, or hardware defects before the system is taken into operation.
OGAN executes tests atomically and does not require any previous model of the system under test.
OGAN can be applied to new systems with little effort, has few requirements for the system under test, and exhibits state-of-the-art CPS falsification efficiency and effectiveness.
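To make concrete what falsification and counterexamples mean in this setting, here is a small toy sketch (not OGAN itself): the requirement "the output signal always stays below 100" is translated into a robustness value, and a naive random search looks for an input whose robustness is negative. The simulator, signal length, and input ranges are invented for illustration.
```python
# Minimal illustration of requirement falsification: a requirement is turned
# into a robustness value, and a test is a counterexample whenever that
# robustness is negative. The "system" below is a made-up stand-in for a
# real simulator, not a model from the paper.
import numpy as np

rng = np.random.default_rng(1)

def simulate(u: np.ndarray) -> np.ndarray:
    """Toy SUT: maps an input signal u(t) to an output signal y(t)."""
    return 80.0 + np.cumsum(u) * 0.5          # placeholder dynamics

def robustness(y: np.ndarray, threshold: float = 100.0) -> float:
    """Robustness of 'always y(t) < threshold'; negative => requirement violated."""
    return float(np.min(threshold - y))

best = None
for _ in range(1000):                          # naive random-search falsifier
    u = rng.uniform(-5.0, 5.0, size=50)        # candidate input signal
    rob = robustness(simulate(u))
    if best is None or rob < best[0]:
        best = (rob, u)
    if rob < 0:                                # counterexample found
        break

print("best robustness found:", best[0])
```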
arXiv Detail & Related papers (2023-10-31T14:32:54Z)
- Wasserstein Generative Adversarial Networks for Online Test Generation for Cyber Physical Systems [0.0]
We propose a novel online test generation algorithm WOGAN based on Wasserstein Generative Adversarial Networks.
WOGAN is a general-purpose black-box test generator applicable to any system under test having a fitness function for determining failing tests.
arXiv Detail & Related papers (2022-05-23T05:58:28Z)
- Learn then Test: Calibrating Predictive Algorithms to Achieve Risk Control [67.52000805944924]
Learn then Test (LTT) is a framework for calibrating machine learning models.
Our main insight is to reframe the risk-control problem as multiple hypothesis testing.
We use our framework to provide new calibration methods for several core machine learning tasks with detailed worked examples in computer vision.
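A rough sketch of the "risk control as multiple hypothesis testing" idea: each candidate threshold is a null hypothesis "the risk exceeds alpha", p-values come from a Hoeffding-style bound, and a Bonferroni correction controls the family-wise error. The toy loss and threshold grid below are assumptions made for this example, not the paper's experiments.
```python
# Hypothetical sketch of risk control via multiple hypothesis testing: each
# candidate threshold lambda is a null hypothesis "risk(lambda) > alpha";
# thresholds whose null is rejected (after a Bonferroni correction) are
# certified as risk-controlling.
import numpy as np

rng = np.random.default_rng(0)
alpha, delta = 0.1, 0.05                 # target risk level and error probability
lambdas = np.linspace(0.0, 1.0, 21)      # candidate thresholds to calibrate over

# Toy calibration data: per-example losses in {0, 1} that decrease as the
# threshold grows (stands in for, e.g., a selective classifier's error).
n = 500
scores = rng.uniform(size=n)

def loss(lam):                           # per-example 0/1 loss at threshold lam
    return (scores > lam).astype(float)

def hoeffding_p_value(mean_loss, n, alpha):
    # P-value for H0: true risk > alpha, from Hoeffding's inequality.
    if mean_loss >= alpha:
        return 1.0
    return float(np.exp(-2 * n * (alpha - mean_loss) ** 2))

p_values = np.array([hoeffding_p_value(loss(lam).mean(), n, alpha)
                     for lam in lambdas])
valid = lambdas[p_values <= delta / len(lambdas)]   # Bonferroni-corrected rejections
print("risk-controlling thresholds:", valid)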
arXiv Detail & Related papers (2021-10-03T17:42:03Z)
- Online GANs for Automatic Performance Testing [0.10312968200748115]
We present a novel algorithm for automatic performance testing that uses an online variant of the Generative Adversarial Network (GAN).
The proposed approach does not require a prior training set or model of the system under test.
We consider that the presented algorithm serves as a proof of concept and we hope that it can spark a research discussion on the application of GANs to test generation.
arXiv Detail & Related papers (2021-04-21T06:03:27Z)
- A Novel Anomaly Detection Algorithm for Hybrid Production Systems based on Deep Learning and Timed Automata [73.38551379469533]
DAD: DeepAnomalyDetection is a new approach for automatic model learning and anomaly detection in hybrid production systems.
It combines deep learning and timed automata for creating a behavioral model from observations.
The algorithm has been applied to a few data sets, including two from real systems, and has shown promising results.
arXiv Detail & Related papers (2020-10-29T08:27:43Z)
- Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set, which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z)
- Unsupervised Controllable Generation with Self-Training [90.04287577605723]
Controllable generation with GANs remains a challenging research problem.
We propose an unsupervised framework to learn a distribution of latent codes that control the generator through self-training.
Our framework exhibits better disentanglement compared to other variants such as the variational autoencoder.
arXiv Detail & Related papers (2020-07-17T21:50:35Z)
- G2D: Generate to Detect Anomaly [10.977404378308817]
We learn two deep neural networks (generator and discriminator) in a GAN-style setting on merely the normal samples.
In the training phase, when the generator fails to produce normal data, it can be considered an irregularity generator.
We train a binary classifier on the generated anomalous samples along with the normal instances in order to be capable of detecting irregularities.
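A loose sketch of that pipeline: a GAN is trained only on (toy) normal data, samples from the not-yet-converged generator are kept as pseudo-anomalies, and a binary classifier is then fit on normal versus generated samples. The data, network sizes, and the rule for harvesting generator samples below are illustrative assumptions, not the paper's exact G2D setup.
```python
# Loose sketch of the G2D idea as summarized above: train a GAN on normal data
# only, keep early (imperfect) generator outputs as pseudo-anomalies, then
# train a binary classifier on normal vs. generated samples.
import torch
import torch.nn as nn

DIM, LATENT = 2, 4
normal = torch.randn(512, DIM) * 0.3 + 1.0          # toy "normal" cluster

G = nn.Sequential(nn.Linear(LATENT, 32), nn.ReLU(), nn.Linear(32, DIM))
D = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
g_opt = torch.optim.Adam(G.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

pseudo_anomalies = []
for step in range(200):                              # standard GAN updates on normal data
    z = torch.randn(64, LATENT)
    fake = G(z)
    real_batch = normal[torch.randint(0, 512, (64,))]
    d_loss = (bce(D(real_batch), torch.ones(64, 1))
              + bce(D(fake.detach()), torch.zeros(64, 1)))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    g_loss = bce(D(fake), torch.ones(64, 1))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    if step < 100:                                   # early, imperfect generator outputs
        pseudo_anomalies.append(fake.detach())       # ...are kept as irregular samples

# Binary classifier: normal (label 0) vs. generated pseudo-anomalies (label 1).
anomalies = torch.cat(pseudo_anomalies)[:512]
X = torch.cat([normal, anomalies])
y = torch.cat([torch.zeros(512, 1), torch.ones(512, 1)])
clf = nn.Sequential(nn.Linear(DIM, 32), nn.ReLU(), nn.Linear(32, 1))
opt = torch.optim.Adam(clf.parameters(), lr=1e-3)
for _ in range(200):
    loss = bce(clf(X), y)
    opt.zero_grad(); loss.backward(); opt.step()
```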
arXiv Detail & Related papers (2020-03-24T08:26:58Z)
- Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z)