Discovering Decision Manifolds to Assure Trusted Autonomous Systems
- URL: http://arxiv.org/abs/2402.07791v2
- Date: Mon, 26 Feb 2024 21:43:33 GMT
- Title: Discovering Decision Manifolds to Assure Trusted Autonomous Systems
- Authors: Matthew Litton, Doron Drusinsky, and James Bret Michael
- Abstract summary: We propose an optimization-based search technique for capturing the range of correct and incorrect responses a system could exhibit.
The resulting manifold between correct and incorrect behavior provides a more detailed understanding of system reliability than traditional testing or Monte Carlo simulations.
In this proof-of-concept, we apply our method to a software-in-the-loop evaluation of an autonomous vehicle.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Developing and fielding complex systems requires proof that they are reliably
correct with respect to their design and operating requirements. Especially for
autonomous systems which exhibit unanticipated emergent behavior, fully
enumerating the range of possible correct and incorrect behaviors is
intractable. Therefore, we propose an optimization-based search technique for
generating high-quality, high-variance, and non-trivial data which captures the
range of correct and incorrect responses a system could exhibit. This manifold
between desired and undesired behavior provides a more detailed understanding
of system reliability than traditional testing or Monte Carlo simulations.
After discovering data points along the manifold, we apply machine learning
techniques to quantify the decision manifold's underlying mathematical
function. Such models serve as correctness properties which can be utilized to
enable both verification during development and testing, as well as continuous
assurance during operation, even amidst system adaptations and dynamic
operating environments. This method can be applied in combination with a
simulator in order to provide evidence of dependability to system designers and
users, with the ultimate aim of establishing trust in the deployment of complex
systems. In this proof-of-concept, we apply our method to a
software-in-the-loop evaluation of an autonomous vehicle.
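The search-and-model pipeline the abstract describes, which involves discovering points on the boundary between correct and incorrect behavior and then fitting a model to that boundary, could be sketched roughly as follows. The toy "system under test", the bisection routine, and the quadratic surrogate are illustrative assumptions for this sketch, not the paper's actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def system_correct(x):
    """Toy stand-in for a software-in-the-loop evaluation: behavior is
    'correct' inside the unit disk (an assumption for this sketch)."""
    return x[0] ** 2 + x[1] ** 2 <= 1.0

def boundary_point(x_ok, x_bad, tol=1e-6):
    """Bisect between a correct and an incorrect input to locate a point
    on the decision manifold (the correct/incorrect boundary)."""
    while np.linalg.norm(x_bad - x_ok) > tol:
        mid = 0.5 * (x_ok + x_bad)
        if system_correct(mid):
            x_ok = mid
        else:
            x_bad = mid
    return 0.5 * (x_ok + x_bad)

# Discover high-variance manifold points from random correct/incorrect pairs.
points = []
while len(points) < 200:
    a, b = rng.uniform(-2, 2, 2), rng.uniform(-2, 2, 2)
    if system_correct(a) and not system_correct(b):
        points.append(boundary_point(a, b))
points = np.array(points)

# Quantify the manifold with a simple surrogate model: here, a least-squares
# fit of the boundary radius, usable as a correctness property at runtime.
r2 = np.mean(np.sum(points ** 2, axis=1))
print(f"estimated boundary radius: {np.sqrt(r2):.3f}")  # close to 1.0 for this toy system
```

In practice the boundary search would drive a simulator rather than a closed-form predicate, and the surrogate would be a learned classifier rather than a fitted radius, but the structure (discover boundary points, then model them) is the same.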
Related papers
- Data-Driven Distributionally Robust Safety Verification Using Barrier Certificates and Conditional Mean Embeddings
In pursuit of scalable formal verification algorithms that do not shift the problem to unrealistic assumptions, we employ the concept of barrier certificates.
We show how to solve the resulting program efficiently using sum-of-squares optimization and a Gaussian process envelope.
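A barrier certificate separates a system's reachable states from its unsafe states. The paper solves for certificates via sum-of-squares optimization; as a much simpler illustration of what a certificate must satisfy, the sketch below only *checks* the barrier conditions by sampling, for an assumed candidate certificate on a toy stable linear system (all sets and dynamics here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)

A = 0.9 * np.eye(2)                      # toy stable discrete-time dynamics: x+ = A x
barrier = lambda x: float(x @ x) - 1.0   # candidate certificate B(x) (assumed, not learned)

def sample_annulus(r_lo, r_hi, n):
    """Samples with uniform angle and radius in [r_lo, r_hi]."""
    theta = rng.uniform(0.0, 2.0 * np.pi, n)
    r = rng.uniform(r_lo, r_hi, n)
    return np.stack([r * np.cos(theta), r * np.sin(theta)], axis=1)

init = sample_annulus(0.0, 0.5, 1000)    # initial set: disk of radius 0.5
unsafe = sample_annulus(2.0, 3.0, 1000)  # unsafe set: annulus [2, 3]
domain = sample_annulus(0.0, 3.0, 1000)  # working domain

ok_init = all(barrier(x) <= 0 for x in init)                 # B <= 0 on initial set
ok_unsafe = all(barrier(x) > 0 for x in unsafe)              # B > 0 on unsafe set
ok_step = all(barrier(A @ x) <= barrier(x) for x in domain)  # B non-increasing along dynamics
print(ok_init, ok_unsafe, ok_step)
```

A sampling check like this gives evidence, not proof; the sum-of-squares formulation in the paper turns the same three conditions into constraints that can be certified exactly.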
arXiv Detail & Related papers (2024-03-15T17:32:02Z)
- Towards Scenario-based Safety Validation for Autonomous Trains with Deep Generative Models
We report our practical experiences regarding the utility of data simulation with deep generative models for scenario-based validation.
We demonstrate the capabilities of semantically editing railway scenes with deep generative models to make a limited amount of test data more representative.
arXiv Detail & Related papers (2023-10-16T17:55:14Z)
- Interactive System-wise Anomaly Detection
Anomaly detection plays a fundamental role in various applications.
Existing methods struggle in scenarios where the instances are systems whose characteristics are not readily observed as data.
We develop an end-to-end approach which includes an encoder-decoder module that learns system embeddings.
arXiv Detail & Related papers (2023-04-21T02:20:24Z)
- Validation of Composite Systems by Discrepancy Propagation
We present a validation method that propagates bounds on distributional discrepancy measures through a composite system.
We demonstrate that our propagation method yields valid and useful bounds for composite systems exhibiting a variety of realistic effects.
arXiv Detail & Related papers (2022-10-21T15:51:54Z)
- HyperImpute: Generalized Iterative Imputation with Automatic Model Selection
We propose a generalized iterative imputation framework for adaptively and automatically configuring column-wise models.
We provide a concrete implementation with out-of-the-box learners, simulators, and interfaces.
arXiv Detail & Related papers (2022-06-15T19:10:35Z)
- Learning Robust Output Control Barrier Functions from Safe Expert Demonstrations
This paper addresses learning safe output feedback control laws from partial observations of expert demonstrations.
We first propose robust output control barrier functions (ROCBFs) as a means to guarantee safety.
We then formulate an optimization problem to learn ROCBFs from expert demonstrations that exhibit safe system behavior.
arXiv Detail & Related papers (2021-11-18T23:21:00Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System
Recent advancements in predictive machine learning have led to its application in various manufacturing use cases.
Most research has focused on maximising predictive accuracy without addressing the associated uncertainty.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Probabilistic robust linear quadratic regulators with Gaussian processes
Probabilistic models such as Gaussian processes (GPs) are powerful tools to learn unknown dynamical systems from data for subsequent use in control design.
We present a novel controller synthesis for linearized GP dynamics that yields robust controllers with respect to a probabilistic stability margin.
arXiv Detail & Related papers (2021-05-17T08:36:18Z)
- Using Data Assimilation to Train a Hybrid Forecast System that Combines Machine-Learning and Knowledge-Based Components
We consider the problem of data-assisted forecasting of chaotic dynamical systems when the available data is noisy partial measurements.
We show that by using partial measurements of the state of the dynamical system, we can train a machine learning model to improve predictions made by an imperfect knowledge-based model.
arXiv Detail & Related papers (2021-02-15T19:56:48Z)
- Manifold for Machine Learning Assurance
We propose an analogous approach for machine-learning (ML) systems, using an ML technique that extracts a manifold from the high-dimensional training data that implicitly describes the required system.
It is then harnessed for a range of quality assurance tasks such as test adequacy measurement, test input generation, and runtime monitoring of the target ML system.
Preliminary experiments establish that the proposed manifold-based approach drives diversity in test data for adequacy measurement, yields fault-revealing yet realistic test cases for test generation, and provides an independent means to assess the trustability of the target system's output for runtime monitoring.
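The runtime-monitoring use of a learned manifold can be illustrated with a minimal sketch: extract a low-dimensional manifold from training data (here with PCA, one of several possible extraction techniques) and flag inputs whose reconstruction error is large, i.e. inputs far from the manifold. The synthetic data, the PCA choice, and the threshold are assumptions for this sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Training data lying near a 1-D manifold (a line) in 3-D -- an illustrative
# stand-in for the high-dimensional training data of a target ML system.
t = rng.uniform(-1, 1, (500, 1))
direction = np.array([1.0, 2.0, -1.0]) / np.sqrt(6.0)
X = t * direction + 0.01 * rng.normal(size=(500, 3))

# Extract the manifold via PCA (SVD of the centered data).
mu = X.mean(axis=0)
_, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
basis = Vt[:1]                       # top principal direction spans the manifold

def off_manifold(x, threshold=0.1):
    """Runtime monitor: flag inputs whose reconstruction error exceeds a
    threshold (the threshold value is an assumption, not from the paper)."""
    recon = mu + ((x - mu) @ basis.T) @ basis
    return bool(np.linalg.norm(x - recon) > threshold)

print(off_manifold(0.5 * direction))             # on-manifold input
print(off_manifold(np.array([2.0, -2.0, 2.0])))  # far from the manifold
```

An off-manifold flag does not prove the system's output is wrong; it signals that the input lies outside the data the system was trained on, so its output deserves less trust.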
arXiv Detail & Related papers (2020-02-08T11:39:01Z)
- Counter-example Guided Learning of Bounds on Environment Behavior
We present a data-driven solution that allows for a system to be evaluated for specification conformance without an accurate model of the environment.
Our approach involves learning a conservative reactive bound of the environment's behavior using data and specification of the system's desired behavior.
arXiv Detail & Related papers (2020-01-20T19:58:24Z)
This list is automatically generated from the titles and abstracts of the papers in this site.