Assurance Monitoring of Learning Enabled Cyber-Physical Systems Using
Inductive Conformal Prediction based on Distance Learning
- URL: http://arxiv.org/abs/2110.03120v1
- Date: Thu, 7 Oct 2021 00:21:45 GMT
- Title: Assurance Monitoring of Learning Enabled Cyber-Physical Systems Using
Inductive Conformal Prediction based on Distance Learning
- Authors: Dimitrios Boursinos and Xenofon Koutsoukos
- Abstract summary: We propose an approach for assurance monitoring of learning-enabled Cyber-Physical Systems.
In order to allow real-time assurance monitoring, the approach employs distance learning to transform high-dimensional inputs into lower-dimensional embedding representations.
We demonstrate the approach using three datasets: a mobile robot following a wall, speaker recognition, and traffic sign recognition.
- Score: 2.66512000865131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning components such as deep neural networks are used extensively
in Cyber-Physical Systems (CPS). However, such components may introduce new
types of hazards that can have disastrous consequences and need to be addressed
for engineering trustworthy systems. Although deep neural networks offer
advanced capabilities, they must be complemented by engineering methods and
practices that allow effective integration in CPS. In this paper, we propose
an approach for assurance monitoring of learning-enabled CPS based on the
conformal prediction framework. In order to allow real-time assurance
monitoring, the approach employs distance learning to transform
high-dimensional inputs into lower-dimensional embedding representations. By
leveraging conformal prediction, the approach provides well-calibrated
confidence and ensures a bounded small error rate while limiting the number of
inputs for which an accurate prediction cannot be made. We demonstrate the
approach using three datasets: a mobile robot following a wall, speaker
recognition, and traffic sign recognition. The experimental results demonstrate
that the error rates are well-calibrated while the number of alarms is very
small. Further, the method is computationally efficient and allows real-time
assurance monitoring of CPS.
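As a rough illustration of the approach described in the abstract, the sketch below combines a learned embedding with Inductive Conformal Prediction: nonconformity is measured as the distance from a test embedding to its nearest same-label training embeddings, p-values are computed against a held-out calibration set, and an alarm is raised whenever the prediction set at significance level epsilon is not a single label. The function names, the k-nearest-neighbor nonconformity measure, and the `embed` mapping are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of ICP-based assurance monitoring with embedding distances.
# `embed`, the k-NN nonconformity measure, and all names are illustrative
# assumptions, not the authors' implementation.
import numpy as np

def nonconformity(z, label, train_Z, train_y, k=5):
    # Mean distance from embedding z to its k nearest training embeddings
    # with the candidate label; larger values mean "less conforming".
    same = train_Z[train_y == label]
    d = np.linalg.norm(same - z, axis=1)
    return np.sort(d)[:k].mean()

def calibration_scores(calib_X, calib_y, embed, train_Z, train_y):
    # Nonconformity of held-out calibration examples under their true labels.
    return np.array([nonconformity(embed(x), y, train_Z, train_y)
                     for x, y in zip(calib_X, calib_y)])

def prediction_set(x, embed, train_Z, train_y, calib_scores, labels, epsilon=0.05):
    # Inductive conformal prediction: keep every label whose p-value,
    # computed against the fixed calibration scores, exceeds epsilon.
    z = embed(x)
    n = len(calib_scores)
    region = []
    for label in labels:
        a = nonconformity(z, label, train_Z, train_y)
        p = (np.sum(calib_scores >= a) + 1) / (n + 1)
        if p > epsilon:
            region.append(label)
    return region

def monitor(x, embed, train_Z, train_y, calib_scores, labels, epsilon=0.05):
    # Assurance monitor: accept only singleton prediction sets, otherwise alarm.
    region = prediction_set(x, embed, train_Z, train_y, calib_scores, labels, epsilon)
    if len(region) == 1:
        return region[0], False   # confident prediction, no alarm
    return None, True             # empty or ambiguous set -> raise an alarm
```

Under exchangeability, the true label is excluded from the prediction set with probability at most epsilon, so accepting only singleton sets bounds the error rate; how often the monitor raises an alarm then depends on how well the learned embedding separates the classes.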
Related papers
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of adversarial inputs through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can affect the safety of a given DRL system with respect to such perturbations.
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Testing learning-enabled cyber-physical systems with Large-Language Models: A Formal Approach [32.15663640443728]
The integration of machine learning (ML) into cyber-physical systems (CPS) offers significant benefits.
Existing verification and validation techniques are often inadequate for these new paradigms.
We propose a roadmap to transition from foundational probabilistic testing to a more rigorous approach capable of delivering formal assurance.
arXiv Detail & Related papers (2023-11-13T14:56:14Z)
- PAC-Based Formal Verification for Out-of-Distribution Data Detection [4.406331747636832]
This study places probably approximately correct (PAC) based guarantees on OOD detection using the encoding process within VAEs.
These guarantees bound the detection error on unfamiliar instances with user-defined confidence.
arXiv Detail & Related papers (2023-04-04T07:33:02Z)
- Certified Interpretability Robustness for Class Activation Mapping [77.58769591550225]
We present CORGI, short for Certifiably prOvable Robustness Guarantees for Interpretability mapping.
CORGI is an algorithm that takes in an input image and gives a certifiable lower bound for the robustness of its CAM interpretability map.
We show the effectiveness of CORGI via a case study on traffic sign data, certifying lower bounds on the minimum adversarial perturbation.
arXiv Detail & Related papers (2023-01-26T18:58:11Z)
- Interpretable Self-Aware Neural Networks for Robust Trajectory Prediction [50.79827516897913]
We introduce an interpretable paradigm for trajectory prediction that distributes the uncertainty among semantic concepts.
We validate our approach on real-world autonomous driving data, demonstrating superior performance over state-of-the-art baselines.
arXiv Detail & Related papers (2022-11-16T06:28:20Z)
- Recursively Feasible Probabilistic Safe Online Learning with Control Barrier Functions [60.26921219698514]
We introduce a model-uncertainty-aware reformulation of CBF-based safety-critical controllers.
We then present the pointwise feasibility conditions of the resulting safety controller.
We use these conditions to devise an event-triggered online data collection strategy.
arXiv Detail & Related papers (2022-08-23T05:02:09Z)
- Multi Agent System for Machine Learning Under Uncertainty in Cyber Physical Manufacturing System [78.60415450507706]
Recent advancements in predictive machine learning have led to its application in various use cases in manufacturing.
Most research has focused on maximising predictive accuracy without addressing the uncertainty associated with it.
In this paper, we determine the sources of uncertainty in machine learning and establish the success criteria of a machine learning system to function well under uncertainty.
arXiv Detail & Related papers (2021-07-28T10:28:05Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Trusted Confidence Bounds for Learning Enabled Cyber-Physical Systems [2.1320960069210484]
The paper presents an approach for computing confidence bounds based on Inductive Conformal Prediction (ICP).
We train a Triplet Network architecture to learn representations of the input data that can be used to estimate the similarity between test examples and examples in the training data set.
Then, these representations are used to estimate the confidence of set predictions from a classifier that is based on the neural network architecture used in the triplet; a minimal triplet-loss sketch appears after this list.
arXiv Detail & Related papers (2020-03-11T04:31:10Z)
- Real-time Out-of-distribution Detection in Learning-Enabled Cyber-Physical Systems [1.4213973379473654]
Cyber-physical systems benefit from using machine learning components that can handle the uncertainty and variability of the real world.
Deep neural networks, however, introduce new types of hazards that may impact system safety.
Out-of-distribution data may lead to a large error and compromise safety.
arXiv Detail & Related papers (2020-01-28T17:51:07Z)
- Assurance Monitoring of Cyber-Physical Systems with Machine Learning Components [2.1320960069210484]
We investigate how to use the conformal prediction framework for assurance monitoring of Cyber-Physical Systems.
In order to handle high-dimensional inputs in real time, we compute nonconformity scores using embedding representations of the learned models.
By leveraging conformal prediction, the approach provides well-calibrated confidence and can allow monitoring that ensures a bounded small error rate.
arXiv Detail & Related papers (2020-01-14T19:34:51Z)
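The last two related entries, like the paper itself, rely on embeddings learned with a Triplet Network so that inputs from the same class lie close together. The sketch below, referenced above, illustrates that idea with PyTorch's built-in triplet margin loss; the encoder architecture, input size, and training loop are assumptions for illustration, not the cited papers' configurations.

```python
# Illustrative triplet-loss embedding training; the encoder, input size,
# and hyperparameters are assumptions, not the cited papers' configuration.
import torch
import torch.nn as nn

encoder = nn.Sequential(          # maps flattened inputs to 32-D embeddings
    nn.Linear(784, 128), nn.ReLU(),
    nn.Linear(128, 32),
)
loss_fn = nn.TripletMarginLoss(margin=1.0, p=2)
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

def train_step(anchor, positive, negative):
    # anchor/positive share a class, negative does not; the loss pulls
    # same-class embeddings together and pushes different classes apart.
    optimizer.zero_grad()
    loss = loss_fn(encoder(anchor), encoder(positive), encoder(negative))
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy call with random tensors standing in for mined triplets.
a, p, n = (torch.randn(16, 784) for _ in range(3))
print(train_step(a, p, n))
```

The resulting embeddings can then serve as the `embed` mapping in the conformal-prediction sketch shown after the abstract above.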