Robustness Testing of Data and Knowledge Driven Anomaly Detection in
Cyber-Physical Systems
- URL: http://arxiv.org/abs/2204.09183v1
- Date: Wed, 20 Apr 2022 02:02:56 GMT
- Title: Robustness Testing of Data and Knowledge Driven Anomaly Detection in
Cyber-Physical Systems
- Authors: Xugui Zhou, Maxfield Kouzel, Homa Alemzadeh
- Abstract summary: This paper presents preliminary results on evaluating the robustness of ML-based anomaly detection methods in safety-critical CPS.
We test the hypothesis that integrating domain knowledge (e.g., on unsafe system behavior) with ML models can improve the robustness of anomaly detection without sacrificing accuracy and transparency.
- Score: 2.088376060651494
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growing complexity of Cyber-Physical Systems (CPS) and challenges in
ensuring safety and security have led to the increasing use of deep learning
methods for accurate and scalable anomaly detection. However, machine learning
(ML) models often suffer from low performance in predicting unexpected data and
are vulnerable to accidental or malicious perturbations. Although robustness
testing of deep learning models has been extensively explored in applications
such as image classification and speech recognition, less attention has been
paid to ML-driven safety monitoring in CPS. This paper presents the preliminary
results on evaluating the robustness of ML-based anomaly detection methods in
safety-critical CPS against two types of accidental and malicious input
perturbations, generated using a Gaussian-based noise model and the Fast
Gradient Sign Method (FGSM). We test the hypothesis that integrating domain
knowledge (e.g., on unsafe system behavior) with the ML models can improve the
robustness of anomaly detection without sacrificing accuracy and transparency.
Experimental results with two case studies of Artificial Pancreas Systems (APS)
for diabetes management show that ML-based safety monitors trained with domain
knowledge can reduce robustness error by up to 54.2% on average and keep
average F1 scores high while improving transparency.
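To make the two perturbation types concrete, below is a minimal, illustrative sketch (not the authors' code) of how a Gaussian-based noise model and FGSM could be applied to the inputs of an ML-based safety monitor. The toy monitor architecture, feature dimension, noise standard deviation, and epsilon are placeholder assumptions.

```python
# Illustrative sketch of the two input-perturbation types evaluated in the paper;
# the monitor model, feature sizes, and magnitudes below are placeholder assumptions.
import torch
import torch.nn as nn

def gaussian_perturb(x, std=0.05):
    """Accidental noise: add zero-mean Gaussian noise to the sensor inputs."""
    return x + torch.randn_like(x) * std

def fgsm_perturb(model, x, y, loss_fn, eps=0.1):
    """Malicious noise: Fast Gradient Sign Method (FGSM).
    Perturbs the input in the direction of the sign of the loss gradient."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = loss_fn(model(x_adv), y)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).detach()

# Toy stand-in for an ML-based safety monitor over a window of CPS sensor features.
monitor = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 8)                        # batch of sensor-feature windows (placeholder)
y = torch.randint(0, 2, (32,))                # 0 = safe, 1 = hazardous (placeholder labels)

x_noisy = gaussian_perturb(x)                 # accidental perturbation
x_adv = fgsm_perturb(monitor, x, y, loss_fn)  # malicious perturbation
```

Robustness error can then be measured by comparing the monitor's predictions on the clean and perturbed inputs.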
Related papers
- Bayesian Learned Models Can Detect Adversarial Malware For Free [28.498994871579985]
Adversarial training is an effective method but is computationally expensive to scale up to large datasets.
In particular, a Bayesian formulation can capture the model parameters' distribution and quantify uncertainty without sacrificing model performance.
We found that quantifying uncertainty through Bayesian learning methods can defend against adversarial malware.
arXiv Detail & Related papers (2024-03-27T07:16:48Z) - Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how such adversarial inputs can affect the safety of a given DRL system.
arXiv Detail & Related papers (2024-02-07T21:58:40Z) - Data-driven Semi-supervised Machine Learning with Surrogate Measures of Safety for Abnormal Driving Behavior Detection [6.972018255192681]
This study analyzes large-scale real-world data revealing several abnormal driving behaviors.
It develops a semi-supervised machine learning (ML) method using partly labeled data to accurately detect the identified abnormal driving behaviors.
arXiv Detail & Related papers (2023-12-07T16:16:09Z) - Ensemble models outperform single model uncertainties and predictions
for operator-learning of hypersonic flows [43.148818844265236]
Training scientific machine learning (SciML) models on limited high-fidelity data offers one approach to rapidly predict behaviors for situations that have not been seen before.
High-fidelity data is itself available in too limited a quantity to validate all outputs of the SciML model in unexplored input space.
We extend a DeepONet using three different uncertainty mechanisms: mean-variance estimation, evidential uncertainty, and ensembling.
arXiv Detail & Related papers (2023-10-31T18:07:29Z) - Performance evaluation of Machine learning algorithms for Intrusion Detection System [0.40964539027092917]
This paper focuses on intrusion detection systems (IDSs) analysis using Machine Learning (ML) techniques.
We analyze the KDD CUP-'99' intrusion detection dataset used for training and validating ML models.
arXiv Detail & Related papers (2023-10-01T06:35:37Z) - How adversarial attacks can disrupt seemingly stable accurate classifiers [76.95145661711514]
Adversarial attacks dramatically change the output of an otherwise accurate learning system using a seemingly inconsequential modification to a piece of input data.
Here, we show that this may be seen as a fundamental feature of classifiers working with high dimensional input data.
We introduce a simple generic and generalisable framework for which key behaviours observed in practical systems arise with high probability.
arXiv Detail & Related papers (2023-09-07T12:02:00Z) - Benchmarking Machine Learning Robustness in Covid-19 Genome Sequence
Classification [109.81283748940696]
We introduce several ways to perturb SARS-CoV-2 genome sequences to mimic the error profiles of common sequencing platforms such as Illumina and PacBio.
We show that some simulation-based approaches are more robust (and accurate) than others for specific embedding methods to certain adversarial attacks to the input sequences.
arXiv Detail & Related papers (2022-07-18T19:16:56Z) - Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and
Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z) - Data-driven Design of Context-aware Monitors for Hazard Prediction in
Artificial Pancreas Systems [2.126171264016785]
Medical Cyber-physical Systems (MCPS) are vulnerable to accidental or malicious faults that can target their controllers and cause safety hazards and harm to patients.
This paper proposes a combined model and data-driven approach for designing context-aware monitors that can detect early signs of hazards and mitigate them.
arXiv Detail & Related papers (2021-04-06T14:36:33Z) - Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z) - Assurance Monitoring of Cyber-Physical Systems with Machine Learning
Components [2.1320960069210484]
We investigate how to use the conformal prediction framework for assurance monitoring of Cyber-Physical Systems.
In order to handle high-dimensional inputs in real-time, we compute nonconformity scores using embedding representations of the learned models.
By leveraging conformal prediction, the approach provides well-calibrated confidence and can allow monitoring that ensures a bounded small error rate (an illustrative sketch of this idea appears after this list).
arXiv Detail & Related papers (2020-01-14T19:34:51Z)
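The conformal-prediction idea in the last entry can be illustrated with a minimal sketch: nonconformity scores are computed over embedding representations of the inputs and converted into p-values against a calibration set. The k-nearest-neighbor scoring, embedding dimension, and alarm threshold below are placeholder assumptions, not the paper's actual design.

```python
# Minimal sketch of conformal-prediction-style assurance monitoring over embeddings;
# the embedding source, k, and threshold are assumed for illustration only.
import numpy as np

def nonconformity(z, calib_embeddings, k=5):
    """Score a test embedding by its mean distance to its k nearest calibration embeddings."""
    d = np.linalg.norm(calib_embeddings - z, axis=1)
    return np.sort(d)[:k].mean()

def p_value(test_score, calib_scores):
    """Fraction of calibration points at least as nonconforming as the test point."""
    return (np.sum(calib_scores >= test_score) + 1) / (len(calib_scores) + 1)

rng = np.random.default_rng(0)
calib = rng.normal(size=(200, 16))   # embeddings of calibration inputs (placeholder)
calib_scores = np.array([nonconformity(z, np.delete(calib, i, axis=0))
                         for i, z in enumerate(calib)])

z_test = rng.normal(size=16)         # embedding of a new input (placeholder)
p = p_value(nonconformity(z_test, calib), calib_scores)
alarm = p < 0.05                     # flag inputs the monitor is not confident about
```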
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.