No Need to Know Physics: Resilience of Process-based Model-free Anomaly
Detection for Industrial Control Systems
- URL: http://arxiv.org/abs/2012.03586v2
- Date: Mon, 26 Jun 2023 13:46:52 GMT
- Authors: Alessandro Erba, Nils Ole Tippenhauer
- Abstract summary: We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, a number of process-based anomaly detection schemes for
Industrial Control Systems were proposed. In this work, we provide the first
systematic analysis of such schemes, and introduce a taxonomy of properties
that are verified by those detection systems. We then present a novel general
framework to generate adversarial spoofing signals that violate physical
properties of the system, and use the framework to analyze four anomaly
detectors published at top security conferences. We find that three of those
detectors are susceptible to a number of adversarial manipulations (e.g.,
spoofing with precomputed patterns), which we call Synthetic Sensor Spoofing,
while one is resilient against our attacks. We investigate the root of its
resilience and demonstrate that it comes from the properties that we
introduced. Our attacks reduce the Recall (True Positive Rate) of the attacked
schemes, leaving them unable to correctly detect anomalies. Thus, the
vulnerabilities we discovered in the anomaly detectors show that, despite
originally good detection performance, those detectors are not able to reliably
learn physical properties of the system. Even attacks that prior work was
expected to be resilient against (based on verified properties) were found to
be successful. We argue that our findings demonstrate the need both for more
complete attacks in datasets and for more critical analysis of process-based
anomaly detectors. We plan to release our implementation as open-source,
together with an extension of two public datasets with a set of Synthetic
Sensor Spoofing attacks as generated by our framework.
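The abstract describes spoofing sensor values with precomputed patterns ("Synthetic Sensor Spoofing"). As a rough illustration of the idea only (not the paper's actual framework: the periodic-profile heuristic, function names, and parameters below are hypothetical), an attacker might replay a pattern estimated from benign sensor history so that a process-based detector keeps seeing plausible values:

```python
import numpy as np

def synthetic_spoof(history, horizon, period=60, rng=None):
    """Generate a plausible-looking replacement signal for one sensor.

    Hypothetical sketch: estimate the sensor's periodic profile from
    benign history, then replay it (plus small noise) for the duration
    of the physical attack.
    """
    rng = np.random.default_rng(rng)
    history = np.asarray(history, dtype=float)
    # Average the benign history over one assumed process period.
    n_cycles = len(history) // period
    profile = history[: n_cycles * period].reshape(n_cycles, period).mean(axis=0)
    # Tile the profile to cover the attack window, add sensor-like noise.
    reps = int(np.ceil(horizon / period))
    spoof = np.tile(profile, reps)[:horizon]
    spoof += rng.normal(0.0, history.std() * 0.01, size=horizon)
    return spoof

# Benign history: a noisy sinusoid standing in for a tank-level sensor.
t = np.arange(600)
benign = 50 + 5 * np.sin(2 * np.pi * t / 60) \
    + np.random.default_rng(0).normal(0, 0.2, 600)
spoofed = synthetic_spoof(benign, horizon=120, period=60)
```

The spoofed window preserves the mean and periodic shape of the benign signal, which is exactly the kind of statistical plausibility a purely data-driven detector can be fooled by.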
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outclasses the state-of-the-art for resilient fault prediction benchmarking, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Kairos: Practical Intrusion Detection and Investigation using Whole-system Provenance [4.101641763092759]
Provenance graphs are structured audit logs that describe the history of a system's execution.
We identify four common dimensions that drive the development of provenance-based intrusion detection systems (PIDSes).
We present KAIROS, the first PIDS that simultaneously satisfies the desiderata in all four dimensions.
arXiv Detail & Related papers (2023-08-09T16:04:55Z)
- Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z)
- Towards an Awareness of Time Series Anomaly Detection Models' Adversarial Vulnerability [21.98595908296989]
We demonstrate that the performance of state-of-the-art anomaly detection methods is degraded substantially by adding only small adversarial perturbations to the sensor data.
We use different scoring metrics such as prediction errors, anomaly scores, and classification scores over several public and private datasets.
We demonstrate, for the first time, the vulnerabilities of anomaly detection systems against adversarial attacks.
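To illustrate how small perturbations of sensor data can suppress an anomaly score, here is a toy sketch with a PCA-residual detector and an FGSM-style step. Both the detector and the attack are minimal stand-ins of our own, not the methods evaluated in the paper above:

```python
import numpy as np

rng = np.random.default_rng(1)

# PCA-residual detector: score = squared norm of the component of x
# outside the learned "normal" subspace spanned by W's columns.
d, k = 8, 3
W, _ = np.linalg.qr(rng.normal(size=(d, k)))   # orthonormal basis

def score(x):
    residual = x - W @ (W.T @ x)
    return float(residual @ residual)

# An "anomalous" sample with energy outside the normal subspace.
x = rng.normal(size=d)

# FGSM-style step that *lowers* the anomaly score: move against the
# sign of the score's gradient, grad = 2 * (x - W W^T x).
eps = 0.05
grad = 2 * (x - W @ (W.T @ x))
x_adv = x - eps * np.sign(grad)
```

Even with a perturbation of at most `eps` per coordinate, the anomaly score drops, which is the qualitative effect the summary describes.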
arXiv Detail & Related papers (2022-08-24T01:55:50Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
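The entry above combines gradient-inspired noise crafting with a genetic algorithm. A toy sketch of the genetic-algorithm half, against a hypothetical threshold detector (the detector, noise bound, and hyperparameters are illustrative only, not the paper's testbed setup):

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical stand-in detector: flags a window whose mean drifts
# too far from the learned setpoint.
SETPOINT = 10.0

def anomaly_score(window):
    return abs(window.mean() - SETPOINT)

attack_window = np.full(20, 14.0)   # manipulated sensor values
BOUND = 5.0                         # max per-sample noise magnitude

def evolve(pop_size=40, generations=60, sigma=0.5):
    """Toy genetic algorithm: evolve additive noise that minimizes the
    detector's anomaly score while staying within the noise bound."""
    pop = rng.uniform(-BOUND, BOUND, size=(pop_size, attack_window.size))
    for _ in range(generations):
        scores = np.array([anomaly_score(attack_window + n) for n in pop])
        elite = pop[np.argsort(scores)[: pop_size // 4]]             # selection
        children = elite[rng.integers(0, len(elite), pop_size - len(elite))]
        children = children + rng.normal(0, sigma, children.shape)   # mutation
        pop = np.clip(np.vstack([elite, children]), -BOUND, BOUND)
    best = pop[np.argmin([anomaly_score(attack_window + n) for n in pop])]
    return best

noise = evolve()
```

The evolved noise pulls the window back toward the detector's expected range while each individual sample stays within the allowed bound.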
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
- Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly Detection [16.010654200489913]
This paper proposes a new defense against neural network backdooring attacks.
It is based on the intuition that the feature extraction layers of a backdoored network embed new features to detect the presence of a trigger.
To detect backdoors, the proposed defense uses two synergistic anomaly detectors trained on clean validation data.
arXiv Detail & Related papers (2020-11-04T20:33:51Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective on detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks [26.09388179354751]
We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector.
This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than in other problem domains.
arXiv Detail & Related papers (2020-02-07T12:41:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.