Adversarial Attacks and Mitigation for Anomaly Detectors of
Cyber-Physical Systems
- URL: http://arxiv.org/abs/2105.10707v1
- Date: Sat, 22 May 2021 12:19:03 GMT
- Title: Adversarial Attacks and Mitigation for Anomaly Detectors of
Cyber-Physical Systems
- Authors: Yifan Jia, Jingyi Wang, Christopher M. Poskitt, Sudipta Chattopadhyay,
Jun Sun, Yuqi Chen
- Abstract summary: In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the actuator values.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
- Score: 6.417955560857806
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The threats faced by cyber-physical systems (CPSs) in critical infrastructure
have motivated research into a multitude of attack detection mechanisms,
including anomaly detectors based on neural network models. The effectiveness
of anomaly detectors can be assessed by subjecting them to test suites of
attacks, but less consideration has been given to adversarial attackers that
craft noise specifically designed to deceive them. While successfully applied
in domains such as images and audio, adversarial attacks are much harder to
implement in CPSs due to the presence of other built-in defence mechanisms such
as rule checkers (or invariant checkers). In this work, we present an
adversarial attack that simultaneously evades the anomaly detectors and rule
checkers of a CPS. Inspired by existing gradient-based approaches, our
adversarial attack crafts noise over the sensor and actuator values, then uses
a genetic algorithm to optimise the latter, ensuring that the neural network
and the rule-checking system are both deceived. We implemented our approach for
two real-world critical infrastructure testbeds, successfully reducing the
classification accuracy of their detectors by over 50% on average, while
simultaneously avoiding detection by rule checkers. Finally, we explore whether
these attacks can be mitigated by training the detectors on adversarial
samples.
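As a concrete illustration of this two-stage pipeline, below is a minimal Python sketch: a toy distance-based anomaly detector and a single pump/level invariant stand in for the paper's neural network and rule checkers, a finite-difference sign step stands in for the gradient-based noise crafting, and a bare mutation-and-selection loop plays the role of the genetic algorithm. All names and parameters are illustrative assumptions, not the authors' implementation.

```python
# Toy two-stage evasion attack: (1) gradient-style noise over sensor
# values to lower the anomaly score, (2) a genetic algorithm over
# actuator values so the rule checker is also satisfied.
import numpy as np

rng = np.random.default_rng(0)

def detector_score(state):
    """Toy anomaly detector: distance from a fixed 'normal' operating point."""
    return float(np.linalg.norm(state - np.full_like(state, 0.5)))

def rule_checker(sensors, actuators):
    """Toy invariant: pump (actuator 0) must be on when level (sensor 0) is low."""
    return not (sensors[0] < 0.3 and actuators[0] < 0.5)

def numeric_grad(f, x, eps=1e-4):
    """Finite-difference gradient, standing in for backprop through the model."""
    g = np.zeros_like(x)
    for i in range(len(x)):
        d = np.zeros_like(x); d[i] = eps
        g[i] = (f(x + d) - f(x - d)) / (2 * eps)
    return g

def craft_sensor_noise(sensors, step=0.05, iters=20):
    """FGSM-style signed descent on the anomaly score over sensor values."""
    x = sensors.copy()
    for _ in range(iters):
        x -= step * np.sign(numeric_grad(detector_score, x))
        x = np.clip(x, 0.0, 1.0)
    return x

def ga_optimise_actuators(sensors, pop_size=30, gens=40, mut=0.1):
    """GA over actuator values: fitness rewards passing the rule checker
    while keeping the joint sensor+actuator state unsuspicious."""
    pop = rng.random((pop_size, 2))
    def fitness(act):
        ok = rule_checker(sensors, act)
        score = detector_score(np.concatenate([sensors, act]))
        return (1.0 if ok else -1.0) - score
    for _ in range(gens):
        ranked = sorted(pop, key=fitness, reverse=True)
        elite = np.array(ranked[: pop_size // 2])
        children = elite + rng.normal(0.0, mut, elite.shape)  # mutate elites
        pop = np.clip(np.vstack([elite, children]), 0.0, 1.0)
    return max(pop, key=fitness)

sensors = craft_sensor_noise(rng.random(3))
actuators = ga_optimise_actuators(sensors)
print("anomaly score:", detector_score(np.concatenate([sensors, actuators])))
print("rules satisfied:", rule_checker(sensors, actuators))
```

The division of labour is the point: gradients handle the continuous sensor values, while the GA searches the actuator space without needing gradients.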
Related papers
- AdvQDet: Detecting Query-Based Adversarial Attacks with Adversarial Contrastive Prompt Tuning [93.77763753231338]
Adversarial Contrastive Prompt Tuning (ACPT) is proposed to fine-tune the CLIP image encoder to extract similar embeddings for any two intermediate adversarial queries.
We show that ACPT can detect 7 state-of-the-art query-based attacks with a >99% detection rate within 5 shots.
We also show that ACPT is robust to 3 types of adaptive attacks.
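The underlying stateful-detection idea (consecutive queries of an iterative attack are unusually similar in embedding space) can be sketched as follows; `embed` is a hypothetical stand-in for ACPT's tuned CLIP encoder, and the window and threshold values are illustrative.

```python
# Flag a query if its embedding is suspiciously close to recent queries.
from collections import deque
import numpy as np

def embed(image):
    # Placeholder encoder; ACPT would use a contrastively tuned CLIP model.
    return image.flatten() / (np.linalg.norm(image) + 1e-8)

class QueryAttackDetector:
    def __init__(self, window=5, threshold=0.95):
        self.history = deque(maxlen=window)  # few-shot memory of queries
        self.threshold = threshold

    def check(self, image):
        z = embed(image)
        attack = any(float(z @ h) > self.threshold for h in self.history)
        self.history.append(z)
        return attack  # True = queries look like an iterative attack

detector = QueryAttackDetector()
base = np.random.default_rng(1).random((8, 8))
for step in range(3):
    probe = base + 0.01 * step      # tiny perturbations, as produced by
    print(detector.check(probe))    # a query-based attack loop
```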
arXiv Detail & Related papers (2024-08-04T09:53:50Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool for identifying unauthorized and malicious behavior by observing network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
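The early-detection idea reduces to scoring a flow packet by packet and alerting as soon as confidence crosses a threshold, as in this minimal sketch; `packet_score` is a hypothetical stand-in for the paper's deep sequence classifier.

```python
def packet_score(packet):
    # Toy per-packet maliciousness score; a real system would use a
    # learned model over packet features.
    return 0.9 if packet["dst_port"] == 4444 else 0.1

def early_detect(flow, threshold=0.8):
    confidence = 0.0
    for i, packet in enumerate(flow, start=1):
        # Running average of per-packet scores seen so far.
        confidence += (packet_score(packet) - confidence) / i
        if confidence >= threshold:
            return f"alert after {i} packet(s)"   # no need to see the rest
    return "benign"

flow = [{"dst_port": 4444}, {"dst_port": 4444}, {"dst_port": 80}]
print(early_detect(flow))   # alert after 1 packet(s)
```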
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism that limits the effect of this transferability property.
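The Detect & Reject scheme can be sketched as a screening stage placed in front of the IDS classifier; both models below are crude hypothetical stand-ins, not the paper's.

```python
# Reject traffic flagged as adversarial before it reaches the classifier.
import numpy as np

def adversarial_detector(x):
    # Stand-in detector, e.g. a model trained on adversarial examples.
    return bool(np.max(np.abs(x)) > 3.0)   # crude out-of-range heuristic

def ids_classify(x):
    return "attack" if x.mean() > 0.5 else "benign"

def detect_and_reject(x):
    if adversarial_detector(x):
        return "rejected"            # never reaches the IDS classifier
    return ids_classify(x)

print(detect_and_reject(np.array([0.1, 0.2, 5.0])))  # rejected
print(detect_and_reject(np.array([0.1, 0.2, 0.3])))  # benign
```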
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- Adversarially Robust One-class Novelty Detection [83.1570537254877]
We show that existing novelty detectors are susceptible to adversarial examples.
We propose a defense strategy that manipulates the latent space of novelty detectors to improve the robustness against adversarial examples.
arXiv Detail & Related papers (2021-08-25T10:41:29Z)
- Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems [34.86059492072526]
We are the first to demonstrate such poisoning attacks on online neural network detectors for ICS.
We propose two distinct attack algorithms, including back-gradient-based poisoning, and demonstrate their effectiveness on both synthetic and real-world data.
arXiv Detail & Related papers (2020-12-23T14:11:26Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and demonstrated to be effective at detecting adversarial samples.
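For illustration, a small PyTorch sketch of such a binary adversarial-sample detector; the VGG-style depth, input shape (single-channel spectrogram), and layer sizes are assumptions for the sketch, not the paper's exact architecture.

```python
# A small VGG-style CNN that classifies inputs as genuine vs. adversarial.
import torch
import torch.nn as nn

class AdvSampleDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(          # stacked conv-pool blocks
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(
            nn.Flatten(), nn.LazyLinear(2),     # genuine vs. adversarial
        )

    def forward(self, spectrogram):             # (batch, 1, freq, time)
        return self.head(self.features(spectrogram))

detector = AdvSampleDetector()
logits = detector(torch.randn(4, 1, 80, 100))  # e.g. 80 mel bins, 100 frames
print(logits.shape)                             # torch.Size([4, 2])
```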
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks [26.09388179354751]
We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector, but only under limited conditions.
This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than those in other problem domains.
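The "boiling frog" poisoning loop itself is easy to sketch; in the toy below, an exponentially weighted profile with an acceptance threshold stands in for the online-trained autoencoder, and the attacker interpolates from normal data toward the target attack. All values are illustrative.

```python
# Gradual poisoning of an online-trained detector: samples creep from
# normal data toward the attack, so the continually updated model drifts.
import numpy as np

rng = np.random.default_rng(2)
profile = np.zeros(8)                     # learned model of "normal"
ALPHA, THRESHOLD = 0.05, 1.0

def anomaly_score(x):
    return float(np.linalg.norm(x - profile))

def online_update(x):
    global profile
    if anomaly_score(x) < THRESHOLD:      # detector retrains on accepted data
        profile = (1 - ALPHA) * profile + ALPHA * x

normal = rng.normal(0.0, 0.1, 8)
attack = normal + 3.0                     # clearly anomalous at first
for _ in range(200):                      # warm-up on benign traffic
    online_update(normal + rng.normal(0, 0.05, 8))
print("before poisoning, attack flagged:", anomaly_score(attack) > THRESHOLD)

for t in np.linspace(0.0, 1.0, 500):      # creep toward the attack
    online_update((1 - t) * normal + t * attack)
print("after poisoning, attack flagged:", anomaly_score(attack) > THRESHOLD)
```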
arXiv Detail & Related papers (2020-02-07T12:41:28Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (user and entity behaviour analytics, UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates attacks against such systems by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)