Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based
Anomaly Detectors to Adversarial Poisoning Attacks
- URL: http://arxiv.org/abs/2002.02741v1
- Date: Fri, 7 Feb 2020 12:41:28 GMT
- Title: Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based
Anomaly Detectors to Adversarial Poisoning Attacks
- Authors: Moshe Kravchik, Asaf Shabtai
- Abstract summary: We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector.
This finding suggests that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than those used in other problem domains.
- Score: 26.09388179354751
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, a variety of effective neural network-based methods for
anomaly and cyber attack detection in industrial control systems (ICSs) have
been demonstrated in the literature. Given their successful implementation and
widespread use, there is a need to study adversarial attacks on such detection
methods to better protect the systems that depend upon them. The extensive
research performed on adversarial attacks on image and malware classification
has little relevance to the physical system state prediction domain, to which
most ICS attack detection systems belong. Moreover, such detection systems are
typically retrained using new data collected from the monitored system; the
threat of adversarial data poisoning is therefore significant, yet it has not
been addressed by the research community. In this paper, we
present the first study focused on poisoning attacks on online-trained
autoencoder-based attack detectors. We propose two algorithms for generating
poison samples, an interpolation-based algorithm and a back-gradient
optimization-based algorithm, which we evaluate on both synthetic and
real-world ICS data. We demonstrate that the proposed algorithms can generate
poison samples that cause the target attack to go undetected by the autoencoder
detector; however, the ability to poison the detector is limited to a small set
of attack types and magnitudes. When the poison-generating algorithms are
applied to the popular SWaT dataset, we show that the autoencoder detector
trained on the physical system state data is resilient to poisoning in the face
of all ten of the relevant attacks in the dataset. This finding suggests that
neural network-based attack detectors used in the cyber-physical domain are
more robust to poisoning than those used in other problem domains, such as malware
detection and image processing.
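For intuition, here is a minimal, hypothetical sketch of the setting the abstract describes: an autoencoder that flags a sensor window as anomalous when its reconstruction error exceeds a threshold, retrained periodically on newly collected data, and an interpolation-style poisoning loop that tries to drag the detector toward accepting a target attack signal. The architecture, threshold, step size, and data below are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

class WindowAutoencoder(nn.Module):
    """Tiny dense autoencoder over fixed-length sensor windows."""
    def __init__(self, window: int = 32, latent: int = 8):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(window, latent), nn.ReLU())
        self.dec = nn.Linear(latent, window)

    def forward(self, x):
        return self.dec(self.enc(x))

def reconstruction_error(model, x):
    """Per-window mean squared reconstruction error (the anomaly score)."""
    with torch.no_grad():
        return ((model(x) - x) ** 2).mean(dim=-1)

def online_update(model, opt, batch, steps: int = 1):
    """Periodic retraining on newly collected data, assumed benign."""
    loss_fn = nn.MSELoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(batch), batch).backward()
        opt.step()

def interpolation_poisoning(model, opt, benign, target_attack,
                            threshold: float, step: float = 0.05):
    """Interpolation-style poisoning (sketch): at each retraining round,
    feed windows interpolated between benign data and the target attack,
    advancing toward the attack only while the current poison still passes
    the detector (reconstruction error below threshold)."""
    alpha = 0.0
    while alpha < 1.0:
        poison = (1 - alpha) * benign + alpha * target_attack
        if reconstruction_error(model, poison).max() > threshold:
            return alpha  # the next poison step would itself be flagged
        online_update(model, opt, poison)  # detector absorbs the poison
        alpha += step
    return 1.0  # detector now accepts the target attack itself

# Usage: poisoning succeeds if alpha reaches 1.0, i.e. the target attack
# window goes undetected after the poisoned retraining rounds.
model = WindowAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
benign = 0.1 * torch.randn(64, 32)   # stand-in for normal sensor windows
target_attack = benign + 1.5         # crude stand-in for an attack signal
print(interpolation_poisoning(model, opt, benign, target_attack, threshold=0.5))
```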
Related papers
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- AntidoteRT: Run-time Detection and Correction of Poison Attacks on Neural Networks [18.461079157949698]
We study backdoor poisoning attacks against image classification networks.
We propose lightweight automated detection and correction techniques against poisoning attacks.
Our technique outperforms existing defenses such as NeuralCleanse and STRIP on popular benchmarks.
arXiv Detail & Related papers (2022-01-31T23:42:32Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to stop network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- A Heterogeneous Graph Learning Model for Cyber-Attack Detection [4.559898668629277]
A cyber-attack is a malicious attempt by hackers to breach the target information system.
This paper proposes an intelligent cyber-attack detection method based on provenance data.
Experiment results show that the proposed method outperforms other learning based detection models.
arXiv Detail & Related papers (2021-12-16T16:03:39Z)
- Spotting adversarial samples for speaker verification by neural vocoders [102.1486475058963]
We adopt neural vocoders to spot adversarial samples for automatic speaker verification (ASV).
We find that the difference between the ASV scores for the original and re-synthesized audio is a good indicator for discriminating between genuine and adversarial samples.
Our code will be made open source so that future work can perform comparisons.
arXiv Detail & Related papers (2021-07-01T08:58:16Z)
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
- Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems [34.86059492072526]
We are the first to demonstrate such poisoning attacks on ICS online neural network detectors.
We propose two distinct attack algorithms, namely, interpolation-based and back-gradient-based poisoning, and demonstrate their effectiveness on both synthetic and real-world data.
arXiv Detail & Related papers (2020-12-23T14:11:26Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Witches' Brew: Industrial Scale Data Poisoning via Gradient Matching [56.280018325419896]
Data Poisoning attacks modify training data to maliciously control a model trained on such data.
We analyze a particularly malicious poisoning attack that is both "from scratch" and "clean label"; a sketch of the gradient-matching idea behind it appears after this list.
We show that it is the first poisoning method to cause targeted misclassification in modern deep networks trained from scratch on a full-sized, poisoned ImageNet dataset.
arXiv Detail & Related papers (2020-09-04T16:17:54Z)
- Investigating Robustness of Adversarial Samples Detection for Automatic Speaker Verification [78.51092318750102]
This work proposes to defend ASV systems against adversarial attacks with a separate detection network.
A VGG-like binary classification detector is introduced and shown to be effective at detecting adversarial samples.
arXiv Detail & Related papers (2020-06-11T04:31:56Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
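As referenced in the Witches' Brew entry above, the following is a hedged sketch of the gradient-matching idea behind that attack: optimize small, clamped perturbations on a few training images so that the training gradient they induce aligns (via cosine similarity) with the gradient of the adversary's target-misclassification loss. The function name, step size, and perturbation bound are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn.functional as F

def gradient_matching_step(model, poison_x, poison_delta, poison_y,
                           target_x, target_y_adv,
                           lr: float = 0.01, eps: float = 16 / 255):
    """One optimization step on the poison perturbation `poison_delta`
    (a leaf tensor created with requires_grad=True)."""
    params = [p for p in model.parameters() if p.requires_grad]

    # Adversarial goal: the gradient that would push the clean target
    # input toward the adversary's chosen label (treated as a constant).
    adv_grad = torch.autograd.grad(
        F.cross_entropy(model(target_x), target_y_adv), params)

    # Training gradient the perturbed poison batch would induce.
    poison_grad = torch.autograd.grad(
        F.cross_entropy(model(poison_x + poison_delta), poison_y),
        params, create_graph=True)

    # Negative cosine similarity between the two gradient directions.
    sim = sum((pg * ag).sum() for pg, ag in zip(poison_grad, adv_grad))
    norm = (torch.sqrt(sum((pg ** 2).sum() for pg in poison_grad)) *
            torch.sqrt(sum((ag ** 2).sum() for ag in adv_grad)))
    loss = 1 - sim / (norm + 1e-12)

    # Signed-gradient update on the perturbation, clamped to stay small
    # so the poisoned images remain visually clean ("clean label").
    (delta_grad,) = torch.autograd.grad(loss, poison_delta)
    with torch.no_grad():
        poison_delta -= lr * delta_grad.sign()
        poison_delta.clamp_(-eps, eps)
    return loss.item()
```

Iterating such steps while the victim retrains is, roughly, what the gradient-matching approach scales up to full-sized datasets.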