Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems
- URL: http://arxiv.org/abs/2012.15740v1
- Date: Wed, 23 Dec 2020 14:11:26 GMT
- Title: Poisoning Attacks on Cyber Attack Detectors for Industrial Control Systems
- Authors: Moshe Kravchik and Battista Biggio and Asaf Shabtai
- Abstract summary: We are the first to demonstrate such poisoning attacks on online neural network detectors for ICSs.
We propose two distinct attack algorithms, namely, interpolation- and back-gradient based poisoning, and demonstrate their effectiveness on both synthetic and real-world data.
- Score: 34.86059492072526
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, neural network (NN)-based methods, including autoencoders, have
been proposed for the detection of cyber attacks targeting industrial control
systems (ICSs). Such detectors are often retrained, using data collected during
system operation, to cope with the natural evolution (i.e., concept drift) of
the monitored signals. However, by exploiting this mechanism, an attacker can
fake the signals provided by corrupted sensors at training time and poison the
learning process of the detector such that cyber attacks go undetected at test
time. With this research, we are the first to demonstrate such poisoning
attacks on online NN-based ICS cyber attack detectors. We propose two distinct attack
algorithms, namely, interpolation- and back-gradient based poisoning, and
demonstrate their effectiveness on both synthetic and real-world ICS data. We
also discuss and analyze some potential mitigation strategies.
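As a rough illustration of the interpolation-based attack named in the abstract: the NumPy sketch below is an assumption-laden toy, not the paper's algorithm (the function name and the linear schedule are invented here). It generates one poison batch per retraining round, drifting the training data from the benign signal toward the attacker's target so gradually that each retrained detector accepts the next step.

```python
import numpy as np

def interpolation_poison_rounds(benign, attack, n_rounds):
    """Yield one poisoned sensor trace per retraining round.

    Each round moves the training data a small step from the benign
    signal toward the attack signal, so a periodically retrained
    detector adapts to the drift instead of flagging it.
    """
    for r in range(1, n_rounds + 1):
        alpha = r / n_rounds                  # 0 -> 1 across the rounds
        yield (1.0 - alpha) * benign + alpha * attack

# usage: feed each batch into the detector's periodic retraining step
benign = np.sin(np.linspace(0, 20, 500))      # stand-in benign signal
attack = benign + 0.8                         # spoofed sensor offset
for batch in interpolation_poison_rounds(benign, attack, n_rounds=10):
    pass                                      # retrain_detector(batch)
```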
Related papers
- A Variational Autoencoder Framework for Robust, Physics-Informed Cyberattack Recognition in Industrial Cyber-Physical Systems [2.051548207330147]
We develop a data-driven framework that can be used to detect, diagnose, and localize a type of cyberattack called covert attacks on industrial control systems.
The framework has a hybrid design that combines a variational autoencoder (VAE), a recurrent neural network (RNN), and a deep neural network (DNN); a minimal sketch of one such combination follows this entry.
arXiv Detail & Related papers (2023-10-10T19:07:53Z)
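The entry above names the components but not the wiring. The PyTorch sketch below is one plausible minimal combination, not the paper's architecture: a GRU stands in for the RNN, every layer size is an illustrative assumption, and a window's anomaly score is its reconstruction error plus the KL term.

```python
import torch
import torch.nn as nn

class VAEDetector(nn.Module):
    """Toy VAE with a recurrent encoder/decoder for sensor windows."""
    def __init__(self, n_sensors, hidden=64, latent=16):
        super().__init__()
        self.enc = nn.GRU(n_sensors, hidden, batch_first=True)
        self.to_mu = nn.Linear(hidden, latent)
        self.to_logvar = nn.Linear(hidden, latent)
        self.from_z = nn.Linear(latent, hidden)
        self.dec = nn.GRU(hidden, hidden, batch_first=True)
        self.out = nn.Linear(hidden, n_sensors)

    def forward(self, x):                       # x: (batch, time, sensors)
        _, h = self.enc(x)                      # final hidden state
        mu, logvar = self.to_mu(h[-1]), self.to_logvar(h[-1])
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        step = self.from_z(z).unsqueeze(1).repeat(1, x.size(1), 1)
        y, _ = self.dec(step)                   # decode the latent code
        return self.out(y), mu, logvar

    def anomaly_score(self, x):
        """Reconstruction error + KL term; alarm when above a threshold."""
        recon, mu, logvar = self(x)
        rec = ((x - recon) ** 2).mean(dim=(1, 2))
        kl = -0.5 * (1 + logvar - mu ** 2 - logvar.exp()).sum(dim=1)
        return rec + kl
```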
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
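Epsilon-illusory is defined for sequential decision-makers; the NumPy sketch below illustrates only the underlying detectability budget, using a hypothetical histogram-based KL estimate on a single observation channel rather than the paper's construction. The idea: an attack whose observation distribution stays within epsilon KL of the benign one is, by this measure, nearly undetectable.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) for two histograms (smoothed, then normalised)."""
    p = (p + eps) / (p + eps).sum()
    q = (q + eps) / (q + eps).sum()
    return float(np.sum(p * np.log(p / q)))

def within_detectability_budget(benign_obs, attacked_obs, epsilon, bins=50):
    """True if attacked observations stay within epsilon KL of benign."""
    p, edges = np.histogram(benign_obs, bins=bins)
    q, _ = np.histogram(attacked_obs, bins=edges)
    return kl_divergence(q.astype(float), p.astype(float)) <= epsilon
```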
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system that stops network attacks before they can cause further damage to the system under attack (see the sketch after this entry).
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
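The entry above does not describe the architecture; the paper's model is a deep network, and a scikit-learn logistic regression is substituted here purely to show the "early" mechanic: the verdict is computed from the first K packets of a flow instead of the completed flow. K and the per-packet features are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

K = 5  # decide after the first K packets of each flow

def prefix_features(flow, k=K):
    """Flatten the first k per-packet feature vectors, zero-padding
    flows that are still shorter than k packets."""
    head = np.asarray(flow)[:k]
    pad = np.zeros((k - len(head), head.shape[1]))
    return np.concatenate([head, pad]).ravel()

# hypothetical training data: per-packet [size, inter-arrival] features
rng = np.random.default_rng(0)
benign = [rng.normal(0, 1, (20, 2)) for _ in range(200)]
attacks = [rng.normal(1, 1, (20, 2)) for _ in range(200)]
X = np.array([prefix_features(f) for f in benign + attacks])
y = np.array([0] * 200 + [1] * 200)

clf = LogisticRegression(max_iter=1000).fit(X, y)
# at run time the same classifier fires as soon as K packets have arrived
```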
- A Heterogeneous Graph Learning Model for Cyber-Attack Detection [4.559898668629277]
A cyber-attack is a malicious attempt by hackers to breach the target information system.
This paper proposes an intelligent cyber-attack detection method based on provenance data.
Experiment results show that the proposed method outperforms other learning-based detection models.
arXiv Detail & Related papers (2021-12-16T16:03:39Z)
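In the entry above, "provenance data" refers to audit records of which process touched which file or socket. The sketch below shows one plausible way to turn such records into the typed nodes and edges a heterogeneous graph model would consume; the event fields are hypothetical and the learning step itself is omitted.

```python
# Hypothetical provenance events: (subject process, action, object, type)
events = [
    ("bash", "exec",    "curl",         "process"),
    ("curl", "connect", "10.0.0.5:443", "socket"),
    ("curl", "write",   "/tmp/payload", "file"),
    ("bash", "exec",    "/tmp/payload", "file"),
]

def build_hetero_graph(events):
    """Typed nodes keyed by (node_type, name); edges keyed by action."""
    nodes, edges = set(), []
    for subj, action, obj, obj_type in events:
        src, dst = ("process", subj), (obj_type, obj)
        nodes.update([src, dst])
        edges.append((src, action, dst))
    return nodes, edges

nodes, edges = build_hetero_graph(events)
# a heterogeneous GNN would then learn per-type embeddings over this graph
```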
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
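To make "alterations to the AI system itself" concrete, the NumPy sketch below wraps a trained model with one planted unit that fires only near an attacker-chosen trigger input and shifts the output there, leaving behaviour elsewhere essentially unchanged. The gate shape and constants are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def plant_stealth_unit(model, trigger, output_shift, gamma=50.0):
    """Return a tampered model: original behaviour everywhere except
    near `trigger`, where the output is shifted by `output_shift`."""
    t = np.asarray(trigger, dtype=float)
    def tampered(x):
        x = np.asarray(x, dtype=float)
        gate = np.exp(-gamma * np.sum((x - t) ** 2))  # ~1 only at trigger
        return model(x) + gate * output_shift
    return tampered

# usage with any scalar-output model, e.g. a stand-in anomaly scorer
model = lambda x: float(np.sum(x ** 2))
attacked = plant_stealth_unit(model, trigger=[1.0, 2.0], output_shift=-10.0)
```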
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
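A minimal NumPy sketch of the genetic-algorithm half of the attack above. The population size, mutation scale, and elite-plus-mutation scheme are assumptions, crossover is omitted, and the paper additionally seeds the search with gradient-crafted noise; the caller's fitness function should reward noise that lowers the detector's anomaly score while passing the rule checkers.

```python
import numpy as np

rng = np.random.default_rng(0)

def evolve_noise(fitness, dim, pop=50, gens=200, sigma=0.05):
    """Plain GA: keep the fittest half, refill by mutating the elite.

    `fitness(noise)` should be high when the perturbed sensor/actuator
    values evade the anomaly detector without violating rule checks.
    """
    population = rng.normal(0.0, sigma, size=(pop, dim))
    for _ in range(gens):
        scores = np.array([fitness(p) for p in population])
        elite = population[np.argsort(scores)[-pop // 2:]]   # best half
        children = elite + rng.normal(0.0, sigma, size=elite.shape)
        population = np.vstack([elite, children])
    scores = np.array([fitness(p) for p in population])
    return population[np.argmax(scores)]
```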
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
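A minimal PyTorch sketch of TANTRA's timing model, with layer sizes assumed: an LSTM is trained on the inter-arrival time differences of benign traffic, and its step-ahead predictions are then used to reschedule malicious packets so their timing mimics the benign profile.

```python
import torch
import torch.nn as nn

class DeltaLSTM(nn.Module):
    """Predict the next inter-packet time difference from the past ones."""
    def __init__(self, hidden=32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, deltas):            # deltas: (batch, time, 1)
        out, _ = self.lstm(deltas)
        return self.head(out)             # predicted next delta per step

# the training target is the benign delta sequence shifted by one step;
# at attack time, malicious packets are released at the predicted deltas
model = DeltaLSTM()
benign_deltas = torch.rand(8, 100, 1)     # stand-in benign timings
loss = nn.functional.mse_loss(model(benign_deltas[:, :-1]),
                              benign_deltas[:, 1:])
```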
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Can't Boil This Frog: Robustness of Online-Trained Autoencoder-Based Anomaly Detectors to Adversarial Poisoning Attacks [26.09388179354751]
We present the first study focused on poisoning attacks on online-trained autoencoder-based attack detectors.
We show that the proposed algorithms can generate poison samples that cause the target attack to go undetected by the autoencoder detector.
At the same time, the limited conditions under which the poisoning succeeds suggest that neural network-based attack detectors used in the cyber-physical domain are more robust to poisoning than those in other problem domains (a toy example follows this entry).
arXiv Detail & Related papers (2020-02-07T12:41:28Z)
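The "boiling frog" dynamic behind this paper and the poisoning attacks above fits in a few lines. Below, a simple z-score detector is substituted for the paper's autoencoder (a deliberate simplification) so the effect of naive online retraining is visible: each poisoned window stays inside the current alarm bounds, the profile is refit on it, and the normal region drifts to the attacker's target without an alarm.

```python
import numpy as np

rng = np.random.default_rng(1)

mu, sd = 50.0, 1.0          # profile learned from benign sensor data
target = 60.0               # value the attacker needs to look normal

alarms = 0
level = mu
while level < target:
    level = min(level + 2.5 * sd, target)            # step inside tolerance
    window = rng.normal(level, sd, size=500)         # poisoned training window
    alarms += int(abs(window.mean() - mu) > 3 * sd)  # detector check
    mu, sd = window.mean(), window.std()             # naive online retraining
print(alarms)  # 0: the profile followed the poison, the attack goes unseen
```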
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to behavioural analysis (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.