A False Sense of Security? Revisiting the State of Machine
Learning-Based Industrial Intrusion Detection
- URL: http://arxiv.org/abs/2205.09199v1
- Date: Wed, 18 May 2022 20:17:33 GMT
- Title: A False Sense of Security? Revisiting the State of Machine
Learning-Based Industrial Intrusion Detection
- Authors: Dominik Kus, Eric Wagner, Jan Pennekamp, Konrad Wolsing, Ina Berenice
Fink, Markus Dahlmanns, Klaus Wehrle, Martin Henze
- Abstract summary: Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems.
Research focuses on machine learning to train them automatically, achieving detection rates upwards of 99%.
Our results show that these approaches are largely ineffective at detecting unknown attacks, with detection rates dropping to between 3.2% and 14.7%.
- Score: 9.924435476552702
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Anomaly-based intrusion detection promises to detect novel or unknown attacks
on industrial control systems by modeling expected system behavior and raising
corresponding alarms for any deviations. As manually creating these behavioral
models is tedious and error-prone, research focuses on machine learning to
train them automatically, achieving detection rates upwards of 99%. However,
these approaches are typically trained not only on benign traffic but also on
attacks and then evaluated against the same type of attack used for training.
Hence, their actual, real-world performance on unknown (not trained on) attacks
remains unclear. In turn, the reported near-perfect detection rates of machine
learning-based intrusion detection might create a false sense of security. To
assess this situation and clarify the real potential of machine learning-based
industrial intrusion detection, we develop an evaluation methodology and
examine multiple approaches from literature for their performance on unknown
attacks (excluded from training). Our results show that these approaches are
largely ineffective at detecting unknown attacks, with detection rates
dropping to between 3.2% and 14.7% for some attack types. Moving forward, we
derive recommendations for
further research on machine learning-based approaches to ensure clarity on
their ability to detect unknown attacks.
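As a concrete illustration of this evaluation methodology, the sketch below trains a detector on benign traffic plus all but one attack type and then measures recall on the held-out type. This is a minimal sketch under assumptions: the classifier choice, feature matrix, and label layout are illustrative, not the paper's actual setup.

```python
# Minimal sketch of a leave-one-attack-type-out evaluation (assumed setup,
# not the paper's exact pipeline): the detector never sees the held-out
# attack type during training, so its recall on that type approximates
# performance against unknown attacks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import recall_score

def unknown_attack_recall(X, y, attack_type, held_out):
    """X: feature matrix; y: 1 = attack, 0 = benign;
    attack_type: per-sample attack label ("" for benign samples)."""
    mask = np.asarray(attack_type) == held_out    # samples of the unknown attack
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X[~mask], y[~mask])                   # train without the held-out type
    preds = clf.predict(X[mask])                  # score only the unknown attacks
    return recall_score(np.ones(mask.sum(), dtype=int), preds)

# Repeating this for every attack type in a dataset and comparing against a
# standard train-on-everything evaluation mirrors the comparison the paper
# describes.
```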
Related papers
- Unlearn and Burn: Adversarial Machine Unlearning Requests Destroy Model Accuracy [65.80757820884476]
We expose a critical yet underexplored vulnerability in the deployment of unlearning systems.
We present a threat model where an attacker can degrade model accuracy by submitting adversarial unlearning requests for data not present in the training set.
We evaluate various verification mechanisms to detect the legitimacy of unlearning requests and reveal the challenges in verification.
arXiv Detail & Related papers (2024-10-12T16:47:04Z)
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to the erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
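As an illustration of the rotation-invariant features named in the entry above, here is a hedged sketch that extracts a uniform local binary pattern histogram per video frame; the parameters and any time-aware model consuming the histograms are assumptions, not the authors' pipeline.

```python
# Hedged sketch: rotation-invariant uniform LBP histogram per grayscale frame.
# Stacking one histogram per frame yields a sequence that a time-aware model
# (e.g., an RNN) could consume; all parameters here are illustrative.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_frame, n_points=8, radius=1):
    lbp = local_binary_pattern(gray_frame, n_points, radius, method="uniform")
    hist, _ = np.histogram(lbp, bins=n_points + 2,
                           range=(0, n_points + 2), density=True)
    return hist  # n_points + 2 rotation-invariant pattern bins
```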
- Unsupervised Adversarial Detection without Extra Model: Training Loss Should Change [24.76524262635603]
Traditional approaches to adversarial training and supervised detection rely on prior knowledge of attack types and access to labeled training data.
We propose new training losses to reduce useless features and the corresponding detection method without prior knowledge of adversarial attacks.
The proposed method performs well across all tested attack types, and its false positive rates are even lower than those of methods specialized for particular attack types.
arXiv Detail & Related papers (2023-08-07T01:41:21Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of this transferability property.
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
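The Detect & Reject mechanism in the entry above is not spelled out in the summary; the following is a minimal sketch of one plausible reading, where an auxiliary detector screens each flow and the IDS refuses to classify suspected adversarial traffic. The names and threshold are assumptions.

```python
# Hedged sketch of a Detect & Reject front end (assumed design): an auxiliary
# adversarial-traffic detector screens each flow before the IDS classifies it.
def detect_and_reject(ids_model, adv_detector, flow, threshold=0.5):
    # Probability that the flow is adversarially perturbed (class 1).
    p_adv = adv_detector.predict_proba([flow])[0][1]
    if p_adv >= threshold:
        return "rejected"                 # refuse suspected adversarial input
    return ids_model.predict([flow])[0]   # otherwise run the normal IDS
```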
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning [7.205662414865643]
We propose an association rule mining-based attack generation technique.
The proposed technique was able to generate more than 300,000 attack patterns, the vast majority of which were new attack vectors not seen before.
arXiv Detail & Related papers (2021-07-11T20:20:07Z)
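To illustrate the rule-mining step in the entry above, here is a hedged sketch using mlxtend's apriori on toy one-hot process states; the feature names, thresholds, and the leap from mined rules to attack patterns are assumptions, not the paper's method.

```python
# Hedged sketch: mine association rules over one-hot ICS state snapshots;
# each rule suggests a behavioural constraint whose violation is a candidate
# attack pattern. Feature names and thresholds are illustrative.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

states = pd.DataFrame({            # toy process snapshots
    "valve_open": [1, 1, 0, 1, 1],
    "pump_on":    [1, 1, 1, 0, 1],
    "level_high": [1, 0, 0, 1, 1],
}).astype(bool)

itemsets = apriori(states, min_support=0.4, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.8)
print(rules[["antecedents", "consequents", "confidence"]])
```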
- Adversarial Attacks and Mitigation for Anomaly Detectors of Cyber-Physical Systems [6.417955560857806]
In this work, we present an adversarial attack that simultaneously evades the anomaly detectors and rule checkers of a CPS.
Inspired by existing gradient-based approaches, our adversarial attack crafts noise over the sensor and actuator values, then uses a genetic algorithm to optimise the latter.
We implement our approach for two real-world critical infrastructure testbeds, successfully reducing the classification accuracy of their detectors by over 50% on average.
arXiv Detail & Related papers (2021-05-22T12:19:03Z)
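The genetic-algorithm step in the entry above can be sketched as a simple evolutionary search that mutates a noise vector until a surrogate detector's anomaly score drops; the encoding, operators, and score function are assumptions, not the authors' implementation.

```python
# Hedged sketch of the genetic-algorithm step: evolve additive noise over
# actuator values so a (surrogate) detector's anomaly score drops. Selection
# and mutation are deliberately minimal; all parameters are illustrative.
import numpy as np

def evolve_noise(score_fn, dim, pop_size=50, generations=100, sigma=0.1, seed=0):
    rng = np.random.default_rng(seed)
    population = rng.normal(0.0, sigma, size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([score_fn(noise) for noise in population])
        parents = population[np.argsort(scores)[: pop_size // 2]]  # keep stealthiest
        children = parents + rng.normal(0.0, sigma, size=parents.shape)  # mutate
        population = np.vstack([parents, children])
    scores = np.array([score_fn(noise) for noise in population])
    return population[np.argmin(scores)]  # lowest anomaly score found
```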
- Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT [5.077661193116692]
Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision.
arXiv Detail & Related papers (2021-04-26T09:36:29Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution that effectively mitigates such attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)