Attack Rules: An Adversarial Approach to Generate Attacks for Industrial
Control Systems using Machine Learning
- URL: http://arxiv.org/abs/2107.05127v1
- Date: Sun, 11 Jul 2021 20:20:07 GMT
- Title: Attack Rules: An Adversarial Approach to Generate Attacks for Industrial
Control Systems using Machine Learning
- Authors: Muhammad Azmi Umer, Chuadhry Mujeeb Ahmed, Muhammad Taha Jilani,
Aditya P. Mathur
- Abstract summary: We propose an association rule mining-based attack generation technique.
The proposed technique generated more than 300,000 attack patterns, the vast majority of which are new, previously unseen attack vectors.
- Score: 7.205662414865643
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Adversarial learning is used to test the robustness of machine learning
algorithms under attack and to craft attacks that deceive anomaly detection
methods in Industrial Control Systems (ICS). Given that security assessment of
an ICS demands that an exhaustive set of possible attack patterns be studied,
in this work we propose an association rule mining-based attack generation
technique. The technique has been implemented using data from a Secure Water
Treatment plant. The proposed technique generated more than 300,000 attack
patterns, the vast majority of which are new, previously unseen attack
vectors. Automatically generated attacks improve our understanding of
potential attacks and enable the design of robust attack detection
techniques.
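To make the core idea concrete, below is a minimal sketch of mining association rules over discretized ICS sensor/actuator states, where violating a high-confidence rule yields a candidate attack pattern. It assumes the mlxtend library; the device names (MV101, P101, LIT101), states, and thresholds are illustrative placeholders, not values from the paper.

```python
# A minimal sketch, assuming mlxtend is installed (pip install mlxtend).
# Device names and states below are hypothetical, not from the paper.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# One-hot table: each row is a plant snapshot, each column a discretized
# device state observed at that instant.
snapshots = pd.DataFrame([
    {"MV101=OPEN": True,  "P101=ON": True,  "LIT101=HIGH": False},
    {"MV101=OPEN": True,  "P101=ON": True,  "LIT101=HIGH": True},
    {"MV101=OPEN": False, "P101=ON": False, "LIT101=HIGH": True},
    {"MV101=OPEN": True,  "P101=ON": True,  "LIT101=HIGH": True},
])

# Mine frequent co-occurring states, then derive rules such as
# {MV101=OPEN} -> {P101=ON}.
frequent = apriori(snapshots, min_support=0.5, use_colnames=True)
rules = association_rules(frequent, metric="confidence", min_threshold=0.9)

# Forcing the plant to violate a high-confidence rule (e.g. switching
# P101 off while MV101 stays open) yields a candidate attack pattern.
for _, r in rules.iterrows():
    print(set(r["antecedents"]), "->", set(r["consequents"]),
          f"(confidence={r['confidence']:.2f})")
```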
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
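As a rough illustration of the online adversarial training idea, here is a minimal PyTorch sketch that crafts an adversarial example from each batch on the fly and trains on clean and adversarial inputs together. FGSM, `model`, `loader`, and `epsilon` are generic assumptions, not FaultGuard's actual procedure.

```python
# A minimal sketch of online adversarial training; FGSM is a generic
# stand-in for the paper's inner attack, and epsilon is illustrative.
import torch

def train_epoch(model, loader, opt, loss_fn, epsilon=0.05):
    model.train()
    for x, y in loader:
        # Craft an FGSM example on the fly from the current batch.
        x_adv = x.clone().detach().requires_grad_(True)
        loss_fn(model(x_adv), y).backward()
        x_adv = (x_adv + epsilon * x_adv.grad.sign()).detach()

        # Update on clean and adversarial inputs together.
        opt.zero_grad()
        loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)
        loss.backward()
        opt.step()
```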
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
Internet of Things (IoT) and Industrial Internet of Things (IIoT) have led to an increasing range of cyber threats.
For more than a decade, researchers have explored supervised machine learning techniques to develop Intrusion Detection Systems (IDS).
However, an IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for semi-supervised learning-based IDS that require no attack samples for training.
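A minimal sketch of the benign-only training idea follows; an Isolation Forest stands in for usfAD (whose internals are not sketched here), and the synthetic flow features are illustrative.

```python
# A minimal sketch: train only on benign traffic, then flag unseen
# attack traffic as anomalous. Isolation Forest is a stand-in for
# usfAD; the synthetic features are hypothetical flow statistics.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
benign = rng.normal(loc=0.0, scale=1.0, size=(1000, 4))  # training data
unknown = rng.normal(loc=4.0, scale=1.0, size=(20, 4))   # unseen attacks

detector = IsolationForest(contamination=0.01, random_state=0).fit(benign)

# predict() returns +1 for inliers (benign) and -1 for anomalies, so no
# labelled attack samples are needed at training time.
flagged = (detector.predict(unknown) == -1).sum()
print(f"flagged {flagged}/{len(unknown)} unknown flows as attacks")
```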
arXiv Detail & Related papers (2024-03-17T11:49:57Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- A Human-in-the-Middle Attack against Object Detection Systems [4.764637544913963]
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects it between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
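For intuition, here is a minimal PyTorch sketch of crafting a single universal perturbation shared across all frames; `model`, `loader`, the optimisation loop, the input size, and the epsilon bound are generic assumptions rather than the paper's exact UAP construction.

```python
# A minimal sketch of crafting a universal adversarial perturbation
# (UAP) with PyTorch. `model` and `loader` are assumed; the optimisation
# and epsilon bound are illustrative, not the paper's exact method.
import torch

def craft_uap(model, loader, epsilon=8 / 255, epochs=5, lr=0.01):
    model.eval()
    uap = torch.zeros(1, 3, 224, 224, requires_grad=True)  # one shared delta
    opt = torch.optim.SGD([uap], lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:
            opt.zero_grad()
            # Maximise the loss on every image with the *same* perturbation.
            (-loss_fn(model(images + uap), labels)).backward()
            opt.step()
            with torch.no_grad():
                uap.clamp_(-epsilon, epsilon)  # keep the UAP small
    return uap.detach()

# In the hardware attack, the fixed `uap` would then be added to every
# frame in transit between the USB camera and the detector.
```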
arXiv Detail & Related papers (2022-08-15T13:21:41Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection [9.924435476552702]
Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems.
Research focuses on machine learning to train such detectors automatically, achieving detection rates upwards of 99%.
Our results, however, show that these systems are ineffective at detecting unknown attacks, with detection rates dropping to between 3.2% and 14.7%.
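The core of such an evaluation can be sketched as a leave-one-attack-out experiment: the detector trains without one attack family and is then tested on it. The synthetic data and Random Forest below are illustrative stand-ins, not the paper's datasets or models.

```python
# A minimal leave-one-attack-out sketch: detection on an attack family
# the model never saw approximates performance on unknown attacks.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
benign   = rng.normal(0.0, 1.0, size=(800, 8))
attack_a = rng.normal(3.0, 1.0, size=(200, 8))   # known attack family
attack_b = rng.normal(-3.0, 1.0, size=(200, 8))  # held-out ("unknown")

# Train only on benign traffic and the known attack family.
X = np.vstack([benign, attack_a])
y = np.concatenate([np.zeros(800), np.ones(200)])
clf = RandomForestClassifier(random_state=0).fit(X, y)

print(f"detection rate, known attacks:   {clf.predict(attack_a).mean():.1%}")
print(f"detection rate, unknown attacks: {clf.predict(attack_b).mean():.1%}")
```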
arXiv Detail & Related papers (2022-05-18T20:17:33Z)
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier trained on the crafted data.
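A minimal sketch of the detection step: train a supervised classifier to separate clean inputs from crafted adversarial ones. The MLP detector, synthetic features, and constant-shift "attack" are illustrative stand-ins for the paper's deep learning-based classifier and recommendation inputs.

```python
# A minimal sketch: a binary classifier separates clean inputs from
# crafted adversarial ones. The constant-shift perturbation and MLP
# detector are illustrative, not the paper's setup.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
clean = rng.normal(size=(500, 16))      # clean interaction features
adversarial = clean + 0.5               # crafted by a consistent shift
X = np.vstack([clean, adversarial])
y = np.concatenate([np.zeros(500), np.ones(500)])  # 1 = attack

detector = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500,
                         random_state=0).fit(X, y)
print("detector training accuracy:", detector.score(X, y))
```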
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT [5.077661193116692]
Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defence approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision.
arXiv Detail & Related papers (2021-04-26T09:36:29Z)
- Adversarial defense for automatic speaker verification by cascaded self-supervised learning models [101.42920161993455]
More and more malicious attackers attempt to launch adversarial attacks against automatic speaker verification (ASV) systems.
We propose a standard and attack-agnostic method based on cascaded self-supervised learning models to purify the adversarial perturbations.
Experimental results demonstrate that the proposed method achieves effective defense performance and can successfully counter adversarial attacks.
arXiv Detail & Related papers (2021-02-14T01:56:43Z)
- Adversarial Attacks and Detection on Reinforcement Learning-Based Interactive Recommender Systems [47.70973322193384]
Adversarial attacks are challenging to detect at an early stage.
We propose attack-agnostic detection on reinforcement learning-based interactive recommendation systems.
We first craft adversarial examples to show their diverse distributions and then augment recommendation systems by detecting potential attacks.
arXiv Detail & Related papers (2020-06-14T15:41:47Z)
- Adversarial Attacks on Machine Learning Cybersecurity Defences in Industrial Control Systems [2.86989372262348]
This paper explores how adversarial learning can be used to target supervised models by generating adversarial samples.
It also explores how such samples can support the robustness of supervised models using adversarial training.
Overall, the classification performance of two widely used classifiers, Random Forest and J48, decreased by 16 and 20 percentage points when adversarial samples were present.
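The workflow can be sketched as: craft perturbed samples, measure the Random Forest's accuracy drop, then retrain with adversarial samples mixed in. The Gaussian perturbation and synthetic dataset below are generic stand-ins for the paper's attack and data; J48 (a C4.5 decision tree) is omitted.

```python
# A minimal sketch: measure a Random Forest's accuracy drop under
# perturbed samples, then adversarially train on augmented data. The
# Gaussian perturbation and synthetic dataset are illustrative.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

clf = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)
print("clean accuracy:", clf.score(X_te, y_te))

rng = np.random.default_rng(0)
X_adv = X_te + rng.normal(scale=0.5, size=X_te.shape)  # perturbed test set
print("accuracy under perturbation:", clf.score(X_adv, y_te))

# Adversarial training: augment the training set with perturbed copies.
X_aug = np.vstack([X_tr, X_tr + rng.normal(scale=0.5, size=X_tr.shape)])
y_aug = np.concatenate([y_tr, y_tr])
robust = RandomForestClassifier(random_state=0).fit(X_aug, y_aug)
print("robust accuracy under perturbation:", robust.score(X_adv, y_te))
```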
arXiv Detail & Related papers (2020-04-10T12:05:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.