Launching Adversarial Attacks against Network Intrusion Detection
Systems for IoT
- URL: http://arxiv.org/abs/2104.12426v1
- Date: Mon, 26 Apr 2021 09:36:29 GMT
- Title: Launching Adversarial Attacks against Network Intrusion Detection
Systems for IoT
- Authors: Pavlos Papadopoulos, Oliver Thornewill von Essen, Nikolaos Pitropakis,
Christos Chrysoulas, Alexios Mylonas, William J. Buchanan
- Abstract summary: Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision.
- Score: 5.077661193116692
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As the internet continues to be populated with new devices and emerging
technologies, the attack surface grows exponentially. Technology is shifting
towards a profit-driven Internet of Things market where security is an
afterthought. Traditional defense approaches are no longer sufficient to
detect both known and unknown attacks with high accuracy. Machine learning
intrusion detection systems have proven their success in identifying unknown
attacks with high precision. Nevertheless, machine learning models are also
vulnerable to attacks. Adversarial examples can be used to evaluate the
robustness of a designed model before it is deployed. Further, using
adversarial examples is critical to creating a robust model designed for an
adversarial environment. Our work evaluates both traditional machine learning
and deep learning models' robustness using the Bot-IoT dataset. Our methodology
included two main approaches. First, label poisoning was used to cause
incorrect classification by the model. Second, the fast gradient sign method
was used to evade detection measures. The experiments demonstrated that an
attacker could manipulate or circumvent detection with significant probability.
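To make the two approaches concrete, here is a minimal sketch of label poisoning and the fast gradient sign method, assuming a generic PyTorch classifier over normalized tabular features; the model, epsilon, and flip fraction are illustrative assumptions, not the paper's exact configuration.

```python
# Minimal sketch of the two attacks, assuming a generic PyTorch classifier
# over normalized tabular features; epsilon and flip_fraction are
# illustrative assumptions, not the paper's exact configuration.
import torch
import torch.nn.functional as F

def poison_labels(y, flip_fraction=0.1, num_classes=2, seed=0):
    """Label poisoning: flip a random fraction of training labels so the
    model learns an incorrect decision boundary."""
    g = torch.Generator().manual_seed(seed)
    y = y.clone()
    n_flip = int(flip_fraction * len(y))
    idx = torch.randperm(len(y), generator=g)[:n_flip]
    offsets = torch.randint(1, num_classes, (n_flip,), generator=g)
    y[idx] = (y[idx] + offsets) % num_classes
    return y

def fgsm(model, x, y, epsilon=0.05):
    """Fast gradient sign method: nudge each feature in the direction that
    increases the loss, producing evasive inputs at test time."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    x_adv = x + epsilon * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()  # keep features in [0, 1]
```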
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
Similar to face recognition, imitation attacks can also be detected with Machine Learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
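As a rough illustration of the rotation-invariant LBP features named above, here is a sketch using scikit-image; the per-frame histogram pooling and parameters are assumptions, not the paper's pipeline.

```python
# Sketch of rotation-invariant local binary pattern (LBP) features for a
# face frame, using scikit-image; parameters and histogram pooling are
# assumptions, not the paper's exact pipeline.
import numpy as np
from skimage.feature import local_binary_pattern

def ri_lbp_histogram(gray_frame, n_points=8, radius=1):
    """Rotation-invariant uniform LBP histogram for one grayscale frame."""
    lbp = local_binary_pattern(gray_frame, n_points, radius, method="uniform")
    n_bins = n_points + 2  # 'uniform' yields P + 2 rotation-invariant codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # feed per-frame histograms to a time-aware classifier
```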
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art on resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
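A hedged sketch of what an online adversarial-training step can look like, using a generic FGSM-style perturbation; FaultGuard's actual generative technique is not reproduced here.

```python
# Hedged sketch of an online adversarial training step: each minibatch is
# augmented with gradient-based perturbations before the update. This is a
# generic pattern, not FaultGuard's actual technique.
import torch
import torch.nn.functional as F

def adversarial_training_step(model, optimizer, x, y, epsilon=0.05):
    # Craft adversarial variants of the current minibatch (FGSM-style).
    x_pert = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x_pert), y).backward()
    x_adv = (x_pert + epsilon * x_pert.grad.sign()).detach()

    # Update on clean and adversarial samples together.
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```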
- usfAD Based Effective Unknown Attack Detection Focused IDS Framework [3.560574387648533]
Internet of Things (IoT) and Industrial Internet of Things (IIoT) have led to an increasing range of cyber threats.
For more than a decade, researchers have delved into supervised machine learning techniques to develop Intrusion Detection Systems (IDS).
However, an IDS trained and tested on known datasets fails to detect zero-day or unknown attacks.
We propose two strategies for a semi-supervised learning-based IDS in which training samples of attacks are not required.
arXiv Detail & Related papers (2024-03-17T11:49:57Z)
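To illustrate attack-free training, here is a sketch with scikit-learn's IsolationForest standing in as the one-class detector; usfAD itself is not reproduced here, and the data is a placeholder.

```python
# Illustration of training an IDS without attack samples, using
# scikit-learn's IsolationForest as a stand-in one-class detector;
# usfAD itself is not reproduced here.
import numpy as np
from sklearn.ensemble import IsolationForest

# X_benign: feature vectors of normal traffic only (no attack samples).
X_benign = np.random.rand(1000, 20)  # placeholder data

detector = IsolationForest(contamination=0.01, random_state=0)
detector.fit(X_benign)  # learns the shape of normal traffic only

X_test = np.random.rand(5, 20)     # placeholder traffic to score
labels = detector.predict(X_test)  # +1 = normal, -1 = flagged as attack
```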
- A Human-in-the-Middle Attack against Object Detection Systems [4.764637544913963]
We propose a novel hardware attack inspired by Man-in-the-Middle attacks in cryptography.
This attack generates a Universal Adversarial Perturbation (UAP) and injects the perturbation between the USB camera and the detection system.
These findings raise serious concerns for applications of deep learning models in safety-critical systems, such as autonomous driving.
arXiv Detail & Related papers (2022-08-15T13:21:41Z)
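A minimal sketch of the injection step only: a precomputed universal perturbation is added to every frame on the camera-to-detector path; the UAP optimization itself and the capture pipeline are assumed.

```python
# Minimal sketch of the hardware-level injection step: one precomputed
# universal adversarial perturbation (UAP) is added to every frame on the
# camera-to-detector path. Computing the UAP itself is assumed done.
import numpy as np

def inject_uap(frame, uap, epsilon=8.0):
    """Apply a frame-agnostic perturbation, clipped to an L-infinity budget."""
    delta = np.clip(uap, -epsilon, epsilon)          # enforce the budget
    tampered = np.clip(frame.astype(np.float32) + delta, 0, 255)
    return tampered.astype(np.uint8)                 # pass on to the detector
```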
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection [9.924435476552702]
Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems.
Research focuses on machine learning to train such detectors automatically, achieving reported detection rates upwards of 99%.
Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2% and 14.7%.
arXiv Detail & Related papers (2022-05-18T20:17:33Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of this transferability property.
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
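One plausible reading of such a Detect & Reject pipeline is sketched below; the detector's label convention and the rejection policy are assumptions, not the paper's exact design.

```python
# Sketch of a Detect & Reject pipeline: a separate detector screens traffic
# and flagged inputs are rejected before they reach the IDS classifier.
# Both models and the rejection policy here are assumptions.
import numpy as np

REJECT = -1

def detect_and_reject(detector, classifier, X):
    """Return class labels, or REJECT for inputs flagged as adversarial."""
    is_adversarial = detector.predict(X) == 1  # assume 1 = looks adversarial
    labels = classifier.predict(X)
    return np.where(is_adversarial, REJECT, labels)
```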
- Adversarial Robustness of Deep Reinforcement Learning based Dynamic Recommender Systems [50.758281304737444]
We propose to explore adversarial examples and attack detection on reinforcement learning-based interactive recommendation systems.
We first craft different types of adversarial examples by adding perturbations to the input and intervening on the causal factors.
Then, we augment recommendation systems by detecting potential attacks with a deep learning-based classifier based on the crafted data.
arXiv Detail & Related papers (2021-12-02T04:12:24Z)
- Attack Rules: An Adversarial Approach to Generate Attacks for Industrial Control Systems using Machine Learning [7.205662414865643]
We propose an association rule mining-based attack generation technique.
The proposed technique generated more than 300,000 attack patterns, the vast majority of which were previously unseen attack vectors.
arXiv Detail & Related papers (2021-07-11T20:20:07Z)
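As a hedged illustration of rule mining over attack records, here is a sketch using mlxtend's apriori and association_rules; the feature columns and thresholds are placeholders, not the paper's actual rule-generation procedure.

```python
# Hedged sketch of association rule mining over one-hot encoded attack
# records with mlxtend; feature names and thresholds are illustrative,
# not the paper's actual procedure.
import pandas as pd
from mlxtend.frequent_patterns import apriori, association_rules

# Placeholder one-hot records of observed attack traffic features.
records = pd.DataFrame({
    "proto_tcp":   [1, 1, 0, 1],
    "port_502":    [1, 1, 1, 0],
    "payload_big": [0, 1, 1, 1],
}).astype(bool)

itemsets = apriori(records, min_support=0.5, use_colnames=True)
rules = association_rules(itemsets, metric="confidence", min_threshold=0.7)

# Each rule's antecedents/consequents suggest feature combinations that
# could be recombined into previously unseen attack patterns.
print(rules[["antecedents", "consequents", "confidence"]])
```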
- Adversarial Attacks on Machine Learning Systems for High-Frequency Trading [55.30403936506338]
We study valuation models for algorithmic trading from the perspective of adversarial machine learning.
We introduce new attacks specific to this domain with size constraints that minimize attack costs.
We discuss how these attacks can be used as an analysis tool to study and evaluate the robustness properties of financial models.
arXiv Detail & Related papers (2020-02-21T22:04:35Z)
- Adversarial vs behavioural-based defensive AI with joint, continual and active learning: automated evaluation of robustness to deception, poisoning and concept drift [62.997667081978825]
Recent advancements in Artificial Intelligence (AI) have brought new capabilities to user and entity behaviour analytics (UEBA) for cyber-security.
In this paper, we present a solution to effectively mitigate such adversarial attacks by improving the detection process and efficiently leveraging human expertise.
arXiv Detail & Related papers (2020-01-13T13:54:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.