Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems
- URL: http://arxiv.org/abs/2310.00070v1
- Date: Fri, 29 Sep 2023 18:20:05 GMT
- Title: Adversarial Explainability: Utilizing Explainable Machine Learning in Bypassing IoT Botnet Detection Systems
- Authors: Mohammed M. Alani, Atefeh Mashatan, Ali Miri,
- Abstract summary: Botnet detection based on machine learning has witnessed significant leaps in recent years.
Adversarial attacks on machine learning-based cybersecurity systems pose a significant threat to these solutions.
In this paper, we introduce a novel attack that utilizes a machine learning model's explainability to evade detection by botnet detection systems.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Botnet detection based on machine learning has witnessed significant leaps in recent years, with the availability of large and reliable datasets extracted from real-life scenarios. Consequently, adversarial attacks on machine learning-based cybersecurity systems pose a significant threat to the practicality of these solutions. In this paper, we introduce a novel attack that utilizes a machine learning model's explainability to evade detection by botnet detection systems. The proposed attack uses information obtained from the model's explainability to build adversarial samples that can evade detection in a black-box setting. The proposed attack was tested on a trained IoT botnet detection system and was capable of bypassing detection entirely (a 0% detection rate) while altering only a single feature to generate the adversarial samples.
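The attack's core loop described above — obtain per-feature explanations, identify the single most incriminating feature, and alter only that feature — can be illustrated with a toy sketch. Everything below (the linear detector, the feature names, weights, bias, and baseline values) is a hypothetical assumption chosen for illustration; it is not the paper's model, dataset, or explainability method. For a linear detector, explanation-style attributions reduce to w_i * (x_i - baseline_i), which makes the idea easy to show:

```python
import math

# Hypothetical linear "botnet detector": weights, baseline, and bias are made up.
WEIGHTS = {"pkt_rate": 2.0, "dst_entropy": 1.5, "avg_pkt_size": -0.5}
BASELINE = {"pkt_rate": 0.1, "dst_entropy": 0.2, "avg_pkt_size": 0.5}
BIAS = -1.5

def score(x):
    # Probability that the flow is botnet traffic (logistic over a linear score).
    z = BIAS + sum(WEIGHTS[f] * x[f] for f in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

def attributions(x):
    # Linear-model attribution: each feature's contribution vs. a benign baseline.
    return {f: WEIGHTS[f] * (x[f] - BASELINE[f]) for f in WEIGHTS}

def evade_one_feature(x):
    # Alter only the feature with the largest positive attribution,
    # moving it back to its benign baseline value.
    attr = attributions(x)
    worst = max(attr, key=attr.get)
    adv = dict(x)
    adv[worst] = BASELINE[worst]
    return worst, adv

malicious = {"pkt_rate": 0.9, "dst_entropy": 0.8, "avg_pkt_size": 0.4}
feature, adv = evade_one_feature(malicious)
print(feature)                 # the single feature the attack alters
print(score(malicious) > 0.5)  # original sample is detected
print(score(adv) > 0.5)        # adversarial sample evades detection
```

With these illustrative numbers, changing just the highest-attribution feature flips the detector's decision, mirroring the paper's one-feature evasion result in miniature.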
Related papers
- Time-Aware Face Anti-Spoofing with Rotation Invariant Local Binary Patterns and Deep Learning [50.79277723970418]
Imitation attacks can lead to erroneous identification and subsequent authentication of attackers.
As with face recognition, imitation attacks can also be detected with machine learning.
We propose a novel approach that promises high classification accuracy by combining previously unused features with time-aware deep learning strategies.
arXiv Detail & Related papers (2024-08-27T07:26:10Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they could cause any more damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of the transferability property of adversarial network traffic against machine learning-based intrusion detection systems.
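A minimal sketch of the Detect & Reject idea: a screening step flags suspected adversarial inputs, and the IDS refuses to classify anything flagged rather than risk a manipulated decision. The range-based detector, feature names, and thresholds below are illustrative assumptions, not the paper's actual mechanism:

```python
# Hypothetical Detect & Reject pipeline (illustrative features and thresholds).
def adversarial_detector(x, train_min, train_max):
    # Flag samples whose features fall outside the range observed in training.
    return any(not (train_min[f] <= v <= train_max[f]) for f, v in x.items())

def classify(x):
    # Stand-in traffic classifier.
    return "malicious" if x["pkt_rate"] > 0.5 else "benign"

def detect_and_reject(x, train_min, train_max):
    # Reject flagged inputs instead of passing them to the classifier.
    if adversarial_detector(x, train_min, train_max):
        return "rejected"
    return classify(x)

train_min = {"pkt_rate": 0.0, "flow_duration": 0.1}
train_max = {"pkt_rate": 1.0, "flow_duration": 5.0}
normal = {"pkt_rate": 0.7, "flow_duration": 2.0}
crafted = {"pkt_rate": 0.7, "flow_duration": 9.0}  # transferred adversarial sample
print(detect_and_reject(normal, train_min, train_max))   # malicious
print(detect_and_reject(crafted, train_min, train_max))  # rejected
```

The design choice here is that rejection is a safe third outcome: a transferred adversarial sample that the classifier alone would mislabel never reaches the classifier at all.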
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- CAN-LOC: Spoofing Detection and Physical Intrusion Localization on an In-Vehicle CAN Bus Based on Deep Features of Voltage Signals [48.813942331065206]
We propose a security hardening system for in-vehicle networks.
The proposed system includes two mechanisms that process deep features extracted from voltage signals measured on the CAN bus.
arXiv Detail & Related papers (2021-06-15T06:12:33Z)
- Launching Adversarial Attacks against Network Intrusion Detection Systems for IoT [5.077661193116692]
Technology is shifting towards a profit-driven Internet of Things market where security is an afterthought.
Traditional defense approaches are no longer sufficient to detect both known and unknown attacks with high accuracy.
Machine learning intrusion detection systems have proven their success in identifying unknown attacks with high precision.
arXiv Detail & Related papers (2021-04-26T09:36:29Z)
- No Need to Know Physics: Resilience of Process-based Model-free Anomaly Detection for Industrial Control Systems [95.54151664013011]
We present a novel framework to generate adversarial spoofing signals that violate physical properties of the system.
We analyze four anomaly detectors published at top security conferences.
arXiv Detail & Related papers (2020-12-07T11:02:44Z)
- Botnet Detection Using Recurrent Variational Autoencoder [4.486436314247216]
Botnets are increasingly used by malicious actors, posing a growing threat to a large number of internet users.
We propose a novel machine learning based method, named Recurrent Variational Autoencoder (RVAE), for detecting botnets.
Tests show that RVAE is able to detect botnets with the same accuracy as the best known results published in the literature.
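The RVAE itself is too large for a short sketch, but the decision rule an autoencoder-based detector typically feeds is simple: flows whose reconstruction error exceeds a threshold are flagged as botnet traffic. The error values and threshold below are made-up illustrations, not results from the paper:

```python
# Thresholding reconstruction error, the decision rule behind
# autoencoder-style anomaly detectors (illustrative numbers only).
def flag_botnets(reconstruction_errors, threshold):
    # A flow the model reconstructs poorly is unlike normal traffic.
    return [fid for fid, err in reconstruction_errors.items() if err > threshold]

errors = {"flow1": 0.02, "flow2": 0.91, "flow3": 0.05}
print(flag_botnets(errors, threshold=0.5))  # ['flow2']
```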
arXiv Detail & Related papers (2020-04-01T05:03:34Z)
- Automating Botnet Detection with Graph Neural Networks [106.24877728212546]
Botnets are now a major source for many network attacks, such as DDoS attacks and spam.
In this paper, we consider the neural network design challenges of using modern deep learning techniques to learn policies for botnet detection automatically.
arXiv Detail & Related papers (2020-03-13T15:34:33Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advances of machine learning for network anomaly detection, it is no longer easy for a human to understand how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples of network data, we can still use adversarial attacks to successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.