NAttack! Adversarial Attacks to bypass a GAN based classifier trained to
detect Network intrusion
- URL: http://arxiv.org/abs/2002.08527v1
- Date: Thu, 20 Feb 2020 01:54:45 GMT
- Title: NAttack! Adversarial Attacks to bypass a GAN based classifier trained to
detect Network intrusion
- Authors: Aritran Piplai, Sai Sree Laya Chukkapalli, Anupam Joshi
- Abstract summary: Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advancement of machine learning for network anomaly detection, it is no longer easy for a human to work out how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples for network data, we can use adversarial attacks to successfully break the system.
- Score: 0.3007949058551534
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the recent developments in artificial intelligence and machine
learning, anomalies in network traffic can be detected using machine learning
approaches. Before the rise of machine learning, network anomalies that could
imply an attack were detected using well-crafted rules. An attacker with
knowledge of cyber-defence could make educated guesses and sometimes accurately
predict which particular features of network traffic data the cyber-defence
mechanism is examining. With this information, the attacker can circumvent a
rule-based cyber-defence system. However, with the advancement of machine
learning for network anomaly detection, it is no longer easy for a human to
work out how to bypass a cyber-defence system. Recently, adversarial attacks
have become an increasingly common way to defeat machine learning algorithms.
In this paper, we show that even if we build a classifier and train it with
adversarial examples for network data, we can use adversarial attacks to
successfully break the system. We propose a Generative Adversarial Network
(GAN) based algorithm to generate data to train an efficient neural network
based classifier, and we subsequently break the system using adversarial
attacks.
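The abstract gives only the high-level recipe, so the following is a minimal sketch of that recipe under stated assumptions: a small GAN whose generator produces synthetic flow-feature vectors to augment the classifier's training data, followed by an FGSM-style gradient attack used to craft evasive inputs. The feature count, network sizes, and perturbation budget `eps` are illustrative placeholders, not values from the paper.

```python
# Hypothetical sketch: GAN-generated training data for a flow classifier,
# then an FGSM-style evasion attack. Architectures, the feature count, and
# the attack budget `eps` are illustrative assumptions, not the paper's values.
import torch
import torch.nn as nn

N_FEATURES = 41          # e.g. a flow-feature vector of this size (assumption)
LATENT_DIM = 16

generator = nn.Sequential(
    nn.Linear(LATENT_DIM, 64), nn.ReLU(),
    nn.Linear(64, N_FEATURES), nn.Sigmoid(),   # features scaled to [0, 1]
)
discriminator = nn.Sequential(
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 1),
)
classifier = nn.Sequential(                     # benign vs. attack
    nn.Linear(N_FEATURES, 64), nn.ReLU(),
    nn.Linear(64, 2),
)

bce = nn.BCEWithLogitsLoss()
ce = nn.CrossEntropyLoss()
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-3)


def gan_step(real_x):
    """One GAN training step on a batch of real flow-feature vectors in [0, 1]."""
    z = torch.randn(real_x.size(0), LATENT_DIM)
    fake_x = generator(z)

    # Discriminator update: real flows -> 1, generated flows -> 0.
    d_loss = bce(discriminator(real_x), torch.ones(real_x.size(0), 1)) \
           + bce(discriminator(fake_x.detach()), torch.zeros(real_x.size(0), 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # Generator update: make generated flows look real to the discriminator.
    g_loss = bce(discriminator(fake_x), torch.ones(real_x.size(0), 1))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return fake_x.detach()    # synthetic flows to augment classifier training


def fgsm_evasion(x, y_true, eps=0.05):
    """FGSM-style perturbation pushing x away from its true label (illustrative)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = ce(classifier(x_adv), y_true)
    loss.backward()
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()
```

In this sketch the classifier would be trained on a mix of real flows and the outputs of `gan_step`, and `fgsm_evasion` would then test whether perturbed attack flows still slip past it; the paper's actual attack, feature set, and constraints on what counts as valid network traffic may differ.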
Related papers
- An anomaly detection approach for backdoored neural networks: face
recognition as a case study [77.92020418343022]
We propose a novel backdoored network detection method based on the principle of anomaly detection.
We test our method on a novel dataset of backdoored networks and report detectability results with perfect scores.
arXiv Detail & Related papers (2022-08-22T12:14:13Z)
- Searching for the Essence of Adversarial Perturbations [73.96215665913797]
We show that adversarial perturbations contain human-recognizable information, which is the key conspirator responsible for a neural network's erroneous prediction.
This concept of human-recognizable information allows us to explain key features related to adversarial perturbations.
arXiv Detail & Related papers (2022-05-30T18:04:57Z)
- Early Detection of Network Attacks Using Deep Learning [0.0]
A network intrusion detection system (IDS) is a tool used for identifying unauthorized and malicious behavior by observing the network traffic.
We propose an end-to-end early intrusion detection system to prevent network attacks before they can cause further damage to the system under attack.
arXiv Detail & Related papers (2022-01-27T16:35:37Z)
- Detect & Reject for Transferability of Black-box Adversarial Attacks Against Network Intrusion Detection Systems [0.0]
We investigate the transferability of adversarial network traffic against machine learning-based intrusion detection systems.
We examine Detect & Reject as a defensive mechanism to limit the effect of the transferability property of adversarial network traffic against machine learning-based intrusion detection systems.
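As a rough illustration of the Detect & Reject mechanism described above, the sketch below wraps an intrusion classifier with an auxiliary detector that scores how likely an input flow is to be adversarial and rejects flagged flows instead of classifying them. The detector, the labels, and the 0.5 rejection threshold are assumptions for illustration, not details from the paper.

```python
# Hypothetical Detect & Reject wrapper: an auxiliary detector scores each
# flow's likelihood of being adversarial; flagged flows are rejected rather
# than classified. Detector, labels, and threshold are assumptions.
from dataclasses import dataclass
from typing import Callable, Sequence


@dataclass
class DetectAndRejectIDS:
    detector: Callable[[Sequence[float]], float]    # estimated P(adversarial | flow)
    classifier: Callable[[Sequence[float]], str]    # returns "benign" or "attack"
    reject_threshold: float = 0.5                   # illustrative cut-off

    def predict(self, flow: Sequence[float]) -> str:
        # Reject suspected adversarial flows instead of letting them reach the IDS.
        if self.detector(flow) >= self.reject_threshold:
            return "rejected"
        return self.classifier(flow)


# Toy stand-ins for the detector and the intrusion classifier.
ids = DetectAndRejectIDS(
    detector=lambda flow: 0.9 if max(flow) > 1.0 else 0.1,      # dummy detector
    classifier=lambda flow: "attack" if sum(flow) > 5.0 else "benign",
)
print(ids.predict([0.2, 0.4, 0.1]))   # -> "benign"
print(ids.predict([3.0, 2.5, 0.9]))   # -> "rejected"
```

Any detector trained to separate clean flows from adversarially perturbed ones could be plugged in; the trade-off is that a stricter threshold also rejects more legitimate traffic.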
arXiv Detail & Related papers (2021-12-22T17:54:54Z)
- A Heterogeneous Graph Learning Model for Cyber-Attack Detection [4.559898668629277]
A cyber-attack is a malicious attempt by hackers to breach the target information system.
This paper proposes an intelligent cyber-attack detection method based on provenance data.
Experiment results show that the proposed method outperforms other learning based detection models.
arXiv Detail & Related papers (2021-12-16T16:03:39Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- FooBaR: Fault Fooling Backdoor Attack on Neural Network Training [5.639451539396458]
We explore a novel attack paradigm by injecting faults during the training phase of a neural network in a way that the resulting network can be attacked during deployment without the necessity of further faulting.
We call such attacks fooling backdoors as the fault attacks at the training phase inject backdoors into the network that allow an attacker to produce fooling inputs.
arXiv Detail & Related papers (2021-09-23T09:43:19Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- TANTRA: Timing-Based Adversarial Network Traffic Reshaping Attack [46.79557381882643]
We present TANTRA, a novel end-to-end Timing-based Adversarial Network Traffic Reshaping Attack.
Our evasion attack utilizes a long short-term memory (LSTM) deep neural network (DNN) which is trained to learn the time differences between the target network's benign packets.
TANTRA achieves an average success rate of 99.99% in network intrusion detection system evasion.
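A minimal sketch of the timing-reshaping idea, under the assumption that inter-packet delays are modelled as a scalar sequence: an LSTM is trained on benign inter-packet time differences, and its rolling predictions provide the delay schedule used to re-time malicious packets. The window size, layer sizes, and the omitted training loop are simplifications, not TANTRA's actual configuration.

```python
# Hypothetical sketch of an LSTM that learns benign inter-packet delays;
# its rolling predictions would then re-time malicious traffic.
# Window size, hidden size, and the training loop are assumptions.
import torch
import torch.nn as nn


class DelayLSTM(nn.Module):
    def __init__(self, hidden: int = 32):
        super().__init__()
        self.lstm = nn.LSTM(input_size=1, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, delays: torch.Tensor) -> torch.Tensor:
        # delays: (batch, window, 1) benign inter-packet time differences
        out, _ = self.lstm(delays)
        return self.head(out[:, -1, :])      # next-delay prediction, (batch, 1)


def reshape_timing(model: DelayLSTM, benign_window: torch.Tensor, n_packets: int):
    """Roll the model forward to produce a delay schedule for malicious packets."""
    window = benign_window.clone()           # shape (1, window, 1)
    schedule = []
    with torch.no_grad():
        for _ in range(n_packets):
            nxt = model(window).clamp(min=0.0)     # delays are non-negative
            schedule.append(float(nxt))
            window = torch.cat([window[:, 1:, :], nxt.view(1, 1, 1)], dim=1)
    return schedule
```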
arXiv Detail & Related papers (2021-03-10T19:03:38Z)
- Firearm Detection and Segmentation Using an Ensemble of Semantic Neural Networks [62.997667081978825]
We present a weapon detection system based on an ensemble of semantic Convolutional Neural Networks.
A set of simpler neural networks dedicated to specific tasks requires fewer computational resources and can be trained in parallel.
The overall output of the system given by the aggregation of the outputs of individual networks can be tuned by a user to trade-off false positives and false negatives.
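The user-tunable aggregation can be pictured with a short sketch, assuming each task-specific network emits a per-frame weapon probability and that the outputs are combined by a simple mean (the combination rule and the threshold value are illustrative assumptions): lowering the threshold reduces false negatives at the cost of more false positives, and raising it does the opposite.

```python
# Hypothetical aggregation of per-network weapon scores; the mean rule and
# threshold value are assumptions used only to illustrate the FP/FN trade-off.
import numpy as np


def ensemble_decision(scores: np.ndarray, threshold: float = 0.5) -> bool:
    """scores: shape (n_networks,) of per-network weapon probabilities."""
    return bool(scores.mean() >= threshold)


# Example: three task-specific networks score the same frame.
frame_scores = np.array([0.82, 0.64, 0.71])
print(ensemble_decision(frame_scores, threshold=0.6))   # True -> flag the frame
```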
arXiv Detail & Related papers (2020-02-11T13:58:16Z)
- The Threat of Adversarial Attacks on Machine Learning in Network Security -- A Survey [4.164845768197488]
Applications of machine learning in network security face a disproportionate threat of active adversarial attacks compared to other domains.
In this survey, we first provide a taxonomy of machine learning techniques, tasks, and depth.
We examine various adversarial attacks against machine learning in network security and introduce two classification approaches for adversarial attacks in network security.
arXiv Detail & Related papers (2019-11-06T20:29:56Z)