SFE-GACN: A Novel Unknown Attack Detection Method Using Intra Categories
Generation in Embedding Space
- URL: http://arxiv.org/abs/2004.05693v2
- Date: Wed, 17 Mar 2021 14:54:00 GMT
- Title: SFE-GACN: A Novel Unknown Attack Detection Method Using Intra Categories
Generation in Embedding Space
- Authors: Ao Liu, Yunpeng Wang, Tao Li
- Abstract summary: In encrypted network traffic intrusion detection, deep learning-based schemes have attracted a lot of attention.
In this paper, we propose a novel unknown attack detection method based on Intra Categories Generation in Embedding Space.
The detection results show that, compared to the state-of-the-art method, the average TPR is 8.38% higher, and the average FPR is 12.77% lower.
- Score: 15.539505627198109
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In encrypted network traffic intrusion detection, deep learning-based
schemes have attracted a lot of attention. However, in real-world scenarios,
data are often insufficient (few-shot), which leads to various deviations
between the model's predictions and the ground truth. Consequently, downstream
tasks such as unknown attack detection based on few-shot data are limited by
the insufficient data. In this paper, we propose a novel unknown attack detection
method based on Intra Categories Generation in Embedding Space, namely
SFE-GACN, which may address the few-shot problem. Concretely, we first
propose Session Feature Embedding (SFE) to summarize the context of sessions
(the session is the basic granularity of network traffic) and bring the
insufficient data into a pre-trained embedding space. In this way, we achieve
preliminary information extension in the few-shot case. Second, we further
propose the Generative Adversarial Cooperative Network (GACN), which improves
the conventional Generative Adversarial Network by supervising the generated
samples so that they do not fall into similar categories, and thus enables
intra-category sample generation. Our proposed SFE-GACN can accurately generate
session samples in the few-shot case and ensures the difference between
categories during data augmentation. The detection results show that, compared
to the state-of-the-art method, the average TPR is 8.38% higher and the
average FPR is 12.77% lower. In addition, we evaluated the graphics generation
capability of GACN on a graphics dataset; the results show that our proposed
GACN can be generalized to generating easily confused multi-category graphics.
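For intuition only, the sketch below shows one way the intra-category generation idea described in the abstract could be wired together: a GAN-style generator works in the session embedding space, conditioned on a few-shot embedding, while an auxiliary category classifier penalizes generated samples that drift toward other, easily confused categories. This is a minimal illustration under stated assumptions, not the authors' implementation; the module sizes, the loss weight, and names such as `category_guard_loss` are assumptions.

```python
# Minimal sketch (not the authors' code) of intra-category generation in an
# embedding space: a GAN-style generator plus an auxiliary classifier that
# keeps generated samples inside their own category. Dimensions, loss weight,
# and helper names are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

EMB_DIM, NOISE_DIM, NUM_CLASSES = 64, 16, 5  # assumed sizes

class Generator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(NOISE_DIM + EMB_DIM, 128), nn.ReLU(),
            nn.Linear(128, EMB_DIM),
        )
    def forward(self, z, seed_emb):
        # Condition on a real few-shot embedding so samples stay near its category.
        return self.net(torch.cat([z, seed_emb], dim=1))

class Discriminator(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(EMB_DIM, 128), nn.ReLU(), nn.Linear(128, 1))
    def forward(self, x):
        return self.net(x)

def category_guard_loss(classifier, fake_emb, target_class):
    """Penalize generated embeddings that a fixed classifier assigns to
    other categories, encouraging intra-category generation."""
    return F.cross_entropy(classifier(fake_emb), target_class)

# One illustrative generator update
gen, disc = Generator(), Discriminator()
classifier = nn.Linear(EMB_DIM, NUM_CLASSES)   # stand-in for a pre-trained category classifier
opt_g = torch.optim.Adam(gen.parameters(), lr=1e-3)

real_emb = torch.randn(8, EMB_DIM)             # few-shot session embeddings (assumed output of SFE)
labels = torch.randint(0, NUM_CLASSES, (8,))
z = torch.randn(8, NOISE_DIM)

fake_emb = gen(z, real_emb)
adv_loss = F.binary_cross_entropy_with_logits(disc(fake_emb), torch.ones(8, 1))
g_loss = adv_loss + 1.0 * category_guard_loss(classifier, fake_emb, labels)  # weight is an assumption
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

In this reading, the adversarial term keeps generated embeddings realistic while the guard term discourages them from being confused with neighbouring categories; how the paper balances and schedules these objectives is not specified here.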
Related papers
- Towards Cross-domain Few-shot Graph Anomaly Detection [6.732699844225434]
Cross-domain few-shot graph anomaly detection (GAD) is nontrivial owing to inherent data distribution discrepancies between the source and target domains.
We propose a simple and effective framework, termed CDFS-GAD, specifically designed to tackle the aforementioned challenges.
arXiv Detail & Related papers (2024-10-11T08:47:25Z) - HGAttack: Transferable Heterogeneous Graph Adversarial Attack [63.35560741500611]
Heterogeneous Graph Neural Networks (HGNNs) are increasingly recognized for their performance in areas like the web and e-commerce.
This paper introduces HGAttack, the first dedicated gray box evasion attack method for heterogeneous graphs.
arXiv Detail & Related papers (2024-01-18T12:47:13Z) - A Parameterized Generative Adversarial Network Using Cyclic Projection
for Explainable Medical Image Classification [17.26012062961371]
ParaGAN is a parameterized GAN that effectively controls the changes of synthetic samples among domains and highlights the attention regions for downstream classification.
Our experiments show that ParaGAN can consistently outperform the existing augmentation methods with explainable classification on two small-scale medical datasets.
arXiv Detail & Related papers (2023-11-24T10:07:14Z) - Small Object Detection via Coarse-to-fine Proposal Generation and
Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - GAN Based Boundary Aware Classifier for Detecting Out-of-distribution
Samples [24.572516991009323]
We propose a GAN-based boundary-aware classifier (GBAC) for generating a closed hyperspace that contains most of the in-distribution (ID) data.
Our method is based on the fact that a traditional neural network separates the feature space into several unclosed regions, which are not suitable for OOD detection.
With GBAC as an auxiliary module, OOD data distributed outside the closed hyperspace are assigned much lower scores, allowing more effective OOD detection.
arXiv Detail & Related papers (2021-12-22T03:35:54Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative-based adversarial attacks can overcome this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution (a minimal sketch of this idea appears after this list).
Our model can be trained on a single GPU, making it compute-efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Selective and Features based Adversarial Example Detection [12.443388374869745]
Security-sensitive applications that rely on Deep Neural Networks (DNNs) are vulnerable to small perturbations crafted to generate Adversarial Examples (AEs).
We propose a novel unsupervised detection mechanism that uses selective prediction, processing of model layer outputs, and knowledge transfer concepts in a multi-task learning setting.
Experimental results show that the proposed approach achieves results comparable to the state-of-the-art methods against the tested attacks in the white-box scenario and better results in the black-box and gray-box scenarios.
arXiv Detail & Related papers (2021-03-09T11:06:15Z) - A Biased Graph Neural Network Sampler with Near-Optimal Regret [57.70126763759996]
Graph neural networks (GNNs) have emerged as a vehicle for applying deep network architectures to graph and relational data.
In this paper, we build upon existing work and treat GNN neighbor sampling as a multi-armed bandit problem.
We introduce a newly designed reward function that adds some degree of bias to reduce variance and avoid unstable, possibly unbounded payouts.
arXiv Detail & Related papers (2021-03-01T15:55:58Z)
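As referenced in the DAAIN entry above, the sketch below illustrates the general activation-density idea for OOD and adversarial input detection: record a hidden layer's activations on in-distribution data, fit a density model over them, and flag inputs whose activations have low likelihood. This is an illustration under assumptions, not the paper's method: a diagonal Gaussian stands in for the normalizing flow, and the hooked layer, model, and threshold are all made up for the example.

```python
# Minimal sketch of activation-density scoring in the spirit of DAAIN. A diagonal
# Gaussian replaces the paper's normalizing flow; the model, hooked layer, and
# threshold are assumptions for illustration only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(20, 64), nn.ReLU(), nn.Linear(64, 10))
hidden = {}
model[1].register_forward_hook(lambda m, i, o: hidden.update(act=o.detach()))

def activations(x):
    model(x)
    return hidden["act"]

# Fit the density estimator on in-distribution training activations.
train_x = torch.randn(256, 20)
acts = activations(train_x)
mu, std = acts.mean(0), acts.std(0) + 1e-6
dist = torch.distributions.Normal(mu, std)

def ood_score(x):
    # Lower log-likelihood of the activations => more likely OOD / adversarial.
    return dist.log_prob(activations(x)).sum(dim=1)

threshold = ood_score(train_x).quantile(0.05)   # assumed operating point
test_x = torch.randn(4, 20) * 5                 # toy "anomalous" inputs
print(ood_score(test_x) < threshold)            # True => flagged as suspicious
```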
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.