Assessing the Impact of a Supervised Classification Filter on Flow-based
Hybrid Network Anomaly Detection
- URL: http://arxiv.org/abs/2310.06656v1
- Date: Tue, 10 Oct 2023 14:30:04 GMT
- Authors: Dominik Macko, Patrik Goldschmidt, Peter Pištek, Daniela Chudá
- Abstract summary: This paper aims to measure the impact of a supervised filter (classifier) in network anomaly detection.
We extend a state-of-the-art autoencoder-based anomaly detection method by prepending a binary classifier acting as a prefilter for the anomaly detector.
Our empirical results indicate that the hybrid approach does offer a higher detection rate of known attacks than a standalone anomaly detector.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The constant evolution and emergence of new cyberattacks require the
development of advanced defense techniques. This paper aims to measure the
impact of a supervised filter (classifier) in network anomaly detection. We
perform our experiments by employing a hybrid anomaly detection approach in
network flow data. For this purpose, we extended a state-of-the-art
autoencoder-based anomaly detection method by prepending a binary classifier
acting as a prefilter for the anomaly detector. The method was evaluated on the
publicly available real-world dataset UGR'16. Our empirical results indicate
that the hybrid approach does offer a higher detection rate of known attacks
than a standalone anomaly detector while still retaining the ability to detect
zero-day attacks. Employing a supervised binary prefilter has increased the AUC
metric by over 11%, detecting 30% more attacks while keeping the number of
false positives approximately the same.
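The two-stage pipeline from the abstract can be sketched as follows. This is a minimal illustration of the idea, not the authors' implementation: the random forest prefilter, the small MLP standing in for the autoencoder, the synthetic flow features, and the 99th-percentile threshold are all illustrative choices.

```python
# Hedged sketch: a supervised prefilter followed by an autoencoder-style
# anomaly detector, as in the hybrid approach described above.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic "flow features": benign traffic around 0, known attacks shifted.
benign = rng.normal(0.0, 1.0, size=(500, 8))
known_attack = rng.normal(4.0, 1.0, size=(100, 8))
X_train = np.vstack([benign, known_attack])
y_train = np.concatenate([np.zeros(500), np.ones(100)])

# Stage 1: supervised prefilter flags known attacks outright.
prefilter = RandomForestClassifier(n_estimators=50, random_state=0)
prefilter.fit(X_train, y_train)

# Stage 2: autoencoder trained on benign traffic only; a high
# reconstruction error marks a flow as anomalous (possible zero-day).
autoencoder = MLPRegressor(hidden_layer_sizes=(4,), max_iter=2000,
                           random_state=0)
autoencoder.fit(benign, benign)
benign_err = np.mean((autoencoder.predict(benign) - benign) ** 2, axis=1)
threshold = np.percentile(benign_err, 99)

def detect(x):
    """Return True if a flow is flagged by either stage."""
    x = x.reshape(1, -1)
    if prefilter.predict(x)[0] == 1:                # known attack
        return True
    err = np.mean((autoencoder.predict(x) - x) ** 2)
    return err > threshold                          # anomalous / unknown

zero_day = rng.normal(-5.0, 1.0, size=8)            # pattern unseen in training
print(detect(known_attack[0]), detect(zero_day), detect(benign[0]))
```

Note how the two stages divide the work: the prefilter catches attacks it was trained on, while the reconstruction-error stage retains the ability to flag traffic unlike anything seen before.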
Related papers
- LLM-based Continuous Intrusion Detection Framework for Next-Gen Networks [0.7100520098029439]
The framework employs a transformer encoder architecture, which captures hidden patterns in a bidirectional manner to differentiate between malicious and legitimate traffic.
The system incrementally identifies unknown attack types by leveraging a Gaussian Mixture Model (GMM) to cluster features derived from high-dimensional BERT embeddings.
Even after integrating additional unknown attack clusters, the framework continues to perform at a high level, achieving 95.6% in both classification accuracy and recall.
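The clustering step described in this summary can be sketched in a few lines. Synthetic 2-D vectors stand in for the high-dimensional BERT embeddings, and the two-component mixture is an illustrative assumption, not the paper's configuration.

```python
# Hedged sketch: grouping embedding features with a Gaussian Mixture Model
# so that a previously unseen attack type surfaces as its own cluster.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
known = rng.normal([0, 0], 0.3, size=(200, 2))      # known traffic embeddings
unknown = rng.normal([5, 5], 0.3, size=(50, 2))     # unseen attack type
X = np.vstack([known, unknown])

gmm = GaussianMixture(n_components=2, random_state=1).fit(X)
labels = gmm.predict(X)

# The unseen attack should land almost entirely in its own component.
unknown_label = np.bincount(labels[200:]).argmax()
purity = np.mean(labels[200:] == unknown_label)
print(purity)
```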
arXiv Detail & Related papers (2024-11-04T18:12:14Z) - On the Universal Adversarial Perturbations for Efficient Data-free
Adversarial Detection [55.73320979733527]
We propose a data-agnostic adversarial detection framework that exploits the different responses of normal and adversarial samples to UAPs.
Experimental results show that our method achieves competitive detection performance on various text classification tasks.
arXiv Detail & Related papers (2023-06-27T02:54:07Z) - Self-Supervised Masked Convolutional Transformer Block for Anomaly
Detection [122.4894940892536]
We present a novel self-supervised masked convolutional transformer block (SSMCTB) that comprises the reconstruction-based functionality at a core architectural level.
In this work, we extend our previous self-supervised predictive convolutional attentive block (SSPCAB) with a 3D masked convolutional layer, a transformer for channel-wise attention, as well as a novel self-supervised objective based on Huber loss.
arXiv Detail & Related papers (2022-09-25T04:56:10Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
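The monitoring idea in this summary can be sketched as below. A Gaussian kernel density estimator stands in for the normalizing flow used by the paper, and the tiny MLP is an illustrative stand-in for the monitored network.

```python
# Hedged sketch: fit a density estimator to a network's hidden activations
# on in-distribution data and flag inputs whose activations are unlikely.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.neighbors import KernelDensity

rng = np.random.default_rng(2)
X = rng.normal(0, 1, size=(300, 4))
y = (X[:, 0] > 0).astype(int)

net = MLPClassifier(hidden_layer_sizes=(8,), max_iter=1000,
                    random_state=2).fit(X, y)

def activations(x):
    """First hidden-layer (ReLU) activations of the monitored network."""
    return np.maximum(0, x @ net.coefs_[0] + net.intercepts_[0])

# Density estimator over in-distribution activations; threshold at the
# 1st percentile of in-distribution log-likelihood.
kde = KernelDensity(bandwidth=0.5).fit(activations(X))
threshold = np.percentile(kde.score_samples(activations(X)), 1)

ood = rng.normal(6, 1, size=(20, 4))            # out-of-distribution inputs
flagged = kde.score_samples(activations(ood)) < threshold
print(flagged.mean())
```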
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - Anomaly Detection of Test-Time Evasion Attacks using Class-conditional
Generative Adversarial Networks [21.023722317810805]
We propose an attack detector based on class-conditional Generative Adversarial Networks (GANs).
We model the distribution of clean data conditioned on a predicted class label by an Auxiliary Classifier GAN (AC-GAN).
Experiments on image classification datasets under different TTE attack methods show that our method outperforms state-of-the-art detection methods.
arXiv Detail & Related papers (2021-05-21T02:51:58Z) - Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z) - Detecting Backdoors in Neural Networks Using Novel Feature-Based Anomaly
Detection [16.010654200489913]
This paper proposes a new defense against neural network backdooring attacks.
It is based on the intuition that the feature extraction layers of a backdoored network embed new features to detect the presence of a trigger.
To detect backdoors, the proposed defense uses two synergistic anomaly detectors trained on clean validation data.
arXiv Detail & Related papers (2020-11-04T20:33:51Z) - Bayesian Optimization with Machine Learning Algorithms Towards Anomaly
Detection [66.05992706105224]
In this paper, an effective anomaly detection framework is proposed utilizing Bayesian Optimization technique.
The performance of the considered algorithms is evaluated using the ISCX 2012 dataset.
Experimental results show the effectiveness of the proposed framework in terms of accuracy, precision, recall, and a low false-alarm rate.
arXiv Detail & Related papers (2020-08-05T19:29:35Z) - Unsupervised Anomaly Detection with Adversarial Mirrored AutoEncoders [51.691585766702744]
We propose a variant of Adversarial Autoencoder which uses a mirrored Wasserstein loss in the discriminator to enforce better semantic-level reconstruction.
We put forward an alternative measure of anomaly score to replace the reconstruction-based metric.
Our method outperforms the current state-of-the-art methods for anomaly detection on several OOD detection benchmarks.
arXiv Detail & Related papers (2020-03-24T08:26:58Z) - Non-Intrusive Detection of Adversarial Deep Learning Attacks via
Observer Networks [5.4572790062292125]
Recent studies have shown that deep learning models are vulnerable to crafted adversarial inputs.
We propose a novel method to detect adversarial inputs by augmenting the main classification network with multiple binary detectors.
We achieve a 99.5% detection accuracy on the MNIST dataset and 97.5% on the CIFAR-10 dataset.
arXiv Detail & Related papers (2020-02-22T21:13:00Z) - Regularized Cycle Consistent Generative Adversarial Network for Anomaly
Detection [5.457279006229213]
We propose a new Regularized Cycle Consistent Generative Adversarial Network (RCGAN) in which deep neural networks are adversarially trained to better recognize anomalous samples.
Experimental results on both real-world and synthetic data show that our model leads to significant and consistent improvements on previous anomaly detection benchmarks.
arXiv Detail & Related papers (2020-01-18T03:35:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.