Stacking an autoencoder for feature selection of zero-day threats
- URL: http://arxiv.org/abs/2311.00304v1
- Date: Wed, 1 Nov 2023 05:29:42 GMT
- Title: Stacking an autoencoder for feature selection of zero-day threats
- Authors: Mahmut Tokmak, and Mike Nkongolo
- Abstract summary: This study explores the application of stacked autoencoder (SAE), a type of artificial neural network, for feature selection and zero-day threat classification.
The learned weights and activations of the autoencoder are analyzed to identify the most important features for discriminating between zero-day threats and normal system behavior.
The results indicate that the SAE-LSTM performs well across all three attack categories, achieving high precision, recall, and F1 scores.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Zero-day attack detection plays a critical role in mitigating risks,
protecting assets, and staying ahead in the evolving threat landscape. This
study explores the application of stacked autoencoder (SAE), a type of
artificial neural network, for feature selection and zero-day threat
classification using a Long Short-Term Memory (LSTM) scheme. The process
involves preprocessing the UGRansome dataset and training an unsupervised SAE
for feature extraction. Fine-tuning with supervised learning is then performed
to enhance the discriminative capabilities of this model. The learned weights
and activations of the autoencoder are analyzed to identify the most important
features for discriminating between zero-day threats and normal system
behavior. These selected features form a reduced feature set that enables
accurate classification. The results indicate that the SAE-LSTM performs well
across all three attack categories, achieving high precision, recall, and F1
scores that underscore the model's strong predictive capabilities in
identifying various types of zero-day attacks. Additionally, the balanced
average scores of the SAE-LSTM suggest that the model generalizes effectively
and consistently across different attack categories.
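A minimal sketch of the pipeline the abstract describes: pretrain a stacked autoencoder on the preprocessed features, rank input features from the learned first-layer weights, and train an LSTM classifier on the reduced feature set. The layer sizes, the weight-magnitude ranking heuristic, the placeholder data, and the chosen feature count are illustrative assumptions, not the paper's exact configuration.

```python
# Illustrative SAE feature selection + LSTM classification sketch (assumed setup).
import numpy as np
from tensorflow.keras import layers, models

n_features = 14   # assumed width of the preprocessed UGRansome feature matrix
n_classes = 3     # the three attack categories mentioned in the abstract
top_k = 8         # assumed size of the reduced feature set

# Placeholder data; in practice this is the preprocessed, scaled UGRansome dataset.
X_train = np.random.rand(1000, n_features).astype("float32")
y_train = np.random.randint(0, n_classes, size=1000)

# 1. Unsupervised stacked autoencoder for feature extraction.
encoder = models.Sequential([
    layers.Input(shape=(n_features,)),
    layers.Dense(10, activation="relu", name="enc1"),
    layers.Dense(6, activation="relu", name="bottleneck"),
])
decoder = models.Sequential([
    layers.Input(shape=(6,)),
    layers.Dense(10, activation="relu"),
    layers.Dense(n_features, activation="linear"),
])
autoencoder = models.Sequential([encoder, decoder])
autoencoder.compile(optimizer="adam", loss="mse")
autoencoder.fit(X_train, X_train, epochs=20, batch_size=64, verbose=0)

# 2. Rank input features from the learned weights: here, the sum of absolute
#    outgoing weights of each input unit in the first encoder layer.
W1 = encoder.get_layer("enc1").get_weights()[0]      # shape (n_features, 10)
importance = np.abs(W1).sum(axis=1)
selected = np.argsort(importance)[::-1][:top_k]      # reduced feature set

# 3. Supervised LSTM classifier on the selected features, treating each sample
#    as a length-top_k sequence of scalar feature values.
X_sel = X_train[:, selected].reshape(-1, top_k, 1)
clf = models.Sequential([
    layers.Input(shape=(top_k, 1)),
    layers.LSTM(32),
    layers.Dense(n_classes, activation="softmax"),
])
clf.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
            metrics=["accuracy"])
clf.fit(X_sel, y_train, epochs=10, batch_size=64, verbose=0)
```

The weight-magnitude heuristic stands in for the paper's analysis of learned weights and activations; any importance measure derived from the trained autoencoder could be substituted at step 2.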
Related papers
- An Adversarial Approach to Evaluating the Robustness of Event Identification Models [12.862865254507179]
This paper considers a physics-based modal decomposition method to extract features for event classification.
The resulting classifiers are tested against an adversarial algorithm to evaluate their robustness.
arXiv Detail & Related papers (2024-02-19T18:11:37Z) - Ransomware detection using stacked autoencoder for feature selection [0.0]
The study meticulously analyzes the autoencoder's learned weights and activations to identify essential features for distinguishing ransomware families from other malware.
The proposed model achieves an exceptional 99% accuracy in ransomware classification, surpassing the Extreme Gradient Boosting (XGBoost) algorithm.
arXiv Detail & Related papers (2024-02-17T17:31:48Z) - Semi-Supervised Class-Agnostic Motion Prediction with Pseudo Label Regeneration and BEVMix [59.55173022987071]
We study the potential of semi-supervised learning for class-agnostic motion prediction.
Our framework adopts a consistency-based self-training paradigm, enabling the model to learn from unlabeled data.
Our method exhibits performance comparable to weakly supervised and some fully supervised methods.
arXiv Detail & Related papers (2023-12-13T09:32:50Z) - Zero Day Threat Detection Using Metric Learning Autoencoders [3.1965908200266173]
The proliferation of zero-day threats (ZDTs) to companies' networks has been immensely costly.
Deep learning methods are an attractive option for their ability to capture highly nonlinear behavior patterns.
The models presented here are also trained and evaluated with two more datasets, and continue to show promising results even when generalizing to new network topologies.
arXiv Detail & Related papers (2022-11-01T13:12:20Z) - Self-Supervised Class Incremental Learning [51.62542103481908]
Existing Class Incremental Learning (CIL) methods are based on a supervised classification framework sensitive to data labels.
When updated with new class data, they suffer from catastrophic forgetting: the model cannot clearly distinguish old class data from the new.
In this paper, we explore the performance of Self-Supervised representation learning in Class Incremental Learning (SSCIL) for the first time.
arXiv Detail & Related papers (2021-11-18T06:58:19Z) - From Zero-Shot Machine Learning to Zero-Day Attack Detection [3.6704226968275258]
In certain applications such as Network Intrusion Detection Systems, it is challenging to obtain data samples for all attack classes that the model will most likely observe in production.
In this paper, a zero-shot learning methodology has been proposed to evaluate the ML model performance in the detection of zero-day attack scenarios.
arXiv Detail & Related papers (2021-09-30T06:23:00Z) - SCARF: Self-Supervised Contrastive Learning using Random Feature Corruption [72.35532598131176]
We propose SCARF, a technique for contrastive learning, where views are formed by corrupting a random subset of features.
We show that SCARF complements existing strategies and outperforms alternatives like autoencoders.
arXiv Detail & Related papers (2021-06-29T08:08:33Z) - Zero-shot learning approach to adaptive Cybersecurity using Explainable AI [0.5076419064097734]
We present a novel approach to handle the alarm flooding problem faced by Cybersecurity systems like security information and event management (SIEM) and intrusion detection systems (IDS).
We apply a zero-shot learning method to machine learning (ML) by leveraging explanations for predictions of anomalies generated by a ML model.
In this approach, without any prior knowledge of the attack, we try to identify it, decipher the features that contribute to its classification, and bucketize the attack into a specific category.
arXiv Detail & Related papers (2021-06-21T06:29:13Z) - Adaptive Feature Alignment for Adversarial Training [56.17654691470554]
CNNs are typically vulnerable to adversarial attacks, which pose a threat to security-sensitive applications.
We propose adaptive feature alignment (AFA) to generate features of arbitrary attacking strengths.
Our method is trained to automatically align features of arbitrary attacking strength.
arXiv Detail & Related papers (2021-05-31T17:01:05Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Certified Robustness to Label-Flipping Attacks via Randomized Smoothing [105.91827623768724]
Machine learning algorithms are susceptible to data poisoning attacks.
We present a unifying view of randomized smoothing over arbitrary functions.
We propose a new strategy for building classifiers that are pointwise-certifiably robust to general data poisoning attacks.
arXiv Detail & Related papers (2020-02-07T21:28:30Z)