From Zero-Shot Machine Learning to Zero-Day Attack Detection
- URL: http://arxiv.org/abs/2109.14868v1
- Date: Thu, 30 Sep 2021 06:23:00 GMT
- Title: From Zero-Shot Machine Learning to Zero-Day Attack Detection
- Authors: Mohanad Sarhan, Siamak Layeghy, Marcus Gallagher and Marius Portmann
- Abstract summary: In certain applications such as Network Intrusion Detection Systems, it is challenging to obtain data samples for all attack classes that the model will most likely observe in production.
In this paper, a zero-shot learning methodology has been proposed to evaluate the ML model performance in the detection of zero-day attack scenarios.
- Score: 3.6704226968275258
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The standard ML methodology assumes that the test samples are
drawn from the set of pre-observed classes used in the training phase,
where the model extracts and learns useful patterns to detect new data
samples belonging to the same classes. However, in certain applications
such as Network Intrusion Detection Systems, it is challenging to obtain
data samples for all attack classes that the model will most likely observe
in production. ML-based NIDSs face new attack traffic, known as zero-day
attacks, that is not used in the training of the learning models because it
did not exist at the time. In this
paper, a zero-shot learning methodology has been proposed to evaluate the ML
model performance in the detection of zero-day attack scenarios. In the
attribute learning stage, the ML models map the network data features to
distinguish semantic attributes from known attack (seen) classes. In the
inference stage, the models are evaluated in the detection of zero-day attack
(unseen) classes by constructing the relationships between known attacks and
zero-day attacks. A new metric, the Zero-day Detection Rate, is defined to
measure the effectiveness of the learning model in the inference stage. The
results demonstrate that the majority of the attack classes do not
represent significant risks to organisations adopting an ML-based NIDS in a
zero-day attack scenario. However, for certain attack groups identified in
this paper, such systems are not effective in applying the learnt
attributes of attack behaviour to detect them as malicious. Further
analysis was conducted using the Wasserstein Distance technique to measure
how different such attacks are from the other attack types used in the
training of the ML model. The results demonstrate that sophisticated
attacks with a low zero-day detection rate have a significantly distinct
feature distribution compared to the other attack classes.
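The evaluation described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the Zero-day Detection Rate is interpreted here as the recall on a held-out (unseen) attack class, and SciPy's `wasserstein_distance` compares a single feature's distribution between the zero-day class and the seen attacks. The data, feature, and threshold "model" are all hypothetical stand-ins.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(0)

# Hypothetical 1-D flow feature for seen attack classes and one
# zero-day (unseen) attack class.
seen_attacks = rng.normal(loc=2.0, scale=1.0, size=1000)
zero_day = rng.normal(loc=5.0, scale=1.0, size=200)

def zero_day_detection_rate(predictions):
    """Fraction of zero-day samples flagged malicious (1) vs benign (0)."""
    return np.asarray(predictions).mean()

# Stand-in "model": flag anything whose feature value exceeds a
# threshold derived from the seen attack classes.
threshold = seen_attacks.mean()
preds = (zero_day > threshold).astype(int)
zdr = zero_day_detection_rate(preds)

# Wasserstein distance between the zero-day and seen-attack feature
# distributions; a larger value suggests the unseen attack looks less
# like anything encountered during training.
wd = wasserstein_distance(zero_day, seen_attacks)

print(f"ZDR: {zdr:.2f}, Wasserstein distance: {wd:.2f}")
```

In this toy setup the zero-day class sits far from the seen attacks, so the threshold model detects it easily; the paper's finding is the converse case, where a large distributional distance coincides with a low detection rate because the learnt attributes do not transfer.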
Related papers
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning
Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z) - DALA: A Distribution-Aware LoRA-Based Adversarial Attack against
Language Models [64.79319733514266]
Adversarial attacks can introduce subtle perturbations to input data.
Recent attack methods can achieve a relatively high attack success rate (ASR).
We propose a Distribution-Aware LoRA-based Adversarial Attack (DALA) method.
arXiv Detail & Related papers (2023-11-14T23:43:47Z) - Boosting Model Inversion Attacks with Adversarial Examples [26.904051413441316]
We propose a new training paradigm for a learning-based model inversion attack that can achieve higher attack accuracy in a black-box setting.
First, we regularize the training process of the attack model with an added semantic loss function.
Second, we inject adversarial examples into the training data to increase the diversity of the class-related parts.
arXiv Detail & Related papers (2023-06-24T13:40:58Z) - Unleashing Mask: Explore the Intrinsic Out-of-Distribution Detection
Capability [70.72426887518517]
Out-of-distribution (OOD) detection is an indispensable aspect of secure AI when deploying machine learning models in real-world applications.
We propose a novel method, Unleashing Mask, which aims to restore the OOD discriminative capabilities of the well-trained model with ID data.
Our method utilizes a mask to identify the memorized atypical samples, and then fine-tunes the model or prunes it with the introduced mask to forget them.
arXiv Detail & Related papers (2023-06-06T14:23:34Z) - Towards Generating Adversarial Examples on Mixed-type Data [32.41305735919529]
We propose a novel attack algorithm M-Attack, which can effectively generate adversarial examples in mixed-type data.
Based on M-Attack, attackers can attempt to mislead the targeted classification model's prediction, by only slightly perturbing both the numerical and categorical features in the given data samples.
Our generated adversarial examples can evade potential detection models, which makes the attack indeed insidious.
arXiv Detail & Related papers (2022-10-17T20:17:21Z) - Membership Inference Attacks by Exploiting Loss Trajectory [19.900473800648243]
We propose a new attack method that can exploit the membership information from the whole training process of the target model.
Our attack achieves at least 6× higher true-positive rate at a low false-positive rate of 0.1% than existing methods.
arXiv Detail & Related papers (2022-08-31T16:02:26Z) - Multi-concept adversarial attacks [13.538643599990785]
Test time attacks targeting a single ML model often neglect their impact on other ML models.
We develop novel attack techniques that can simultaneously attack one set of ML models while preserving the accuracy of the other.
arXiv Detail & Related papers (2021-10-19T22:14:19Z) - ML-Doctor: Holistic Risk Assessment of Inference Attacks Against Machine
Learning Models [64.03398193325572]
Inference attacks against Machine Learning (ML) models allow adversaries to learn about training data, model parameters, etc.
We concentrate on four attacks - namely, membership inference, model inversion, attribute inference, and model stealing.
Our analysis relies on a modular re-usable software, ML-Doctor, which enables ML model owners to assess the risks of deploying their models.
arXiv Detail & Related papers (2021-02-04T11:35:13Z) - How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z) - Understanding Classifier Mistakes with Generative Models [88.20470690631372]
Deep neural networks are effective on supervised learning tasks, but have been shown to be brittle.
In this paper, we leverage generative models to identify and characterize instances where classifiers fail to generalize.
Our approach is agnostic to class labels from the training set which makes it applicable to models trained in a semi-supervised way.
arXiv Detail & Related papers (2020-10-05T22:13:21Z) - Leveraging Siamese Networks for One-Shot Intrusion Detection Model [0.0]
Supervised Machine Learning (ML) for enhancing Intrusion Detection Systems has been the subject of significant research.
However, retraining the models in-situ renders the network susceptible to attacks owing to the time window required to acquire a sufficient volume of data.
Here, a complementary approach referred to as 'One-Shot Learning' is proposed, whereby a limited number of examples of a new attack class is used to identify that class.
A Siamese Network is trained to differentiate between classes based on pair similarities, rather than on raw features, allowing it to identify new and previously unseen attacks.
arXiv Detail & Related papers (2020-06-27T11:40:01Z)
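The pair-similarity idea behind the one-shot approach above can be sketched as follows. This is a hypothetical illustration, not the paper's architecture: a fixed random projection stands in for the Siamese network's learnt embedding, and a query is assigned to the class of its nearest support example in embedding space.

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 4))  # hypothetical embedding weights

def embed(x):
    # Stand-in for the trained Siamese embedding function.
    return np.asarray(x) @ W

def pair_distance(a, b):
    """Euclidean distance between two embedded samples."""
    return np.linalg.norm(embed(a) - embed(b))

def one_shot_classify(query, support):
    """Assign the query to the class of its nearest support example.

    support: dict mapping class name -> one example vector per class.
    """
    return min(support, key=lambda c: pair_distance(query, support[c]))

# One example per class, including a newly observed attack class for
# which only a single sample is available.
support = {
    "benign": np.array([0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
    "dos": np.array([5.0, 5.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]),
    "new_attack": np.array([0.0, 0.0, 5.0, 5.0, 0.0, 0.0, 0.0, 0.0]),
}
query = np.array([0.1, 0.2, 4.8, 5.1, 0.0, 0.1, 0.0, 0.0])
print(one_shot_classify(query, support))
```

Because classification reduces to comparing pair similarities against stored examples, a new attack class can be added by supplying a single support sample, with no retraining of the embedding.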
This list is automatically generated from the titles and abstracts of the papers on this site.