Prepare for Trouble and Make it Double. Supervised and Unsupervised
Stacking for Anomaly-Based Intrusion Detection
- URL: http://arxiv.org/abs/2202.13611v1
- Date: Mon, 28 Feb 2022 08:41:32 GMT
- Title: Prepare for Trouble and Make it Double. Supervised and Unsupervised
Stacking for Anomaly-Based Intrusion Detection
- Authors: Tommaso Zoppi, Andrea Ceccarelli
- Abstract summary: We propose the adoption of meta-learning, in the form of a two-layer Stacker, to create a mixed approach that detects both known and unknown threats.
It turns out to be more effective in detecting zero-day attacks than supervised algorithms, limiting their main weakness but still maintaining adequate capabilities in detecting known attacks.
- Score: 4.56877715768796
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent decades, researchers, practitioners and companies have
struggled to devise mechanisms that detect malicious activities posing
security threats. Amongst the many solutions, network intrusion detection emerged as one
of the most popular to analyze network traffic and detect ongoing intrusions
based on rules or by means of Machine Learners (MLs), which process such
traffic and learn a model for flagging suspected intrusions. Supervised MLs
are very effective in detecting known threats, but struggle to identify
zero-day attacks (unknown during the learning phase), which can instead be
detected through unsupervised MLs. Unfortunately, there are no definitive answers on the
combined use of both approaches for network intrusion detection. In this paper
we first expand the problem of zero-day attacks and motivate the need to
combine supervised and unsupervised algorithms. We propose the adoption of
meta-learning, in the form of a two-layer Stacker, to create a mixed approach
that detects both known and unknown threats. Then we implement and empirically
evaluate our Stacker through an experimental campaign that allows us to i)
discuss meta-features crafted through unsupervised base-level learners, ii)
select the most promising supervised meta-level classifiers, and iii)
benchmark the classification scores of the Stacker against supervised and
unsupervised classifiers. Lastly, we compare our solution with existing works
from the recent literature. Overall, our Stacker reduces misclassifications
with respect to (un)supervised ML algorithms in all 7 public datasets we
considered, and outperforms existing studies on 6 of those 7 datasets. In
particular, it turns out to be more effective in detecting zero-day attacks
than supervised algorithms, limiting their main weakness while still
maintaining adequate capabilities in detecting known attacks.
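The two-layer Stacker described in the abstract can be sketched as follows. All concrete choices here are illustrative assumptions, not the paper's configuration: Isolation Forest, One-Class SVM and LOF as unsupervised base-level learners, a Random Forest as the supervised meta-level classifier, and a synthetic toy dataset in place of real network traffic.

```python
import numpy as np
from sklearn.ensemble import IsolationForest, RandomForestClassifier
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.RandomState(0)

# Toy "traffic": benign samples around 0, attack samples shifted (illustrative only).
X_train = np.vstack([rng.normal(0, 1, (200, 5)), rng.normal(4, 1, (50, 5))])
y_train = np.array([0] * 200 + [1] * 50)
X_test = np.vstack([rng.normal(0, 1, (40, 5)), rng.normal(4, 1, (10, 5))])
y_test = np.array([0] * 40 + [1] * 10)

# Base level: unsupervised learners fitted on benign training traffic only
# (one common anomaly-detection setup; the paper's exact protocol may differ).
benign_train = X_train[y_train == 0]
base_learners = [
    IsolationForest(random_state=0).fit(benign_train),
    OneClassSVM(gamma="scale").fit(benign_train),
    LocalOutlierFactor(novelty=True).fit(benign_train),
]

def meta_features(X):
    # One column per base learner: its anomaly score for each sample in X.
    return np.column_stack([bl.decision_function(X) for bl in base_learners])

# Meta level: a supervised classifier trained on the stacked anomaly scores.
meta = RandomForestClassifier(random_state=0).fit(meta_features(X_train), y_train)
pred = meta.predict(meta_features(X_test))
print("test accuracy:", (pred == y_test).mean())
```

Because the meta-level classifier sees only anomaly scores rather than raw features, attacks that look anomalous to the base learners can be flagged even if no labeled example of them was available at training time, which matches the zero-day argument made above.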
Related papers
- Attention Tracker: Detecting Prompt Injection Attacks in LLMs [62.247841717696765]
Large Language Models (LLMs) have revolutionized various domains but remain vulnerable to prompt injection attacks.
We introduce the concept of the distraction effect, where specific attention heads shift focus from the original instruction to the injected instruction.
We propose Attention Tracker, a training-free detection method that tracks attention patterns on instruction to detect prompt injection attacks.
arXiv Detail & Related papers (2024-11-01T04:05:59Z)
- Federated Learning for Zero-Day Attack Detection in 5G and Beyond V2X Networks [9.86830550255822]
Connected and Automated Vehicles (CAVs) operating on top of 5G and Beyond networks (5GB) are exposed to a growing range of security and privacy attacks.
We propose in this paper a novel detection mechanism that leverages the ability of the deep auto-encoder method to detect attacks relying only on the benign network traffic pattern.
Using federated learning, the proposed intrusion detection system can be trained with large and diverse benign network traffic, while preserving the CAVs' privacy and minimizing the communication overhead.
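The benign-only detection idea in this entry, train on normal traffic and flag samples the model reconstructs poorly, can be sketched in a few lines. PCA stands in here for the deep auto-encoder (the flag-high-reconstruction-error principle is the same), and the data is synthetic; both are assumptions for illustration, and the federated-training aspect is omitted.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(1)

# Illustrative data only: benign traffic features lie near a low-dimensional
# subspace, attack traffic does not.
benign = rng.normal(0, 1, (300, 3)) @ rng.normal(0, 1, (3, 10)) \
    + 0.05 * rng.normal(0, 1, (300, 10))
attack = rng.normal(0, 3, (30, 10))

# Fit the compressor on benign traffic only; PCA plays the role of the
# deep auto-encoder's encode/decode pair in this sketch.
ae = PCA(n_components=3).fit(benign)

def reconstruction_error(X):
    # Distance between each sample and its compressed-then-reconstructed version.
    return np.linalg.norm(X - ae.inverse_transform(ae.transform(X)), axis=1)

# Threshold chosen from benign training data, e.g. its 99th-percentile error.
threshold = np.percentile(reconstruction_error(benign), 99)
alerts = reconstruction_error(attack) > threshold
print("fraction of attack samples flagged:", alerts.mean())
```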
arXiv Detail & Related papers (2024-07-03T12:42:31Z)
- The Adversarial Implications of Variable-Time Inference [47.44631666803983]
We present an approach that exploits a novel side channel in which the adversary simply measures the execution time of the algorithm used to post-process the predictions of the ML model under attack.
We investigate leakage from the non-maximum suppression (NMS) algorithm, which plays a crucial role in the operation of object detectors.
We demonstrate attacks against the YOLOv3 detector, leveraging the timing leakage to successfully evade object detection using adversarial examples, and perform dataset inference.
arXiv Detail & Related papers (2023-09-05T11:53:17Z)
- Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluate the framework and report its performance on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z)
- A Hybrid Deep Learning Anomaly Detection Framework for Intrusion Detection [4.718295605140562]
We propose a three-stage deep learning anomaly detection based network intrusion attack detection framework.
The framework comprises an integration of unsupervised (K-means clustering), semi-supervised (GANomaly) and supervised learning (CNN) algorithms.
We then evaluate the framework and report its performance on three benchmark datasets.
arXiv Detail & Related papers (2022-12-02T04:40:54Z)
- Illusory Attacks: Information-Theoretic Detectability Matters in Adversarial Attacks [76.35478518372692]
We introduce epsilon-illusory, a novel form of adversarial attack on sequential decision-makers.
Compared to existing attacks, we empirically find epsilon-illusory to be significantly harder to detect with automated methods.
Our findings suggest the need for better anomaly detectors, as well as effective hardware- and system-level defenses.
arXiv Detail & Related papers (2022-07-20T19:49:09Z)
- A False Sense of Security? Revisiting the State of Machine Learning-Based Industrial Intrusion Detection [9.924435476552702]
Anomaly-based intrusion detection promises to detect novel or unknown attacks on industrial control systems.
Research focuses on machine learning to train such detectors automatically, achieving detection rates upwards of 99%.
Our results highlight an ineffectiveness in detecting unknown attacks, with detection rates dropping to between 3.2% and 14.7%.
arXiv Detail & Related papers (2022-05-18T20:17:33Z)
- MURAL: Meta-Learning Uncertainty-Aware Rewards for Outcome-Driven Reinforcement Learning [65.52675802289775]
We show that an uncertainty-aware classifier can solve challenging reinforcement learning problems.
We propose a novel method for computing the normalized maximum likelihood (NML) distribution.
We show that the resulting algorithm has a number of intriguing connections to both count-based exploration methods and prior algorithms for learning reward functions.
arXiv Detail & Related papers (2021-07-15T08:19:57Z)
- Anomaly Detection in Cybersecurity: Unsupervised, Graph-Based and Supervised Learning Methods in Adversarial Environments [63.942632088208505]
Inherent to today's operating environment is the practice of adversarial machine learning.
In this work, we examine the feasibility of unsupervised learning and graph-based methods for anomaly detection.
We incorporate a realistic adversarial training mechanism when training our supervised models to enable strong classification performance in adversarial environments.
arXiv Detail & Related papers (2021-05-14T10:05:10Z)
- Generalized Insider Attack Detection Implementation using NetFlow Data [0.6236743421605786]
We study an approach centered on using network data to identify attacks.
Our work builds on unsupervised machine learning techniques such as One-Class SVM and bi-clustering.
We show that our approach is a promising tool for insider attack detection in realistic settings.
arXiv Detail & Related papers (2020-10-27T14:00:31Z)
- Anomaly Detection-Based Unknown Face Presentation Attack Detection [74.4918294453537]
Anomaly detection-based spoof attack detection is a recent development in face Presentation Attack Detection.
In this paper, we present a deep-learning solution for anomaly detection-based spoof attack detection.
The proposed approach benefits from the representation learning power of the CNNs and learns better features for fPAD task.
arXiv Detail & Related papers (2020-07-11T21:20:55Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.