On the Evaluation of Sequential Machine Learning for Network Intrusion Detection
- URL: http://arxiv.org/abs/2106.07961v1
- Date: Tue, 15 Jun 2021 08:29:28 GMT
- Title: On the Evaluation of Sequential Machine Learning for Network Intrusion Detection
- Authors: Andrea Corsini, Shanchieh Jay Yang, Giovanni Apruzzese
- Abstract summary: We propose a detailed methodology to extract temporal sequences of NetFlows that denote patterns of malicious activities.
We then apply this methodology to compare the efficacy of sequential learning models against traditional static learning models.
- Score: 3.093890460224435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recent advances in deep learning renewed the research interests in machine
learning for Network Intrusion Detection Systems (NIDS). Specifically,
attention has been given to sequential learning models, due to their ability to
extract the temporal characteristics of Network traffic Flows (NetFlows), and
use them for NIDS tasks. However, the applications of these sequential models
often consist of transferring and adapting methodologies directly from other
fields, without an in-depth investigation of how to leverage the specific
circumstances of cybersecurity scenarios; moreover, there is a lack of
comprehensive studies on sequential models that rely on NetFlow data, which
presents significant advantages over traditional full packet captures. We
tackle this problem in this paper. We propose a detailed methodology to extract
temporal sequences of NetFlows that denote patterns of malicious activities.
Then, we apply this methodology to compare the efficacy of sequential learning
models against traditional static learning models. In particular, we perform a
fair comparison of a `sequential' Long Short-Term Memory (LSTM) against a
`static' Feedforward Neural Network (FNN) in distinct environments represented
by two well-known datasets for NIDS: the CICIDS2017 and the CTU13. Our results
highlight that the LSTM achieves performance comparable to the FNN on CICIDS2017,
with over a 99.5% F1-score, while obtaining superior performance on CTU13,
with a 95.7% F1-score against 91.5%. This paper thus paves the way to future
applications of sequential learning models for NIDS.
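The comparison described in the abstract lends itself to a compact illustration. The sketch below is a minimal reconstruction, not the authors' code: it assumes PyTorch, six illustrative NetFlow features, a fixed window of 10 flows, and last-flow labelling, none of which are taken from the paper. It shows how time-ordered NetFlows can be grouped into temporal sequences for a `sequential' LSTM, while a `static' FNN classifies each flow in isolation.

```python
# Minimal sketch of the abstract's LSTM-vs-FNN comparison, NOT the authors' code.
# Assumptions (not from the paper): PyTorch, 6 illustrative NetFlow features,
# fixed windows of SEQ_LEN flows, sequences labelled by their most recent flow.
import torch
import torch.nn as nn

SEQ_LEN = 10      # assumed number of NetFlows per temporal sequence
N_FEATURES = 6    # e.g. duration, packets, bytes, src/dst port, protocol

def build_sequences(flows: torch.Tensor, labels: torch.Tensor):
    """Slide a fixed-length window over the time-ordered NetFlows of one host.

    flows  : (T, N_FEATURES) tensor sorted by flow start time
    labels : (T,) tensor, 1 for malicious flows and 0 for benign
    A sequence inherits the label of its most recent flow.
    """
    xs, ys = [], []
    for t in range(SEQ_LEN, flows.shape[0] + 1):
        xs.append(flows[t - SEQ_LEN:t])   # (SEQ_LEN, N_FEATURES)
        ys.append(labels[t - 1])
    return torch.stack(xs), torch.stack(ys)

class SequentialLSTM(nn.Module):
    """Sequential model: reads the whole NetFlow sequence and classifies
    from the final hidden state."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.lstm = nn.LSTM(N_FEATURES, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, x):                 # x: (batch, SEQ_LEN, N_FEATURES)
        _, (h_n, _) = self.lstm(x)
        return self.head(h_n[-1]).squeeze(-1)   # malicious logit per sequence

class StaticFNN(nn.Module):
    """Static baseline: classifies each flow independently, with no access
    to temporal context."""
    def __init__(self, hidden: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_FEATURES, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x):                 # x: (batch, N_FEATURES)
        return self.net(x).squeeze(-1)

# Both models can be trained with nn.BCEWithLogitsLoss on the same flows;
# the comparison stays 'fair' in the sense that only the presence of
# temporal context differs between the two classifiers.
```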
Related papers
- Spiking Neural Networks in Vertical Federated Learning: Performance Trade-offs [2.1756721838833797]
Federated machine learning enables model training across multiple clients.
Vertical Federated Learning (VFL) deals with instances where the clients have different feature sets of the same samples.
Spiking Neural Networks (SNNs) are being leveraged to enable fast and accurate processing at the edge.
arXiv Detail & Related papers (2024-07-24T23:31:02Z)
- Few-shot Learning using Data Augmentation and Time-Frequency Transformation for Time Series Classification [6.830148185797109]
We propose a novel few-shot learning framework through data augmentation.
We also develop a sequence-spectrogram neural network (SSNN).
Our methodology demonstrates its applicability to few-shot problems in time series classification.
arXiv Detail & Related papers (2023-11-06T15:32:50Z)
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a BLS network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
- Online Evolutionary Neural Architecture Search for Multivariate Non-Stationary Time Series Forecasting [72.89994745876086]
This work presents the Online Neuro-Evolution-based Neural Architecture Search (ONE-NAS) algorithm.
ONE-NAS is a novel neural architecture search method capable of automatically designing and dynamically training recurrent neural networks (RNNs) for online forecasting tasks.
Results demonstrate that ONE-NAS outperforms traditional statistical time series forecasting methods.
arXiv Detail & Related papers (2023-02-20T22:25:47Z)
- A Comparative Study of Detecting Anomalies in Time Series Data Using LSTM and TCN Models [2.007262412327553]
This paper compares two prominent deep learning modeling techniques.
The Recurrent Neural Network (RNN)-based Long Short-Term Memory (LSTM) and the Convolutional Neural Network (CNN)-based Temporal Convolutional Network (TCN) are compared.
arXiv Detail & Related papers (2021-12-17T02:46:55Z)
- Sequential Deep Learning Architectures for Anomaly Detection in Virtual Network Function Chains [0.0]
We present an anomaly detection system (ADS) for virtual network functions in service function chains (SFCs).
We propose several sequential deep learning models to learn time-series patterns and sequential patterns of the virtual network functions (VNFs) in the chain with variable lengths.
arXiv Detail & Related papers (2021-09-29T08:47:57Z)
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors with respect to a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
- Gone Fishing: Neural Active Learning with Fisher Embeddings [55.08537975896764]
There is an increasing need for active learning algorithms that are compatible with deep neural networks.
This article introduces BAIT, a practical, tractable, and high-performing active learning algorithm for neural networks.
arXiv Detail & Related papers (2021-06-17T17:26:31Z)
- Continual Learning in Recurrent Neural Networks [67.05499844830231]
We evaluate the effectiveness of continual learning methods for processing sequential data with recurrent neural networks (RNNs).
We shed light on the particularities that arise when applying weight-importance methods, such as elastic weight consolidation, to RNNs.
We show that the performance of weight-importance methods is not directly affected by the length of the processed sequences, but rather by high working memory requirements.
arXiv Detail & Related papers (2020-06-22T10:05:12Z)