Sequential Deep Learning Architectures for Anomaly Detection in Virtual Network Function Chains
- URL: http://arxiv.org/abs/2109.14276v1
- Date: Wed, 29 Sep 2021 08:47:57 GMT
- Title: Sequential Deep Learning Architectures for Anomaly Detection in Virtual Network Function Chains
- Authors: Chungjun Lee, Jibum Hong, DongNyeong Heo, Heeyoul Choi
- Abstract summary: An anomaly detection system (ADS) for virtual network functions in service function chains (SFCs). We propose several sequential deep learning models to learn the time-series and sequential patterns of the virtual network functions (VNFs) in chains of variable length.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Software-defined networking (SDN) and network function virtualization (NFV) have enabled the efficient provision of network services. However, they also raise new tasks of monitoring and ensuring the status of virtualized services, and anomaly detection is one such task. There have been many data-driven approaches to implementing an anomaly detection system (ADS) for virtual network functions in service function chains (SFCs). In this paper, we aim to develop more advanced deep learning models for ADS. Previous approaches used learning algorithms such as random forest (RF), gradient boosting machine (GBM), or deep neural networks (DNNs), but these models did not exploit the sequential dependencies in the data. Furthermore, they are limited in that they apply only to the SFC setting on which they were trained. Therefore, we propose several sequential deep learning models that learn the time-series patterns and the sequential patterns of the virtual network functions (VNFs) in chains of variable length. As a result, the suggested models improve detection performance and apply to SFCs with varying numbers of VNFs.
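
As a rough illustration of the kind of sequential model the abstract describes (not the authors' exact architecture), the sketch below runs an LSTM along the VNF positions of a chain and scores each VNF; the metric dimension, hidden size, and two-class head are assumptions.

```python
import torch
import torch.nn as nn
from torch.nn.utils.rnn import pack_padded_sequence, pad_packed_sequence

class ChainAnomalyDetector(nn.Module):
    """LSTM over the VNF positions of an SFC; handles chains of variable length."""
    def __init__(self, n_metrics=16, hidden=64):
        super().__init__()
        self.lstm = nn.LSTM(n_metrics, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 2)   # normal vs. anomalous, per VNF

    def forward(self, x, lengths):
        # x: (batch, max_chain_len, n_metrics); lengths: true chain lengths
        packed = pack_padded_sequence(x, lengths, batch_first=True,
                                      enforce_sorted=False)
        out, _ = self.lstm(packed)
        out, _ = pad_packed_sequence(out, batch_first=True)
        return self.head(out)              # per-VNF anomaly logits

# toy batch: two chains with 3 and 5 VNFs, padded to length 5
model = ChainAnomalyDetector()
logits = model(torch.randn(2, 5, 16), torch.tensor([3, 5]))
print(logits.shape)                        # torch.Size([2, 5, 2])
```

Because the recurrence shares one set of weights across chain positions, the same model applies to chains of any length, which matches the abstract's claim of transferring to SFCs with varying numbers of VNFs.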
Related papers
- NIDS Neural Networks Using Sliding Time Window Data Processing with Trainable Activations and its Generalization Capability [0.0]
This paper presents neural networks for network intrusion detection systems (NIDS) that operate on flow data preprocessed with a time window.
It requires only eleven features, which do not rely on deep packet inspection, can be found in most NIDS datasets, and are easily obtained from conventional flow collectors.
The reported training accuracy exceeds 99% for the proposed method with as few as twenty neural network input features.
arXiv Detail & Related papers (2024-10-24T11:36:19Z)
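
A minimal sketch of the sliding-time-window preprocessing the entry above mentions, assuming flow records carry a timestamp plus a small per-flow feature vector; the window length, stride, and mean aggregation are all assumptions.

```python
import numpy as np

def sliding_windows(timestamps, features, window=5.0, stride=1.0):
    """Aggregate per-flow features over a sliding time window.

    timestamps: (n,) flow start times in seconds, sorted ascending
    features:   (n, d) per-flow feature vectors (e.g., bytes, packets)
    Returns one aggregated vector (mean over flows) per window position.
    """
    out, t = [], timestamps[0]
    while t + window <= timestamps[-1]:
        mask = (timestamps >= t) & (timestamps < t + window)
        if mask.any():
            out.append(features[mask].mean(axis=0))
        t += stride
    return np.array(out)

ts = np.sort(np.random.uniform(0, 60, size=500))
feats = np.random.rand(500, 11)      # e.g., the paper's eleven flow features
print(sliding_windows(ts, feats).shape)
```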
- GradINN: Gradient Informed Neural Network [2.287415292857564]
We propose a methodology inspired by Physics-Informed Neural Networks (PINNs).
GradINNs leverage prior beliefs about a system's gradient to constrain the predicted function's gradient across all input dimensions.
We demonstrate the advantages of GradINNs, particularly in low-data regimes, on diverse problems spanning non-time-dependent systems.
arXiv Detail & Related papers (2024-09-03T14:03:29Z)
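
One plausible reading of the GradINN objective, sketched under assumptions: a standard data term plus a penalty pulling the network's input gradient toward a prior gradient field. The network size, penalty weight, and the constant prior are placeholders.

```python
import torch
import torch.nn as nn

net = nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 1))

def gradinn_loss(x, y, grad_prior):
    """Data loss + penalty keeping d(net)/dx close to a prior gradient field."""
    x = x.requires_grad_(True)
    pred = net(x)
    data_loss = ((pred - y) ** 2).mean()
    grads = torch.autograd.grad(pred.sum(), x, create_graph=True)[0]
    grad_loss = ((grads - grad_prior(x)) ** 2).mean()
    return data_loss + 0.1 * grad_loss   # weight is a free hyperparameter

x = torch.rand(32, 2)
y = x.sum(dim=1, keepdim=True)
loss = gradinn_loss(x, y, grad_prior=lambda x: torch.ones_like(x))
loss.backward()
```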
- Optimization Guarantees of Unfolded ISTA and ADMM Networks With Smooth Soft-Thresholding [57.71603937699949]
We study optimization guarantees, i.e., achieving near-zero training loss as the number of learning epochs increases.
We show that the threshold on the number of training samples increases with the network width.
arXiv Detail & Related papers (2023-09-12T13:03:47Z)
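
For context, an unfolded (learned) ISTA network of the kind the entry above studies might look like the sketch below; the softplus-based smooth soft-threshold, depth, and initialization are assumptions, not the paper's exact construction.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def smooth_soft_threshold(x, lam):
    # smooth surrogate for sign(x) * max(|x| - lam, 0)
    return F.softplus(x - lam) - F.softplus(-x - lam)

class UnfoldedISTA(nn.Module):
    """ISTA for sparse coding y = A z, unrolled into K learned layers."""
    def __init__(self, A, K=5, lam=0.1):
        super().__init__()
        L = torch.linalg.matrix_norm(A, 2) ** 2        # step size from ||A||^2
        self.W1 = nn.Parameter(A.T / L)                # learned "A^T / L"
        self.W2 = nn.Parameter(torch.eye(A.shape[1]) - A.T @ A / L)
        self.lam = nn.Parameter(lam / L)
        self.K = K

    def forward(self, y):
        z = torch.zeros(y.shape[0], self.W2.shape[0], device=y.device)
        for _ in range(self.K):                        # one ISTA step per layer
            z = smooth_soft_threshold(z @ self.W2.T + y @ self.W1.T, self.lam)
        return z

A = torch.randn(20, 50) / 20 ** 0.5
z_hat = UnfoldedISTA(A)(torch.randn(8, 20))
print(z_hat.shape)   # torch.Size([8, 50])
```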
- How neural networks learn to classify chaotic time series [77.34726150561087]
We study the inner workings of neural networks trained to classify regular-versus-chaotic time series.
We find that the relation between input periodicity and activation periodicity is key for the performance of large-kernel convolutional neural network (LKCNN) models.
arXiv Detail & Related papers (2023-06-04T08:53:27Z)
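
A generic large-kernel 1-D CNN classifier in the spirit of the LKCNN name, offered only as orientation; the kernel size, channel count, and pooling are assumptions and not the paper's model.

```python
import torch
import torch.nn as nn

class LargeKernelCNN(nn.Module):
    """1-D CNN with a large first kernel, for regular-vs-chaotic classification."""
    def __init__(self, kernel=64, channels=32):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, channels, kernel_size=kernel), nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),                    # global pooling
        )
        self.classifier = nn.Linear(channels, 2)        # regular vs. chaotic

    def forward(self, x):                # x: (batch, 1, series_length)
        h = self.features(x).squeeze(-1)
        return self.classifier(h)

x = torch.sin(torch.linspace(0, 50, 500)).repeat(4, 1, 1)  # periodic toy input
print(LargeKernelCNN()(x).shape)         # torch.Size([4, 2])
```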
- Properties and Potential Applications of Random Functional-Linked Types of Neural Networks [81.56822938033119]
Random functional-linked neural networks (RFLNNs) offer an alternative way of learning in a deep structure.
This paper gives some insights into the properties of RFLNNs from the viewpoint of the frequency domain.
We propose a method to generate a broad learning system (BLS) network with better performance, and design an efficient algorithm for solving Poisson's equation.
arXiv Detail & Related papers (2023-04-03T13:25:22Z)
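
The defining trait of functional-link networks is that hidden weights stay random and fixed while only the output layer is solved in closed form; a minimal sketch under that reading, with the hidden width, ridge term, and tanh features as assumptions:

```python
import numpy as np

def rvfl_fit(X, Y, n_hidden=200, ridge=1e-3, seed=0):
    """Random functional-link net: random hidden weights stay fixed;
    only the output layer is solved in closed form (ridge regression)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.hstack([np.tanh(X @ W + b), X])     # direct input-output links
    beta = np.linalg.solve(H.T @ H + ridge * np.eye(H.shape[1]), H.T @ Y)
    return W, b, beta

def rvfl_predict(X, W, b, beta):
    return np.hstack([np.tanh(X @ W + b), X]) @ beta

X = np.random.rand(100, 5)
Y = np.sin(X.sum(axis=1, keepdims=True))
W, b, beta = rvfl_fit(X, Y)
print(np.mean((rvfl_predict(X, W, b, beta) - Y) ** 2))
```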
- Learning to Learn with Generative Models of Neural Network Checkpoints [71.06722933442956]
We construct a dataset of neural network checkpoints and train a generative model on the parameters.
We find that our approach successfully generates parameters for a wide range of loss prompts.
We apply our method to different neural network architectures and tasks in supervised and reinforcement learning.
arXiv Detail & Related papers (2022-09-26T17:59:58Z)
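
The generative model itself is out of scope here, but the first step the entry above describes, turning training checkpoints into a dataset of (parameter vector, loss) pairs, can be sketched directly; the toy MLP, recording interval, and loss are assumptions.

```python
import torch
import torch.nn as nn

def flatten_params(model):
    """Flatten a model's parameters into one vector (one 'checkpoint' sample)."""
    return torch.cat([p.detach().flatten() for p in model.parameters()])

# collect checkpoints along a training run of a small MLP
net = nn.Sequential(nn.Linear(2, 8), nn.Tanh(), nn.Linear(8, 1))
opt = torch.optim.SGD(net.parameters(), lr=0.1)
checkpoints, losses = [], []
x, y = torch.rand(64, 2), torch.rand(64, 1)
for step in range(100):
    loss = ((net(x) - y) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()
    if step % 10 == 0:                 # record (parameters, loss) pairs
        checkpoints.append(flatten_params(net))
        losses.append(loss.item())

data = torch.stack(checkpoints)        # training set for a generative model
print(data.shape, losses[:3])
```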
- Online learning of windmill time series using Long Short-term Cognitive Networks [58.675240242609064]
The amount of data generated on windmill farms makes online learning the most viable strategy to follow.
We use Long Short-term Cognitive Networks (LSTCNs) to forecast windmill time series in online settings.
Our approach reported the lowest forecasting errors compared with a simple RNN, a Long Short-term Memory, a Gated Recurrent Unit, and a Hidden Markov Model.
arXiv Detail & Related papers (2021-07-01T13:13:24Z)
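
LSTCNs are not sketched here; instead, the shape of the online-learning setting itself, one prediction and then one model update per incoming sample, is illustrated with a plain autoregressive stand-in. The AR order, learning rate, and SGD update are assumptions.

```python
import numpy as np

def online_ar_forecast(series, order=4, lr=0.01):
    """Online one-step-ahead AR forecaster updated by SGD as data streams in.
    (A stand-in for LSTCN-style online learning, not the LSTCN itself.)"""
    w = np.zeros(order)
    preds = []
    for t in range(order, len(series)):
        x = series[t - order:t]
        y_hat = w @ x
        preds.append(y_hat)
        w += lr * (series[t] - y_hat) * x    # one gradient step per sample
    return np.array(preds)

series = np.sin(np.linspace(0, 20, 400)) + 0.05 * np.random.randn(400)
preds = online_ar_forecast(series)
print(np.mean((preds - series[4:]) ** 2))    # streaming forecast error
```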
- On the Evaluation of Sequential Machine Learning for Network Intrusion Detection [3.093890460224435]
We propose a detailed methodology to extract temporal sequences of NetFlows that denote patterns of malicious activities.
We then apply this methodology to compare the efficacy of sequential learning models against traditional static learning models.
arXiv Detail & Related papers (2021-06-15T08:29:28Z)
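
One simple way to extract temporal NetFlow sequences in the spirit of that methodology, sketched under assumptions: flows are grouped by (source, destination) and cut into sequences wherever the inter-flow gap exceeds a threshold. The record fields and the 60-second gap are illustrative.

```python
from collections import defaultdict

def netflow_sequences(flows, max_gap=60.0):
    """Group flows by (src, dst) and split into temporal sequences whenever
    the gap between consecutive flows exceeds max_gap seconds."""
    by_pair = defaultdict(list)
    for f in sorted(flows, key=lambda f: f["ts"]):
        by_pair[(f["src"], f["dst"])].append(f)
    sequences = []
    for pair_flows in by_pair.values():
        seq = [pair_flows[0]]
        for prev, cur in zip(pair_flows, pair_flows[1:]):
            if cur["ts"] - prev["ts"] > max_gap:
                sequences.append(seq); seq = []
            seq.append(cur)
        sequences.append(seq)
    return sequences

flows = [{"src": "10.0.0.1", "dst": "10.0.0.9", "ts": t} for t in (0, 5, 200)]
print([len(s) for s in netflow_sequences(flows)])   # [2, 1]
```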
- Contextual HyperNetworks for Novel Feature Adaptation [43.49619456740745]
A Contextual HyperNetwork (CHN) generates parameters for extending the base model to a new feature.
At prediction time, the CHN requires only a single forward pass through a neural network, yielding a significant speed-up.
We show that this system obtains improved few-shot learning performance for novel features over existing imputation and meta-learning baselines.
arXiv Detail & Related papers (2021-04-12T23:19:49Z)
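
A toy version of the hypernetwork idea, assuming the context is a fixed-size embedding of the few observed values of the new feature; the context dimension, network sizes, and the way the emitted column extends the base layer are all assumptions.

```python
import torch
import torch.nn as nn

class ContextualHyperNet(nn.Module):
    """Emits input weights for one *new* feature from a context embedding,
    so the base model is extended in a single forward pass, without retraining."""
    def __init__(self, ctx_dim=8, hidden_dim=32):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(ctx_dim, 64), nn.ReLU(),
                                 nn.Linear(64, hidden_dim))

    def forward(self, context):
        return self.net(context)   # weights tying the new feature to hidden units

base_first_layer = nn.Linear(10, 32)   # trained base model, 10 known features
hyper = ContextualHyperNet()

# context: e.g., an embedding of the few observed values of the new feature
context = torch.randn(8)
new_w = hyper(context)                              # (32,) column for the new feature
extended_w = torch.cat([base_first_layer.weight, new_w.unsqueeze(1)], dim=1)
print(extended_w.shape)                             # torch.Size([32, 11])
```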
- Reinforcement Learning of Graph Neural Networks for Service Function Chaining [3.9373541926236766]
Service function chaining (SFC) modules play an important role by generating efficient paths for network traffic through physical servers.
Previous supervised learning methods demonstrated that the network features can be represented by graph neural networks (GNNs) for the SFC task.
In this paper, we apply reinforcement learning methods for training models on various network topologies with unlabeled data.
arXiv Detail & Related papers (2020-11-17T03:50:53Z)
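
The control loop behind such an approach can be sketched with REINFORCE: a policy places each VNF of a chain on a server and is reinforced by a path-quality reward. The MLP policy (standing in for the paper's GNN encoder), the state features, and the spread-VNFs reward are all placeholders.

```python
import torch
import torch.nn as nn

# Toy REINFORCE loop for SFC placement: the policy picks a server for each VNF.
n_servers, n_vnfs, feat_dim = 4, 3, 8
policy = nn.Sequential(nn.Linear(feat_dim, 32), nn.ReLU(),
                       nn.Linear(32, n_servers))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

def reward(placement):
    # placeholder reward: prefer spreading VNFs across distinct servers
    return float(len(set(placement)))

for episode in range(200):
    state = torch.randn(feat_dim)          # stand-in network-state features
    log_probs, placement = [], []
    for _ in range(n_vnfs):
        dist = torch.distributions.Categorical(logits=policy(state))
        a = dist.sample()
        log_probs.append(dist.log_prob(a))
        placement.append(a.item())
    loss = -reward(placement) * torch.stack(log_probs).sum()
    opt.zero_grad(); loss.backward(); opt.step()
```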
- Incremental Training of a Recurrent Neural Network Exploiting a Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z)
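
A rough sketch of the hidden-state separation idea, assuming clockwork-style update rates in which module i updates every 2**i steps and also sees the states of the faster modules below it; the module sizes, update schedule, and wiring are assumptions rather than the paper's design.

```python
import torch
import torch.nn as nn

class MultiScaleRNN(nn.Module):
    """Hidden state split into modules; module i updates every 2**i steps,
    so later modules track progressively longer dependencies."""
    def __init__(self, in_dim=4, module_dim=16, n_modules=3):
        super().__init__()
        self.cells = nn.ModuleList(
            nn.RNNCell(in_dim + i * module_dim, module_dim)
            for i in range(n_modules))

    def forward(self, x):                     # x: (batch, time, in_dim)
        B, T, _ = x.shape
        h = [torch.zeros(B, c.hidden_size) for c in self.cells]
        for t in range(T):
            for i, cell in enumerate(self.cells):
                if t % (2 ** i) == 0:         # slower clock for higher modules
                    inp = torch.cat([x[:, t]] + h[:i], dim=1)
                    h[i] = cell(inp, h[i])
        return torch.cat(h, dim=1)            # multi-scale summary state

out = MultiScaleRNN()(torch.randn(2, 12, 4))
print(out.shape)                              # torch.Size([2, 48])
```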
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.