Multidomain transformer-based deep learning for early detection of
network intrusion
- URL: http://arxiv.org/abs/2309.01070v1
- Date: Sun, 3 Sep 2023 04:18:08 GMT
- Title: Multidomain transformer-based deep learning for early detection of
network intrusion
- Authors: Jinxin Liu, Murat Simsek, Michele Nogueira, Burak Kantarci
- Abstract summary: Timely response of Network Intrusion Detection Systems (NIDS) is constrained by the flow generation process.
We propose a novel feature extractor, the Time Series Network Flow Meter (TS-NFM), which represents a network flow as an MTS with explainable features.
- A new deep learning-based early detection model, the Multi-Domain Transformer (MDT), is proposed, which incorporates the frequency domain into the Transformer.
- Score: 15.99260480348544
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Timely response of Network Intrusion Detection Systems (NIDS) is constrained
by the flow generation process which requires accumulation of network packets.
This paper introduces Multivariate Time Series (MTS) early detection into NIDS
to identify malicious flows prior to their arrival at target systems. With this
in mind, we first propose a novel feature extractor, the Time Series Network Flow Meter (TS-NFM), which represents a network flow as an MTS with explainable features, and a new benchmark dataset, SCVIC-TS-2022, is created using TS-NFM and the meta-data of CICIDS2017. Additionally, a new deep learning-based early
detection model called Multi-Domain Transformer (MDT) is proposed, which
incorporates the frequency domain into the Transformer. This work further proposes a Multi-Domain Multi-Head Attention (MD-MHA) mechanism to improve the feature extraction ability of MDT. Based on the experimental results, the proposed methodology improves the earliness of the conventional NIDS (i.e., the percentage of packets that are used for classification) by a factor of 5×10^4 and the duration-based earliness (i.e., the percentage of the duration of the classified packets of a flow) by a factor of 60, resulting in an 84.1% macro F1 score (31%
higher than Transformer) on SCVIC-TS-2022. Additionally, the proposed MDT
outperforms the state-of-the-art early detection methods by 5% and 6% on the ECG and Wafer datasets, respectively.
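The multi-domain idea behind MDT is that a flow, represented by TS-NFM as a per-packet multivariate time series, carries complementary information in the time and frequency domains, and that attention can mix the two. The PyTorch sketch below illustrates that idea only; the module and variable names (MultiDomainAttention, flow_mts), the FFT-magnitude frequency view, and the cross-attention fusion are illustrative assumptions, not the authors' released MD-MHA implementation or the exact TS-NFM feature schema.

```python
# Illustrative sketch: fuse time-domain and frequency-domain views of a
# packet-level multivariate time series (MTS) with attention.
import torch
import torch.nn as nn


class MultiDomainAttention(nn.Module):
    """Attend over time-domain tokens and FFT-magnitude tokens, then fuse."""

    def __init__(self, n_features: int, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.time_proj = nn.Linear(n_features, d_model)   # per-packet features -> tokens
        self.freq_proj = nn.Linear(n_features, d_model)   # per-frequency-bin magnitudes -> tokens
        self.time_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.fuse = nn.Linear(2 * d_model, d_model)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, n_features) -- one row per packet of the flow
        t = self.time_proj(x)                              # time-domain tokens
        spec = torch.fft.rfft(x, dim=1).abs()              # frequency-domain view of each feature
        f = self.freq_proj(spec)                           # frequency tokens
        t_out, _ = self.time_attn(t, t, t)                 # self-attention over packets
        c_out, _ = self.cross_attn(t, f, f)                # time queries attend to frequency bins
        return self.fuse(torch.cat([t_out, c_out], dim=-1))


# Toy usage: a "flow" of 8 packets, each described by 5 explainable features
# (e.g., packet length, inter-arrival time, direction, flags, payload size --
# hypothetical feature names, not the exact TS-NFM schema).
flow_mts = torch.randn(1, 8, 5)
block = MultiDomainAttention(n_features=5)
tokens = block(flow_mts)                                   # (1, 8, 64)
logits = tokens.mean(dim=1) @ torch.randn(64, 2)           # stand-in flow classifier head
print(tokens.shape, logits.shape)
```

Taking FFT magnitudes keeps the frequency tokens real-valued at the cost of discarding phase; a faithful reproduction of MD-MHA would instead follow the paper's own formulation for combining the two domains inside each attention head.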
Related papers
- MCDFN: Supply Chain Demand Forecasting via an Explainable Multi-Channel Data Fusion Network Model [0.0]
We introduce the Multi-Channel Data Fusion Network (MCDFN), a hybrid architecture that integrates CNN, Long Short-Term Memory networks (LSTM), and Gated Recurrent Units (GRU)
Our comparative benchmarking demonstrates that MCDFN outperforms seven other deep-learning models.
This research advances demand forecasting methodologies and offers practical guidelines for integrating MCDFN into supply chain systems.
arXiv Detail & Related papers (2024-05-24T14:30:00Z)
- TCCT-Net: Two-Stream Network Architecture for Fast and Efficient Engagement Estimation via Behavioral Feature Signals [58.865901821451295]
We present a novel two-stream feature fusion "Tensor-Convolution and Convolution-Transformer Network" (TCCT-Net) architecture.
To better learn the meaningful patterns in the temporal-spatial domain, we design a "CT" stream that integrates a hybrid convolutional-transformer.
In parallel, to efficiently extract rich patterns from the temporal-frequency domain, we introduce a "TC" stream that uses Continuous Wavelet Transform (CWT) to represent information in a 2D tensor form.
arXiv Detail & Related papers (2024-04-15T06:01:48Z)
- DT-DDNN: A Physical Layer Security Attack Detector in 5G RF Domain for CAVs [11.15939066175832]
Jamming attacks pose substantial risks to the 5G network.
This work presents a novel deep learning-based technique for detecting jammers in CAV networks.
Results show that the proposed method achieves a 96.4% detection rate at extremely low jamming power.
arXiv Detail & Related papers (2024-03-05T04:29:31Z)
- ADC/DAC-Free Analog Acceleration of Deep Neural Networks with Frequency Transformation [2.7488316163114823]
This paper proposes a novel approach to an energy-efficient acceleration of frequency-domain neural networks by utilizing analog-domain frequency-based tensor transformations.
Our approach achieves more compact cells by eliminating the need for trainable parameters in the transformation matrix.
On a 16×16 crossbar, for 8-bit input processing, the proposed approach achieves an energy efficiency of 1602 tera-operations per second per watt.
arXiv Detail & Related papers (2023-09-04T19:19:39Z)
- A Novel Two Stream Decision Level Fusion of Vision and Inertial Sensors Data for Automatic Multimodal Human Activity Recognition System [2.5214116139219787]
This paper presents a novel multimodal human activity recognition system.
It uses a two-stream decision level fusion of vision and inertial sensors.
The accuracies obtained by the proposed system are 96.9%, 97.6%, 98.7%, and 95.9%, respectively.
arXiv Detail & Related papers (2023-06-27T19:29:35Z)
- Transform Once: Efficient Operator Learning in Frequency Domain [69.74509540521397]
We study deep neural networks designed to harness the structure in frequency domain for efficient learning of long-range correlations in space or time.
This work introduces a blueprint for frequency domain learning through a single transform: transform once (T1).
arXiv Detail & Related papers (2022-11-26T01:56:05Z)
- Autoencoder Based Iterative Modeling and Multivariate Time-Series Subsequence Clustering Algorithm [0.0]
This paper introduces an algorithm for the detection of change-points and the identification of the corresponding subsequences in transient multivariate time-series data (MTSD).
We use a recurrent neural network (RNN) based Autoencoder (AE) which is iteratively trained on incoming data.
A model of the identified subsequence is saved and used for recognition of repeating subsequences as well as fast offline clustering.
arXiv Detail & Related papers (2022-09-09T09:59:56Z)
- Deep Learning-Based Synchronization for Uplink NB-IoT [72.86843435313048]
We propose a neural network (NN)-based algorithm for device detection and time-of-arrival (ToA) estimation for the narrowband physical random-access channel (NPRACH) of narrowband Internet of Things (NB-IoT).
The introduced NN architecture leverages residual convolutional networks as well as knowledge of the preamble structure of the 5G New Radio (5G NR) specifications.
arXiv Detail & Related papers (2022-05-22T12:16:43Z)
- Federated Learning for Energy-limited Wireless Networks: A Partial Model Aggregation Approach [79.59560136273917]
Limited communication resources (bandwidth and energy) and data heterogeneity across devices are the main bottlenecks for federated learning (FL).
We first devise a novel FL framework with partial model aggregation (PMA).
The proposed PMA-FL improves accuracy by 2.72% and 11.6% on two typical heterogeneous datasets.
arXiv Detail & Related papers (2022-04-20T19:09:52Z)
- Wake Word Detection with Streaming Transformers [72.66551640048405]
Our experiments on the Mobvoi wake word dataset demonstrate that the proposed Transformer model outperforms the baseline convolutional network by 25% on average in false rejection rate at the same false alarm rate.
arXiv Detail & Related papers (2021-02-08T19:14:32Z)
- A Transductive Multi-Head Model for Cross-Domain Few-Shot Learning [72.30054522048553]
We present a new method, Transductive Multi-Head Few-Shot learning (TMHFS), to address the Cross-Domain Few-Shot Learning challenge.
The proposed methods greatly outperform the strong baseline, fine-tuning, on four different target domains.
arXiv Detail & Related papers (2020-06-08T02:39:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.