DI-NIDS: Domain Invariant Network Intrusion Detection System
- URL: http://arxiv.org/abs/2210.08252v1
- Date: Sat, 15 Oct 2022 10:26:22 GMT
- Title: DI-NIDS: Domain Invariant Network Intrusion Detection System
- Authors: Siamak Layeghy, Mahsa Baktashmotlagh, Marius Portmann
- Abstract summary: In various applications, such as computer vision, domain adaptation techniques have been successful.
In the case of network intrusion detection however, the state-of-the-art domain adaptation approaches have had limited success.
We propose to extract domain invariant features using adversarial domain adaptation from multiple network domains.
- Score: 9.481792073140204
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The performance of machine learning based network intrusion detection systems
(NIDSs) severely degrades when deployed on a network with significantly
different feature distributions from the ones of the training dataset. In
various applications, such as computer vision, domain adaptation techniques
have been successful in mitigating the gap between the distributions of the
training and test data. In the case of network intrusion detection however, the
state-of-the-art domain adaptation approaches have had limited success.
According to recent studies, as well as our own results, the performance of an
NIDS considerably deteriorates when the 'unseen' test dataset does not follow
the training dataset distribution. In some cases, swapping the train and test
datasets makes this even more severe. In order to enhance the generalisability
of machine learning based network intrusion detection systems, we propose to
extract domain invariant features using adversarial domain adaptation from
multiple network domains, and then apply an unsupervised technique for
recognising abnormalities, i.e., intrusions. More specifically, we train a
domain adversarial neural network on labelled source domains, extract the
domain invariant features, and train a One-Class SVM (OSVM) model to detect
anomalies. At test time, we feed the unlabelled test data to the feature
extractor network to project it into a domain invariant space, and then apply
OSVM on the extracted features to achieve our final goal of detecting
intrusions. Our extensive experiments on the NIDS benchmark datasets of
NFv2-CIC-2018 and NFv2-UNSW-NB15 show that our proposed setup demonstrates
superior cross-domain performance in comparison to the previous approaches.
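The two-stage setup described in the abstract (a domain-adversarial feature extractor followed by a One-Class SVM on the extracted features) can be sketched as follows. This is a minimal illustration of the data flow only, not the paper's implementation: the trained DANN encoder is replaced by a stand-in random linear projection, and the flow dimensions, sample counts, and hyperparameters are invented for the example.

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)

# Stand-in for the trained DANN feature extractor. In the paper this is a
# neural network trained with a gradient-reversal domain classifier so that
# the extracted features become domain invariant; here a fixed random linear
# projection with a tanh nonlinearity merely illustrates the data flow.
W = rng.normal(scale=0.1, size=(20, 8))

def extract_features(flows):
    # Project raw flow features into the (assumed) domain-invariant space.
    return np.tanh(flows @ W)

# Benign source-domain flows (synthetic; stands in for labelled NIDS data).
benign_train = rng.normal(loc=0.0, size=(500, 20))

# Stage 2: fit a One-Class SVM on the features of benign source traffic.
ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale")
ocsvm.fit(extract_features(benign_train))

# At test time: project unseen target-domain flows into the same feature
# space and flag outliers (predicted label -1) as intrusions.
target_benign = rng.normal(loc=0.0, size=(100, 20))
target_attack = rng.normal(loc=4.0, size=(100, 20))  # synthetic anomalies
pred_benign = ocsvm.predict(extract_features(target_benign))
pred_attack = ocsvm.predict(extract_features(target_attack))
print("flagged benign fraction:", (pred_benign == -1).mean())
print("flagged attack fraction:", (pred_attack == -1).mean())
```

Because the OCSVM only ever sees features from the shared projection, the same fitted model can be applied to traffic from a different network domain, which is the point of making the extractor domain invariant before the anomaly-detection stage.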
Related papers
- On the Domain Generalizability of RF Fingerprints Through Multifractal Dimension Representation [6.05147450902935]
RF data-driven device fingerprinting through the use of deep learning has recently surfaced as a possible method for enabling secure device identification and authentication.
Traditional approaches are commonly susceptible to the domain adaptation problem where a model trained on data collected under one domain performs badly when tested on data collected under a different domain.
In this work, we propose using multifractal analysis and the variance fractal dimension trajectory (VFDT) as a data representation input to the deep neural network to extract device fingerprints that are domain generalizable.
arXiv Detail & Related papers (2024-02-15T16:07:35Z)
- Generalizable Metric Network for Cross-domain Person Re-identification [55.71632958027289]
Cross-domain (i.e., domain generalization) scene presents a challenge in Re-ID tasks.
Most existing methods aim to learn domain-invariant or robust features for all domains.
We propose a Generalizable Metric Network (GMN) to explore sample similarity in the sample-pair space.
arXiv Detail & Related papers (2023-06-21T03:05:25Z)
- Deep Unsupervised Domain Adaptation: A Review of Recent Advances and Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) is proposed to counter the performance drop on data in a target domain.
UDA has yielded promising results on natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z)
- Domain-Invariant Proposals based on a Balanced Domain Classifier for Object Detection [8.583307102907295]
Object recognition from images means to automatically find object(s) of interest and to return their category and location information.
Benefiting from research on deep learning, like convolutional neural networks(CNNs) and generative adversarial networks, the performance in this field has been improved significantly.
However, mismatching distributions, i.e., domain shifts, lead to a significant performance drop.
arXiv Detail & Related papers (2022-02-12T00:21:27Z)
- Decompose to Adapt: Cross-domain Object Detection via Feature Disentanglement [79.2994130944482]
We design a Domain Disentanglement Faster-RCNN (DDF) to eliminate the source-specific information in the features for detection task learning.
Our DDF method facilitates the feature disentanglement at the global and local stages, with a Global Triplet Disentanglement (GTD) module and an Instance Similarity Disentanglement (ISD) module.
By outperforming state-of-the-art methods on four benchmark UDA object detection tasks, our DDF method is demonstrated to be effective with wide applicability.
arXiv Detail & Related papers (2022-01-06T05:43:01Z)
- Attentive Prototypes for Source-free Unsupervised Domain Adaptive 3D Object Detection [85.11649974840758]
3D object detection networks tend to be biased towards the data they are trained on.
We propose a single-frame approach for source-free, unsupervised domain adaptation of lidar-based 3D object detectors.
arXiv Detail & Related papers (2021-11-30T18:42:42Z)
- CADA: Multi-scale Collaborative Adversarial Domain Adaptation for Unsupervised Optic Disc and Cup Segmentation [3.587294308501889]
We propose a novel unsupervised domain adaptation framework, called Collaborative Adversarial Domain Adaptation (CADA).
Our proposed CADA is an interactive paradigm that presents an exquisite collaborative adaptation through both adversarial learning and ensembling weights at different network layers.
We show that our CADA model incorporating multi-scale input training can overcome performance degradation and outperform state-of-the-art domain adaptation methods.
arXiv Detail & Related papers (2021-10-05T23:44:26Z)
- AFAN: Augmented Feature Alignment Network for Cross-Domain Object Detection [90.18752912204778]
Unsupervised domain adaptation for object detection is a challenging problem with many real-world applications.
We propose a novel augmented feature alignment network (AFAN) which integrates intermediate domain image generation and domain-adversarial training.
Our approach significantly outperforms the state-of-the-art methods on standard benchmarks for both similar and dissimilar domain adaptations.
arXiv Detail & Related papers (2021-06-10T05:01:20Z)
- Anomaly Detection on Attributed Networks via Contrastive Self-Supervised Learning [50.24174211654775]
We present a novel contrastive self-supervised learning framework for anomaly detection on attributed networks.
Our framework fully exploits the local information from network data by sampling a novel type of contrastive instance pair.
A graph neural network-based contrastive learning model is proposed to learn informative embedding from high-dimensional attributes and local structure.
arXiv Detail & Related papers (2021-02-27T03:17:20Z)
- Collaborative Training between Region Proposal Localization and Classification for Domain Adaptive Object Detection [121.28769542994664]
Domain adaptation for object detection tries to adapt the detector from labeled datasets to unlabeled ones for better performance.
In this paper, we are the first to reveal that the region proposal network (RPN) and the region proposal classifier (RPC) demonstrate significantly different transferability when facing a large domain gap.
arXiv Detail & Related papers (2020-09-17T07:39:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed content (including all information) and is not responsible for any consequences.