NBcoded: network attack classifiers based on Encoder and Naive Bayes
model for resource limited devices
- URL: http://arxiv.org/abs/2109.07273v1
- Date: Wed, 15 Sep 2021 13:21:23 GMT
- Title: NBcoded: network attack classifiers based on Encoder and Naive Bayes
model for resource limited devices
- Authors: Lander Segurola-Gil, Francesco Zola, Xabier Echeberria-Barrio and Raul
Orduna-Urrutia
- Abstract summary: NBcoded is a novel light attack classification tool.
This work compares three different NBcoded implementations based on three different Naive Bayes likelihood distribution assumptions.
Our implementation proves to be the best model at reducing training time and disk usage, even though it is outperformed by the other two in terms of Accuracy and F1-score (~2%).
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In recent years, cybersecurity has gained high relevance, making the
detection of attacks or intrusions a key task. In fact, a small breach in a
system, application, or network can cause huge damage to companies. When attack
detection meets the Artificial Intelligence paradigm, it can be addressed with
high-quality classifiers, which often come with high resource demands in terms
of computation or memory usage. This is a serious constraint when attack
classifiers must run on resource-limited devices, or without degrading the
performance of those devices, as happens for example in IoT devices or
industrial systems. To overcome this issue, this work proposes NBcoded, a novel
lightweight attack classification tool. NBcoded works as a pipeline, combining
the noise-removal properties of encoders with the low resource and time
consumption of the Naive Bayes classifier. This work compares three different
NBcoded implementations based on three different Naive Bayes likelihood
distribution assumptions (Gaussian, Complement, and Bernoulli). The best
NBcoded is then compared with state-of-the-art classifiers such as Multilayer
Perceptron and Random Forest. Our implementation proves to be the best model at
reducing training time and disk usage, even though it is outperformed by the
other two in terms of Accuracy and F1-score (~2%).
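The pipeline described above can be sketched in a few lines. This is not the authors' implementation: here TruncatedSVD stands in for the learned encoder, the traffic data is synthetic, and the MinMaxScaler step is an added assumption to keep features non-negative for ComplementNB.

```python
# Rough sketch of an NBcoded-style pipeline: encoder -> Naive Bayes,
# compared across the three likelihood assumptions from the paper.
from sklearn.datasets import make_classification
from sklearn.decomposition import TruncatedSVD
from sklearn.metrics import accuracy_score, f1_score
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import BernoulliNB, ComplementNB, GaussianNB
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

# Synthetic stand-in for labelled network-flow features.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=10,
                           random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

results = {}
for name, nb in [("Gaussian", GaussianNB()),
                 ("Complement", ComplementNB()),
                 ("Bernoulli", BernoulliNB())]:
    # The encoder compresses/denoises the input; MinMaxScaler keeps the
    # encoded features non-negative, which ComplementNB requires.
    clf = make_pipeline(TruncatedSVD(n_components=8, random_state=0),
                        MinMaxScaler(), nb)
    clf.fit(X_tr, y_tr)
    pred = clf.predict(X_te)
    results[name] = (accuracy_score(y_te, pred), f1_score(y_te, pred))

for name, (acc, f1) in results.items():
    print(f"{name:10s} acc={acc:.3f} f1={f1:.3f}")
```

On real traffic data, the same loop would also time `fit` and measure the pickled model size, since training time and disk usage are the metrics NBcoded targets.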
Related papers
- Forging the Forger: An Attempt to Improve Authorship Verification via Data Augmentation [52.72682366640554]
Authorship Verification (AV) is a text classification task concerned with inferring whether a candidate text has been written by one specific author or by someone else.
It has been shown that many AV systems are vulnerable to adversarial attacks, where a malicious author actively tries to fool the classifier by either concealing their writing style, or by imitating the style of another author.
arXiv Detail & Related papers (2024-03-17T16:36:26Z) - A Black-Box Attack on Code Models via Representation Nearest Neighbor
Search [38.09283133342118]
Our proposed approach, RNNS, uses a search seed based on historical attacks to find potential adversarial substitutes.
Based on the vector representation, RNNS predicts and selects better substitutes for attacks.
arXiv Detail & Related papers (2023-05-10T04:58:39Z) - Enhancing Multiple Reliability Measures via Nuisance-extended
Information Bottleneck [77.37409441129995]
In practical scenarios where training data is limited, many predictive signals in the data may instead come from biases in data acquisition.
We consider an adversarial threat model under a mutual information constraint to cover a wider class of perturbations in training.
We propose an autoencoder-based training to implement the objective, as well as practical encoder designs to facilitate the proposed hybrid discriminative-generative training.
arXiv Detail & Related papers (2023-03-24T16:03:21Z) - DOC-NAD: A Hybrid Deep One-class Classifier for Network Anomaly
Detection [0.0]
Machine Learning approaches have been used to enhance the detection capabilities of Network Intrusion Detection Systems (NIDSs)
Recent work has achieved near-perfect performance on binary- and multi-class network anomaly detection tasks.
This paper proposes a Deep One-Class (DOC) classifier for network intrusion detection by only training on benign network data samples.
arXiv Detail & Related papers (2022-12-15T00:08:05Z) - Boosting the Discriminant Power of Naive Bayes [17.43377106246301]
We propose a feature augmentation method employing a stacked auto-encoder to reduce the noise in the data and boost the discriminant power of naive Bayes.
The experimental results show that the proposed method significantly and consistently outperforms the state-of-the-art naive Bayes classifiers.
arXiv Detail & Related papers (2022-09-20T08:02:54Z) - Discrete Key-Value Bottleneck [95.61236311369821]
Deep neural networks perform well on classification tasks where data streams are i.i.d. and labeled data is abundant.
One powerful approach that has addressed this challenge involves pre-training of large encoders on volumes of readily available data, followed by task-specific tuning.
Given a new task, however, updating the weights of these encoders is challenging as a large number of weights needs to be fine-tuned, and as a result, they forget information about the previous tasks.
We propose a model architecture to address this issue, building upon a discrete bottleneck containing pairs of separate and learnable key-value codes.
arXiv Detail & Related papers (2022-07-22T17:52:30Z) - NetSentry: A Deep Learning Approach to Detecting Incipient Large-scale
Network Attacks [9.194664029847019]
We show how to use Machine Learning for Network Intrusion Detection (NID) in a principled way.
We propose NetSentry, perhaps the first of its kind NIDS that builds on Bi-ALSTM, an original ensemble of sequential neural models.
We demonstrate F1 score gains above 33% over the state-of-the-art, as well as up to 3 times higher rates of detecting attacks such as XSS and web bruteforce.
arXiv Detail & Related papers (2022-02-20T17:41:02Z) - Discriminator-Free Generative Adversarial Attack [87.71852388383242]
Generative adversarial attacks can get rid of this limitation.
A Symmetric Saliency-based Auto-Encoder (SSAE) generates the perturbations.
The adversarial examples generated by SSAE not only make the widely-used models collapse, but also achieve good visual quality.
arXiv Detail & Related papers (2021-07-20T01:55:21Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using {-1, +1} to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
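The {-1, +1} decomposition can be illustrated with a small sketch. The paper's exact scheme may differ; this shows a standard bit-style decomposition that holds when quantization levels are odd integers.

```python
# Hedged sketch: decompose an odd-integer quantized weight matrix W into
# k binary branches S_i with entries in {-1, +1}, so W = sum_i 2^i * S_i.
import numpy as np

rng = np.random.default_rng(0)
k = 3                                          # number of binary branches
levels = np.arange(-(2**k - 1), 2**k, 2)       # odd levels: -7, -5, ..., 5, 7
W = rng.choice(levels, size=(4, 6))            # a quantized weight matrix

# Map each odd level w to an integer m in [0, 2^k - 1], then read off bits:
# w = 2*m - (2^k - 1) = sum_i 2^i * s_i  with  s_i = 2*bit_i - 1 in {-1, +1}.
m = (W + (2**k - 1)) // 2
branches = [2 * ((m >> i) & 1) - 1 for i in range(k)]

# Reconstruction is exact.
W_rec = sum((2**i) * S for i, S in enumerate(branches))
assert np.array_equal(W_rec, W)

# A linear layer W @ x then splits into k binary (+/-1) branch computations,
# each of which can be accelerated with bitwise operations.
x = rng.normal(size=6)
y_full = W @ x
y_branches = sum((2**i) * (S @ x) for i, S in enumerate(branches))
assert np.allclose(y_full, y_branches)
```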
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - DAAIN: Detection of Anomalous and Adversarial Input using Normalizing
Flows [52.31831255787147]
We introduce a novel technique, DAAIN, to detect out-of-distribution (OOD) inputs and adversarial attacks (AA).
Our approach monitors the inner workings of a neural network and learns a density estimator of the activation distribution.
Our model can be trained on a single GPU making it compute efficient and deployable without requiring specialized accelerators.
arXiv Detail & Related papers (2021-05-30T22:07:13Z) - An Experimental Analysis of Attack Classification Using Machine Learning
in IoT Networks [3.9236397589917127]
In recent years, there has been a massive increase in the amount of Internet of Things (IoT) devices as well as the data generated by such devices.
As the number of attacks possible on a network increases, it becomes more difficult for traditional intrusion detection systems to cope with these attacks efficiently.
In this paper, we highlight several machine learning (ML) methods such as k-nearest neighbour (KNN), support vector machine (SVM), decision tree (DT), naive Bayes (NB), random forest (RF), artificial neural network (ANN), and logistic regression (LR) that can be used in IDSs.
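A comparison like the one described can be sketched with off-the-shelf scikit-learn estimators. The data here is synthetic (standing in for a labelled IoT traffic dataset) and the default hyperparameters are an assumption, not the paper's experimental setup.

```python
# Hedged sketch: cross-validated comparison of the ML methods listed above
# (KNN, SVM, DT, NB, RF, ANN, LR) as candidate IDS classifiers.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for labelled network traffic.
X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "SVM": SVC(),
    "DT": DecisionTreeClassifier(random_state=0),
    "NB": GaussianNB(),
    "RF": RandomForestClassifier(n_estimators=100, random_state=0),
    "ANN": MLPClassifier(max_iter=1000, random_state=0),
    "LR": LogisticRegression(max_iter=1000),
}
scores = {name: cross_val_score(m, X, y, cv=3).mean()
          for name, m in models.items()}
for name, s in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(f"{name:4s} mean accuracy = {s:.3f}")
```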
arXiv Detail & Related papers (2021-01-10T11:48:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.