An Adversarial Robustness Benchmark for Enterprise Network Intrusion
Detection
- URL: http://arxiv.org/abs/2402.16912v1
- Date: Sun, 25 Feb 2024 16:45:39 GMT
- Title: An Adversarial Robustness Benchmark for Enterprise Network Intrusion
Detection
- Authors: João Vitorino, Miguel Silva, Eva Maia, Isabel Praça
- Abstract summary: The robustness of regularly and adversarially trained RF, XGB, LGBM, and EBM models was evaluated.
NewCICIDS led to models with a better performance, especially XGB and EBM, but RF and LGBM were less robust against the more recent cyber-attacks of HIKARI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: As cyber-attacks become more sophisticated, improving the robustness of
Machine Learning (ML) models must be a priority for enterprises of all sizes.
To reliably compare the robustness of different ML models for cyber-attack
detection in enterprise computer networks, they must be evaluated in
standardized conditions. This work presents a methodical adversarial robustness
benchmark of multiple decision tree ensembles with constrained adversarial
examples generated from standard datasets. The robustness of regularly and
adversarially trained RF, XGB, LGBM, and EBM models was evaluated on the
original CICIDS2017 dataset, a corrected version of it designated as NewCICIDS,
and the HIKARI dataset, which contains more recent network traffic. NewCICIDS
led to models with a better performance, especially XGB and EBM, but RF and
LGBM were less robust against the more recent cyber-attacks of HIKARI. Overall,
the robustness of the models to adversarial cyber-attack examples was improved
without their generalization to regular traffic being affected, enabling a
reliable detection of suspicious activity without costly increases of false
alarms.
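A minimal sketch of the benchmark loop described above, assuming tabular flow features already extracted from CICIDS2017/NewCICIDS/HIKARI: each ensemble is trained regularly and on adversarially augmented data, then scored on clean and perturbed test flows. The bounded-noise perturbation is only a stand-in for the paper's constrained adversarial example generation, and only RF and XGB are shown (LGBM and EBM would be added analogously).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from xgboost import XGBClassifier  # assumes the xgboost package is installed


def perturb(X, epsilon=0.05, rng=None):
    """Bounded random perturbation of numerical features (illustrative stand-in)."""
    rng = rng or np.random.default_rng(0)
    noise = rng.uniform(-epsilon, epsilon, size=X.shape)
    return np.clip(X + noise, X.min(axis=0), X.max(axis=0))


def benchmark(models, X_train, y_train, X_test, y_test):
    results = {}
    for name, make_model in models.items():
        # Regular training on the original flows.
        regular = make_model().fit(X_train, y_train)
        # Adversarial training: augment the training set with perturbed copies.
        X_aug = np.vstack([X_train, perturb(X_train)])
        y_aug = np.concatenate([y_train, y_train])
        hardened = make_model().fit(X_aug, y_aug)
        X_adv = perturb(X_test)
        results[name] = {
            "regular/clean": f1_score(y_test, regular.predict(X_test), average="macro"),
            "regular/adversarial": f1_score(y_test, regular.predict(X_adv), average="macro"),
            "adv-trained/clean": f1_score(y_test, hardened.predict(X_test), average="macro"),
            "adv-trained/adversarial": f1_score(y_test, hardened.predict(X_adv), average="macro"),
        }
    return results


models = {
    "RF": lambda: RandomForestClassifier(n_estimators=100, random_state=0),
    "XGB": lambda: XGBClassifier(n_estimators=100, eval_metric="logloss"),
}
# results = benchmark(models, X_train, y_train, X_test, y_test)
```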
Related papers
- Extending Network Intrusion Detection with Enhanced Particle Swarm Optimization Techniques [0.0]
The present research investigates how to improve Network Intrusion Detection Systems (NIDS) by combining Machine Learning (ML) and Deep Learning (DL) techniques.
The study uses the CSE-CIC-IDS 2018 and LITNET-2020 datasets to compare ML methods (Decision Trees, Random Forest, XGBoost) and DL models (CNNs, RNNs, DNNs) against key performance metrics.
The Decision Tree model performed better across all measures after being fine-tuned with Enhanced Particle Swarm Optimization (EPSO), demonstrating the model's ability to detect network breaches effectively.
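As a rough illustration of what EPSO-style fine-tuning does, the sketch below runs a plain particle swarm over two Decision Tree hyperparameters with cross-validated accuracy as the fitness; the swarm constants, parameter bounds, and choice of tuned parameters are assumptions rather than the study's configuration.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier


def pso_tune_tree(X, y, n_particles=10, n_iters=20, seed=0):
    """Plain PSO over (max_depth, min_samples_split); bounds and constants are assumed."""
    rng = np.random.default_rng(seed)
    low, high = np.array([2.0, 2.0]), np.array([30.0, 20.0])
    pos = rng.uniform(low, high, size=(n_particles, 2))
    vel = np.zeros_like(pos)

    def fitness(p):
        model = DecisionTreeClassifier(max_depth=int(p[0]),
                                       min_samples_split=int(p[1]),
                                       random_state=seed)
        return cross_val_score(model, X, y, cv=3).mean()

    best_pos = pos.copy()
    best_fit = np.array([fitness(p) for p in pos])
    g_best = best_pos[best_fit.argmax()]
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, 1))
        # Inertia plus attraction towards personal and global bests.
        vel = 0.7 * vel + 1.5 * r1 * (best_pos - pos) + 1.5 * r2 * (g_best - pos)
        pos = np.clip(pos + vel, low, high)
        fit = np.array([fitness(p) for p in pos])
        improved = fit > best_fit
        best_pos[improved], best_fit[improved] = pos[improved], fit[improved]
        g_best = best_pos[best_fit.argmax()]
    return {"max_depth": int(g_best[0]), "min_samples_split": int(g_best[1])}
```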
arXiv Detail & Related papers (2024-08-14T17:11:36Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data.
The transmitted model updates can potentially leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates.
We develop a general framework PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proof, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
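The privacy machinery (Lagrange coded computing, zero-knowledge proofs) is out of scope here, but the kind of robust aggregation rule PriRoAgg executes can be sketched in the clear, e.g. a coordinate-wise trimmed mean that bounds the influence of manipulated updates.

```python
import numpy as np


def trimmed_mean_aggregate(updates, trim_ratio=0.2):
    """Coordinate-wise trimmed mean of client updates (shape: n_clients x n_params).

    The trim_ratio highest and lowest values at each coordinate are dropped
    before averaging, limiting the influence of a few manipulated updates.
    """
    updates = np.asarray(updates)
    n_clients = updates.shape[0]
    k = int(n_clients * trim_ratio)
    sorted_updates = np.sort(updates, axis=0)
    kept = sorted_updates[k:n_clients - k] if k > 0 else sorted_updates
    return kept.mean(axis=0)


# Toy example: nine honest clients near zero, one client pushing large values.
rng = np.random.default_rng(0)
honest = rng.normal(0.0, 0.1, size=(9, 4))
malicious = np.full((1, 4), 50.0)
print(trimmed_mean_aggregate(np.vstack([honest, malicious])))
```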
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Reliable Feature Selection for Adversarially Robust Cyber-Attack Detection [0.0]
This work presents a feature selection and consensus process that combines multiple methods and applies them to several network datasets.
By using an improved dataset with more data diversity, selecting the best time-related features and a more specific feature set, and performing adversarial training, the ML models were able to achieve a better adversarially robust generalization.
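A hedged sketch of a feature-selection consensus in this spirit: several standard selectors rank the features and only those voted into the top-k by a majority are kept. The chosen selectors, top_k, and min_votes values are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import f_classif, mutual_info_classif


def consensus_features(X, y, top_k=10, min_votes=2):
    """Return indices of features placed in the top-k by at least min_votes methods."""
    scores = {
        "anova_f": f_classif(X, y)[0],
        "mutual_info": mutual_info_classif(X, y, random_state=0),
        "rf_importance": RandomForestClassifier(
            n_estimators=100, random_state=0).fit(X, y).feature_importances_,
    }
    votes = np.zeros(X.shape[1], dtype=int)
    for s in scores.values():
        votes[np.argsort(s)[::-1][:top_k]] += 1  # one vote per method's top-k
    return np.flatnonzero(votes >= min_votes)
```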
arXiv Detail & Related papers (2024-04-05T16:01:21Z)
- The Effectiveness of Random Forgetting for Robust Generalization [21.163070161951868]
We introduce a novel learning paradigm called "Forget to Mitigate Overfitting" (FOMO).
FOMO alternates between the forgetting phase, which randomly forgets a subset of weights, and the relearning phase, which emphasizes learning generalizable features.
Our experiments show that FOMO alleviates robust overfitting by significantly reducing the gap between the best and last robust test accuracy.
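A minimal sketch of the forgetting phase, assuming a PyTorch model: a random fraction of each weight matrix is re-initialized in place, and ordinary (adversarial) training then acts as the relearning phase. The fraction, schedule, and which tensors are touched are assumptions.

```python
import torch


def forget_random_weights(model, fraction=0.1):
    """Re-initialize a random fraction of each weight matrix in place (forgetting phase)."""
    with torch.no_grad():
        for param in model.parameters():
            if param.dim() < 2:  # skip biases and normalization parameters
                continue
            mask = torch.rand_like(param) < fraction
            fresh = torch.empty_like(param)
            torch.nn.init.kaiming_uniform_(fresh)
            param[mask] = fresh[mask]


# Outline of the alternating schedule (relearning is ordinary training):
# for epoch in range(num_epochs):
#     train_one_epoch(model, loader, optimizer)       # relearning phase
#     if epoch % forget_every == 0:
#         forget_random_weights(model, fraction=0.1)  # forgetting phase
```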
arXiv Detail & Related papers (2024-02-18T23:14:40Z)
- Model Stealing Attack against Graph Classification with Authenticity, Uncertainty and Diversity [80.16488817177182]
GNNs are vulnerable to model stealing attacks, in which an adversary duplicates the target model using only query access.
We introduce three model stealing attacks adapted to different real-world scenarios.
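The generic loop underlying such attacks can be sketched as follows: query the victim through its prediction interface on attacker-chosen inputs and distill the answers into a surrogate. The graph-specific query generation that the paper contributes is abstracted behind a placeholder generate_queries function, and the toy victim below is purely hypothetical.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier


def steal_model(victim_predict, generate_queries, n_queries=1000):
    """victim_predict: black-box label oracle; generate_queries: attacker-chosen inputs."""
    X_query = generate_queries(n_queries)   # attacker-crafted inputs
    y_query = victim_predict(X_query)       # labels leaked through the query interface
    return GradientBoostingClassifier().fit(X_query, y_query)


# Toy usage: random vectors stand in for graph representations, and the victim
# is a hypothetical rule, not a real GNN.
rng = np.random.default_rng(0)
victim = lambda X: (X.sum(axis=1) > 0).astype(int)
queries = lambda n: rng.normal(size=(n, 16))
surrogate = steal_model(victim, queries)
```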
arXiv Detail & Related papers (2023-12-18T05:42:31Z)
- CausalAgents: A Robustness Benchmark for Motion Forecasting using Causal Relationships [8.679073301435265]
We construct a new benchmark for evaluating and improving model robustness by applying perturbations to existing data.
Causality labels are used to perturb the data by deleting non-causal agents from the scene.
Under non-causal perturbations, we observe a 25-38% relative change in minADE as compared to the original.
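For reference, minADE is the average displacement error of the best of K predicted trajectories, and the reported robustness number is the relative change of that metric after the perturbation; a small sketch:

```python
import numpy as np


def min_ade(pred_trajs, gt_traj):
    """pred_trajs: (K, T, 2) candidate trajectories; gt_traj: (T, 2) ground truth."""
    errors = np.linalg.norm(pred_trajs - gt_traj[None], axis=-1)  # (K, T)
    return errors.mean(axis=1).min()


def relative_change(metric_perturbed, metric_original):
    return (metric_perturbed - metric_original) / metric_original
```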
arXiv Detail & Related papers (2022-07-07T21:28:23Z)
- From Environmental Sound Representation to Robustness of 2D CNN Models Against Adversarial Attacks [82.21746840893658]
This paper investigates the impact of different standard environmental sound representations (spectrograms) on the recognition performance and adversarial attack robustness of a victim residual convolutional neural network.
We show that while the ResNet-18 model trained on DWT spectrograms achieves a high recognition accuracy, attacking this model is relatively more costly for the adversary.
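A hedged sketch of producing the two kinds of input representation being compared: an STFT-based log-power spectrogram and a discrete wavelet transform of the waveform. The window size, wavelet, and decomposition level are assumptions, and SciPy/PyWavelets are assumed available.

```python
import numpy as np
import pywt                              # PyWavelets, assumed installed
from scipy.signal import spectrogram


def stft_spectrogram(audio, sr=16000, nperseg=512):
    """Log-power STFT spectrogram of a mono waveform."""
    _, _, sxx = spectrogram(audio, fs=sr, nperseg=nperseg)
    return 10 * np.log10(sxx + 1e-10)


def dwt_coefficients(audio, wavelet="db4", level=5):
    """Multi-level discrete wavelet decomposition; arranging the coefficients
    into a fixed-size 2D input for the CNN is left to the training pipeline."""
    return pywt.wavedec(audio, wavelet, level=level)
```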
arXiv Detail & Related papers (2022-04-14T15:14:08Z)
- Interpolated Joint Space Adversarial Training for Robust and Generalizable Defenses [82.3052187788609]
Adversarial training (AT) is considered to be one of the most reliable defenses against adversarial attacks.
Recent works show generalization improvement with adversarial samples under novel threat models.
We propose a novel threat model called the Joint Space Threat Model (JSTM).
Under JSTM, we develop novel adversarial attacks and defenses.
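The JSTM-specific attacks are not reproduced here; the sketch below is only the standard PGD-based adversarial training step that such defenses build on, written for a generic PyTorch classifier with assumed epsilon and step sizes.

```python
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=10):
    """Standard L-infinity PGD; epsilon and step size are assumed values."""
    x_adv = (x.detach() + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1)
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y):
    model.train()
    x_adv = pgd_attack(model, x, y)           # craft an adversarial batch
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)   # train on the adversarial batch
    loss.backward()
    optimizer.step()
    return loss.item()
```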
arXiv Detail & Related papers (2021-12-12T21:08:14Z)
- Firearm Detection via Convolutional Neural Networks: Comparing a Semantic Segmentation Model Against End-to-End Solutions [68.8204255655161]
Threat detection of weapons and aggressive behavior from live video can be used for rapid detection and prevention of potentially deadly incidents.
One way for achieving this is through the use of artificial intelligence and, in particular, machine learning for image analysis.
We compare a traditional monolithic end-to-end deep learning model and a previously proposed model based on an ensemble of simpler neural networks that detect firearms via semantic segmentation.
arXiv Detail & Related papers (2020-12-17T15:19:29Z)
- Boosting Black-Box Attack with Partially Transferred Conditional Adversarial Distribution [83.02632136860976]
We study black-box adversarial attacks against deep neural networks (DNNs).
We develop a novel mechanism of adversarial transferability, which is robust to the surrogate biases.
Experiments on benchmark datasets and attacking against real-world API demonstrate the superior attack performance of the proposed method.
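The baseline such methods improve on is transfer-based: adversarial examples are crafted on a local surrogate and sent to the black-box target. A minimal FGSM transfer sketch is shown below; the conditional adversarial distribution mechanism from the paper is not reproduced.

```python
import torch
import torch.nn.functional as F


def fgsm_on_surrogate(surrogate, x, y, eps=8 / 255):
    """One-step attack crafted entirely on the local surrogate model."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()


def transfer_success_rate(surrogate, target_predict, x, y):
    """target_predict is the black-box label oracle (e.g. a remote API)."""
    x_adv = fgsm_on_surrogate(surrogate, x, y)
    preds = target_predict(x_adv)
    return (preds != y).float().mean().item()
```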
arXiv Detail & Related papers (2020-06-15T16:45:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.