Multi-agent Reinforcement Learning-based Network Intrusion Detection System
- URL: http://arxiv.org/abs/2407.05766v1
- Date: Mon, 8 Jul 2024 09:18:59 GMT
- Title: Multi-agent Reinforcement Learning-based Network Intrusion Detection System
- Authors: Amine Tellache, Amdjed Mokhtari, Abdelaziz Amara Korba, Yacine Ghamri-Doudane
- Abstract summary: Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture that enables automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
- Score: 3.4636217357968904
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks. Machine learning has emerged as a popular approach for intrusion detection due to its ability to analyze and detect patterns in large volumes of data. However, current ML-based IDS solutions often struggle to keep pace with the ever-changing nature of attack patterns and the emergence of new attack types. Additionally, these solutions face challenges related to class imbalance, where the number of instances belonging to different classes (normal and intrusions) is significantly skewed, hindering their ability to effectively detect minority classes. In this paper, we propose a novel multi-agent reinforcement learning (RL) architecture that enables automatic, efficient, and robust network intrusion detection. To enhance the capabilities of the proposed model, we improve the DQN algorithm by implementing a weighted mean squared error loss function and employing cost-sensitive learning techniques. Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and to adapt effectively to changes in existing attack patterns. Experimental results obtained on the CIC-IDS-2017 dataset demonstrate that our approach effectively handles the class imbalance problem and provides a fine-grained classification of attacks with a very low false positive rate. Compared with current state-of-the-art works, our solution achieves significantly better detection and false positive rates.
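To make the loss modification concrete, here is a minimal sketch of a DQN update with a weighted mean squared error TD loss, where each sample's error is scaled by the cost of its true class. This is an illustration under assumed details (inverse-frequency weighting, network shape, batch layout), not the authors' released code.

```python
import torch
import torch.nn as nn

def class_weights(class_counts: torch.Tensor) -> torch.Tensor:
    # Assumption: cost weights inversely proportional to class frequency,
    # a common cost-sensitive learning heuristic for imbalanced data.
    return class_counts.sum() / (len(class_counts) * class_counts.float())

class QNetwork(nn.Module):
    """Feed-forward Q-network: flow features in, one Q-value per class."""
    def __init__(self, n_features: int, n_classes: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_classes),
        )

    def forward(self, x):
        return self.net(x)

def weighted_mse_td_loss(q_net, batch, weights, gamma=0.99):
    """Weighted MSE between predicted Q-values and TD targets; each
    sample's squared error is scaled by the cost of its true class."""
    states, actions, rewards, next_states, dones, labels = batch
    q_pred = q_net(states).gather(1, actions.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        q_next = q_net(next_states).max(dim=1).values
        td_target = rewards + gamma * (1.0 - dones) * q_next
    return (weights[labels] * (q_pred - td_target) ** 2).mean()
```

In a one-step classification episode the reward could be +1 for a correct prediction and a penalty otherwise; the paper's exact reward and weighting scheme may differ.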
Related papers
- FaultGuard: A Generative Approach to Resilient Fault Prediction in Smart Electrical Grids [53.2306792009435]
FaultGuard is the first framework for fault type and zone classification resilient to adversarial attacks.
We propose a low-complexity fault prediction model and an online adversarial training technique to enhance robustness.
Our model outperforms the state of the art in resilient fault prediction benchmarks, with an accuracy of up to 0.958.
arXiv Detail & Related papers (2024-03-26T08:51:23Z)
- Analyzing Adversarial Inputs in Deep Reinforcement Learning [53.3760591018817]
We present a comprehensive analysis of the characterization of adversarial inputs, through the lens of formal verification.
We introduce a novel metric, the Adversarial Rate, to classify models based on their susceptibility to such perturbations.
Our analysis empirically demonstrates how adversarial inputs can compromise the safety of a given DRL system.
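The paper derives its characterization through formal verification; purely for intuition, the sketch below estimates an adversarial rate empirically, by sampling bounded perturbations and counting decision flips. This sampling approach is a stand-in assumption, not the verification-based metric defined in the paper.

```python
import torch

def empirical_adversarial_rate(model, x, epsilon=0.05, n_samples=1000):
    """Monte Carlo estimate: fraction of uniform l-infinity perturbations
    of input x that change the model's chosen action."""
    model.eval()
    with torch.no_grad():
        base_action = model(x.unsqueeze(0)).argmax(dim=1)
        noise = (torch.rand(n_samples, *x.shape) * 2 - 1) * epsilon
        flipped = model(x + noise).argmax(dim=1) != base_action
        return flipped.float().mean().item()
```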
arXiv Detail & Related papers (2024-02-07T21:58:40Z)
- Few-shot Weakly-supervised Cybersecurity Anomaly Detection [1.179179628317559]
We propose an enhancement to an existing few-shot weakly-supervised deep learning anomaly detection framework.
This framework incorporates data augmentation, representation learning and ordinal regression.
We then evaluate the framework and report its performance on three benchmark datasets.
arXiv Detail & Related papers (2023-04-15T04:37:54Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique in adversarial settings and show that the robustness of the DL-based wireless system against attacks improves significantly.
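As a generic illustration of the defense discussed, the sketch below performs FGSM-style adversarial training for a regression model; the attack type, loss, and step size here are assumptions rather than the paper's exact setup.

```python
import torch
import torch.nn.functional as F

def fgsm_regression(model, x, y, epsilon=0.01):
    """Craft an l-infinity bounded perturbation that increases MSE loss."""
    x_adv = x.clone().detach().requires_grad_(True)
    F.mse_loss(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

def adversarial_training_step(model, optimizer, x, y, epsilon=0.01):
    """One training step on adversarially perturbed inputs."""
    x_adv = fgsm_regression(model, x, y, epsilon)
    optimizer.zero_grad()
    loss = F.mse_loss(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```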
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Improving robustness of jet tagging algorithms with adversarial training [56.79800815519762]
We investigate the vulnerability of flavor tagging algorithms via application of adversarial attacks.
We present an adversarial training strategy that mitigates the impact of such simulated attacks.
arXiv Detail & Related papers (2022-03-25T19:57:19Z)
- Deep Q-Learning based Reinforcement Learning Approach for Network Intrusion Detection [1.7205106391379026]
We introduce a new generation of network intrusion detection methods that combines Q-learning-based reinforcement learning with a deep feed-forward neural network.
Our proposed Deep Q-Learning (DQL) model provides an ongoing auto-learning capability for a network environment.
Our experimental results show that our proposed DQL is highly effective in detecting different intrusion classes and outperforms other similar machine learning approaches.
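A common way to cast intrusion detection as RL, assumed here for illustration and not necessarily this paper's exact design, is to treat each traffic record as a one-step episode: the state is the flow's feature vector, the action is a predicted class, and the reward indicates whether the prediction was correct.

```python
import random
import torch

def dql_training_loop(q_net, optimizer, dataset, n_classes,
                      epsilon=0.1, epochs=5):
    """One-step-episode DQL: state = flow features, action = predicted
    class, reward = +1 if correct else -1, epsilon-greedy exploration."""
    for _ in range(epochs):
        for features, label in dataset:  # (feature tensor, int label)
            if random.random() < epsilon:
                action = random.randrange(n_classes)          # explore
            else:
                with torch.no_grad():
                    action = q_net(features).argmax().item()  # exploit
            reward = 1.0 if action == label else -1.0
            # One-step episode: the TD target reduces to the reward.
            loss = (q_net(features)[action] - reward) ** 2
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```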
arXiv Detail & Related papers (2021-11-27T20:18:00Z)
- Learning to Detect: A Data-driven Approach for Network Intrusion Detection [17.288512506016612]
We perform a comprehensive study on NSL-KDD, a network traffic dataset, by visualizing patterns and employing different learning-based models to detect cyber attacks.
Unlike previous shallow learning and deep learning models that use a single learning model for intrusion detection, we adopt a hierarchical strategy.
We demonstrate the advantage of the unsupervised representation learning model in binary intrusion detection tasks.
arXiv Detail & Related papers (2021-08-18T21:19:26Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to guard the model against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
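One simple instance of a coverage-style monitor, sketched here under assumed details rather than taken from the paper, records per-neuron activation ranges on trusted data and flags inputs whose hidden activations leave those ranges.

```python
import torch

class RangeMonitor:
    """Flags inputs whose hidden-layer activations fall outside the
    per-neuron [min, max] ranges observed on trusted training data."""
    def __init__(self):
        self.lo = None
        self.hi = None

    def fit(self, activations: torch.Tensor):
        # activations: [n_samples, n_neurons] from a chosen hidden layer
        self.lo = activations.min(dim=0).values
        self.hi = activations.max(dim=0).values

    def is_suspicious(self, activation: torch.Tensor) -> bool:
        out_of_range = (activation < self.lo) | (activation > self.hi)
        return bool(out_of_range.any())
```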
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Detection of Insider Attacks in Distributed Projected Subgradient Algorithms [11.096339082411882]
We show that a general neural network is particularly suitable for detecting and localizing malicious agents.
We propose to adopt one of the state-of-the-art approaches in federated learning, i.e., a collaborative peer-to-peer machine learning protocol.
In our simulations, a least-squares problem is considered to verify the feasibility and effectiveness of AI-based methods.
arXiv Detail & Related papers (2021-01-18T08:01:06Z)
- A Hamiltonian Monte Carlo Method for Probabilistic Adversarial Attack and Learning [122.49765136434353]
We present an effective method, called Hamiltonian Monte Carlo with Accumulated Momentum (HMCAM), aiming to generate a sequence of adversarial examples.
We also propose a new generative method called Contrastive Adversarial Training (CAT), which approaches the equilibrium distribution of adversarial examples.
Both quantitative and qualitative analyses on several natural image datasets and practical systems confirm the superiority of the proposed algorithm.
arXiv Detail & Related papers (2020-10-15T16:07:26Z)
- A cognitive based Intrusion detection system [0.0]
Intrusion detection is one of the important mechanisms that provide computer network security.
This paper proposes a new approach based on a deep neural network and a support vector machine classifier.
The proposed model predicts attacks with better accuracy than similar intrusion detection methods.
arXiv Detail & Related papers (2020-05-19T13:30:30Z)