Behaviour-Diverse Automatic Penetration Testing: A Curiosity-Driven
Multi-Objective Deep Reinforcement Learning Approach
- URL: http://arxiv.org/abs/2202.10630v1
- Date: Tue, 22 Feb 2022 02:34:16 GMT
- Authors: Yizhou Yang, Xin Liu
- Abstract summary: Penetration testing plays a critical role in evaluating the security of a target network by emulating real active adversaries.
Deep Reinforcement Learning is seen as a promising solution to automating the process of penetration tests.
We propose a Chebyshev decomposition critic to find diverse adversary strategies that balance different objectives in the penetration test.
- Score: 3.5071575478443435
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Penetration Testing plays a critical role in evaluating the security of a
target network by emulating real active adversaries. Deep Reinforcement
Learning (RL) is seen as a promising solution to automating the process of
penetration tests by reducing human effort and improving reliability. Existing
RL solutions focus on finding a specific attack path to impact the target
hosts. However, in reality, a diverse range of attack variations are needed to
provide comprehensive assessments of the target network's security level.
Hence, the attack agents must consider multiple objectives when penetrating the
network. Nevertheless, this challenge is not adequately addressed in the
existing literature. To this end, we formulate the automatic penetration
testing in the Multi-Objective Reinforcement Learning (MORL) framework and
propose a Chebyshev decomposition critic to find diverse adversary strategies
that balance different objectives in the penetration test. Additionally, the
number of available actions increases with the agent consistently probing the
target network, making the training process intractable in many practical
situations. Thus, we introduce a coverage-based masking mechanism that reduces
attention on previously selected actions to help the agent adapt to future
exploration. Experimental evaluation on a range of scenarios demonstrates the
superiority of our proposed approach when compared to adapted algorithms in
terms of multi-objective learning and performance efficiency.
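The Chebyshev decomposition mentioned above replaces the usual linear weighted sum of objectives with a weighted max-distance to a utopian reference point, which allows policies on non-convex regions of the Pareto front to be recovered. A minimal sketch of Chebyshev scalarization (the function name, weights, and utopian point are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def chebyshev_scalarize(q_values, weights, utopia):
    """Chebyshev scalarization of a multi-objective value vector.

    Minimizes the largest weighted deviation from a utopian point,
    so the score is the negated worst-case weighted gap.
    """
    return -np.max(weights * np.abs(utopia - q_values))

# Hypothetical two-objective example (e.g. hosts compromised, stealth)
q = np.array([0.6, 0.3])          # estimated value per objective
w = np.array([0.5, 0.5])          # preference weights
z_star = np.array([1.0, 1.0])     # utopian point (best value per objective)
score = chebyshev_scalarize(q, w, z_star)
```

Sweeping the weight vector `w` yields different trade-offs between objectives, which is how a diverse set of adversary strategies can be produced.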
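The coverage-based masking mechanism can be illustrated with a small sketch: a penalty proportional to how often each action has already been selected is subtracted from the action logits before the softmax, lowering the probability of previously probed actions and shifting exploration to unvisited ones. Names and values here are hypothetical; the paper's actual mechanism may differ in detail:

```python
import numpy as np

def coverage_mask_logits(logits, action_counts, penalty=1.0):
    """Down-weight previously selected actions.

    Subtracts a count-proportional penalty from the logits before a
    numerically stable softmax, so frequently probed actions become
    less likely to be chosen again.
    """
    masked = logits - penalty * action_counts
    exp = np.exp(masked - masked.max())
    return exp / exp.sum()

logits = np.array([2.0, 2.0, 2.0])   # equal raw preferences
counts = np.array([5.0, 1.0, 0.0])   # how often each action was taken
probs = coverage_mask_logits(logits, counts)
# the never-selected action ends up with the highest probability
```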
Related papers
- Multi-agent Reinforcement Learning-based Network Intrusion Detection System [3.4636217357968904]
Intrusion Detection Systems (IDS) play a crucial role in ensuring the security of computer networks.
We propose a novel multi-agent reinforcement learning (RL) architecture, enabling automatic, efficient, and robust network intrusion detection.
Our solution introduces a resilient architecture designed to accommodate the addition of new attacks and effectively adapt to changes in existing attack patterns.
arXiv Detail & Related papers (2024-07-08T09:18:59Z)
- Enhancing Robotic Navigation: An Evaluation of Single and Multi-Objective Reinforcement Learning Strategies [0.9208007322096532]
This study presents a comparative analysis between single-objective and multi-objective reinforcement learning methods for training a robot to navigate effectively to an end goal.
By modifying the reward function to return a vector of rewards, each pertaining to a distinct objective, the robot learns a policy that effectively balances the different goals.
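The vector-valued reward described above can be sketched as follows; the objectives, names, and values are hypothetical illustrations, not taken from the paper:

```python
import numpy as np

def vector_reward(dist_before, dist_after, collided, energy_used):
    """Return one reward component per objective instead of a scalar.

    Components: progress toward the goal, collision penalty, and
    energy cost. A multi-objective learner balances these rather
    than collapsing them into a single hand-tuned sum.
    """
    progress = dist_before - dist_after
    safety = -1.0 if collided else 0.0
    energy = -energy_used
    return np.array([progress, safety, energy])

# One step: the robot moved 0.8 units closer, no collision, small energy cost
r = vector_reward(5.0, 4.2, False, 0.1)
```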
arXiv Detail & Related papers (2023-12-13T08:00:26Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- Downlink Power Allocation in Massive MIMO via Deep Learning: Adversarial Attacks and Training [62.77129284830945]
This paper considers a regression problem in a wireless setting and shows that adversarial attacks can break the DL-based approach.
We also analyze the effectiveness of adversarial training as a defensive technique and show that the robustness of DL-based wireless systems against attacks improves significantly.
arXiv Detail & Related papers (2022-06-14T04:55:11Z)
- Benchmarking Safe Deep Reinforcement Learning in Aquatic Navigation [78.17108227614928]
We propose a benchmark environment for Safe Reinforcement Learning focusing on aquatic navigation.
We consider a value-based and a policy-gradient Deep Reinforcement Learning (DRL) algorithm.
We also propose a verification strategy that checks the behavior of the trained models over a set of desired properties.
arXiv Detail & Related papers (2021-12-16T16:53:56Z) - Adversarial Machine Learning In Network Intrusion Detection Domain: A
Systematic Review [0.0]
It has been found that deep learning models are vulnerable to data instances that can mislead the model to make incorrect classification decisions.
This survey explores research that employs different aspects of adversarial machine learning in the area of network intrusion detection.
arXiv Detail & Related papers (2021-12-06T19:10:23Z)
- Deep Q-Learning based Reinforcement Learning Approach for Network Intrusion Detection [1.7205106391379026]
We introduce a new generation of network intrusion detection methods that combines Q-learning-based reinforcement learning with a deep feed-forward neural network.
Our proposed Deep Q-Learning (DQL) model provides an ongoing auto-learning capability for a network environment.
Our experimental results show that our proposed DQL is highly effective in detecting different intrusion classes and outperforms other similar machine learning approaches.
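The core of such a deep Q-learning model is the bootstrapped target used to train the value network. A minimal sketch of that target computation (shapes, names, and values are illustrative assumptions; the paper's architecture is not reproduced here):

```python
import numpy as np

def dqn_targets(rewards, next_q_values, done, gamma=0.99):
    """Compute DQN-style targets y = r + gamma * max_a' Q(s', a').

    `done` flags terminal transitions, whose bootstrap term is zeroed.
    In an intrusion-detection setting, actions would correspond to
    classification decisions on network traffic.
    """
    return rewards + gamma * (1.0 - done) * next_q_values.max(axis=1)

r = np.array([1.0, 0.0])                      # reward per transition
q_next = np.array([[0.2, 0.5], [0.1, 0.4]])   # Q(s', .) from the network
done = np.array([0.0, 1.0])                   # episode-termination flags
y = dqn_targets(r, q_next, done)
```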
arXiv Detail & Related papers (2021-11-27T20:18:00Z)
- Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness [53.094682754683255]
We propose a Model-Agnostic Meta-Attack (MAMA) approach to discover stronger attack algorithms automatically.
Our method learns the optimizer in adversarial attacks, parameterized by a recurrent neural network.
We develop a model-agnostic training algorithm to improve the ability of the learned optimizer when attacking unseen defenses.
arXiv Detail & Related papers (2021-10-13T13:54:24Z)
- Increasing the Confidence of Deep Neural Networks by Coverage Analysis [71.57324258813674]
This paper presents a lightweight monitoring architecture based on coverage paradigms to enhance the model's robustness against different unsafe inputs.
Experimental results show that the proposed approach is effective in detecting both powerful adversarial examples and out-of-distribution inputs.
arXiv Detail & Related papers (2021-01-28T16:38:26Z)
- Automatic Curriculum Learning through Value Disagreement [95.19299356298876]
Continually solving new, unsolved tasks is the key to learning diverse behaviors.
In the multi-task domain, where an agent needs to reach multiple goals, the choice of training goals can largely affect sample efficiency.
We propose setting up an automatic curriculum for goals that the agent needs to solve.
We evaluate our method across 13 multi-goal robotic tasks and 5 navigation tasks, and demonstrate performance gains over current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T03:58:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.