Tutorial on Course-of-Action (COA) Attack Search Methods in Computer
Networks
- URL: http://arxiv.org/abs/2205.13763v1
- Date: Fri, 27 May 2022 05:37:07 GMT
- Title: Tutorial on Course-of-Action (COA) Attack Search Methods in Computer
Networks
- Authors: Seok Bin Son, Soohyun Park, Haemin Lee, Joongheon Kim, Soyi Jung, and
Donghwa Kim
- Abstract summary: As the network size grows, traditional course-of-action (COA) attack search methods can suffer from limitations in computing and communication resources.
Reinforcement learning (RL)-based intelligent algorithms are one of the most effective solutions.
- Score: 8.78504593920219
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In the literature of modern network security research, deriving effective and
efficient course-of-action (COA) attack search methods is of interest in
industry and academia. As the network size grows, traditional COA attack
search methods can suffer from limitations in computing and communication
resources. Therefore, various methods have been developed to solve these
problems, and reinforcement learning (RL)-based intelligent algorithms are one
of the most effective solutions. Accordingly, we review the RL-based COA attack
search methods for network attack scenarios in terms of their trends and
contributions.
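As a minimal illustration of the kind of RL-based COA attack search the survey covers, the sketch below runs tabular Q-learning over a toy attack graph to find a course of action from an entry host to a target host. The topology, reward values, and hyperparameters are all assumptions for illustration and do not come from the paper.

```python
# Minimal sketch: tabular Q-learning over a toy attack graph, standing in
# for the RL-based COA attack search methods the survey reviews.
# The graph, rewards, and hyperparameters below are illustrative assumptions.
import random

# Toy network: each node lists the hosts reachable from it; node 3 is the target.
ATTACK_GRAPH = {0: [1, 2], 1: [2, 3], 2: [3], 3: []}
TARGET = 3

def q_learning_coa(episodes=500, alpha=0.5, gamma=0.9, eps=0.2, seed=0):
    rng = random.Random(seed)
    # One Q-value per (host, next-host) lateral-movement action.
    q = {(s, a): 0.0 for s, nbrs in ATTACK_GRAPH.items() for a in nbrs}
    for _ in range(episodes):
        state = 0  # entry host
        while state != TARGET:
            actions = ATTACK_GRAPH[state]
            if rng.random() < eps:  # epsilon-greedy exploration
                action = rng.choice(actions)
            else:
                action = max(actions, key=lambda a: q[(state, a)])
            # Reward: +10 for compromising the target, -1 per lateral move.
            reward = 10.0 if action == TARGET else -1.0
            best_next = max((q[(action, a)] for a in ATTACK_GRAPH[action]),
                            default=0.0)
            q[(state, action)] += alpha * (reward + gamma * best_next
                                           - q[(state, action)])
            state = action
    # Greedy rollout: the learned course of action from the entry node.
    path, state = [0], 0
    while state != TARGET:
        state = max(ATTACK_GRAPH[state], key=lambda a: q[(state, a)])
        path.append(state)
    return path

print(q_learning_coa())  # a shortest COA such as [0, 1, 3]
```

The scalability problem the abstract mentions is visible even here: the Q-table grows with the number of (host, edge) pairs, which is why the surveyed work moves to function approximation and deep RL as networks grow.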
Related papers
- Deep Reinforcement Learning for Autonomous Cyber Operations: A Survey [0.0]
The rapid increase in the number of cyber-attacks in recent years raises the need for principled methods for defending networks against malicious actors.
Deep reinforcement learning has emerged as a promising approach for mitigating these attacks.
While DRL has shown much potential for cyber-defence, numerous challenges must be overcome before DRL can be applied to autonomous cyber-operations at scale.
arXiv Detail & Related papers (2023-10-11T16:24:14Z)
- Adversarial Attacks and Defenses in Machine Learning-Powered Networks: A Contemporary Survey [114.17568992164303]
Adversarial attacks and defenses in machine learning and deep neural networks have been gaining significant attention.
This survey provides a comprehensive overview of the recent advancements in the field of adversarial attack and defense techniques.
New avenues of attack are also explored, including search-based, decision-based, drop-based, and physical-world attacks.
arXiv Detail & Related papers (2023-03-11T04:19:31Z)
- MARLIN: Soft Actor-Critic based Reinforcement Learning for Congestion Control in Real Networks [63.24965775030673]
We propose a novel Reinforcement Learning (RL) approach to design generic Congestion Control (CC) algorithms.
Our solution, MARLIN, uses the Soft Actor-Critic algorithm to maximize both entropy and return.
We trained MARLIN on a real network with varying background traffic patterns to overcome the sim-to-real mismatch.
arXiv Detail & Related papers (2023-02-02T18:27:20Z)
- Spatio-Temporal Attack Course-of-Action (COA) Search Learning for Scalable and Time-Varying Networks [9.431571135358649]
One of the key topics in network security research is the autonomous COA attack search method.
New autonomous COA techniques are being developed, and among them, an intelligent spatio-temporal algorithm is designed in this paper.
We propose a spatio-temporal attack COA search algorithm for scalable and time-varying networks.
arXiv Detail & Related papers (2022-09-02T07:45:40Z)
- Meta Reinforcement Learning with Successor Feature Based Context [51.35452583759734]
We propose a novel meta-RL approach that achieves competitive performance compared to existing meta-RL algorithms.
Our method not only learns high-quality policies for multiple tasks simultaneously but can also quickly adapt to new tasks with a small amount of training.
arXiv Detail & Related papers (2022-07-29T14:52:47Z)
- A Survey on Offline Reinforcement Learning: Taxonomy, Review, and Open Problems [0.0]
Reinforcement learning (RL) has experienced a dramatic increase in popularity.
There is still a wide range of domains inaccessible to RL due to the high cost and danger of interacting with the environment.
Offline RL is a paradigm that learns exclusively from static datasets of previously collected interactions.
arXiv Detail & Related papers (2022-03-02T20:05:11Z)
- Improving Robustness of Reinforcement Learning for Power System Control with Adversarial Training [71.7750435554693]
We show that several state-of-the-art RL agents proposed for power system control are vulnerable to adversarial attacks.
Specifically, we use an adversary Markov Decision Process to learn an attack policy, and demonstrate the potency of our attack.
We propose to use adversarial training to increase the robustness of RL agent against attacks and avoid infeasible operational decisions.
arXiv Detail & Related papers (2021-10-18T00:50:34Z)
- Improved Context-Based Offline Meta-RL with Attention and Contrastive Learning [1.3106063755117399]
We improve upon one of the SOTA OMRL algorithms, FOCAL, by incorporating an intra-task attention mechanism and inter-task contrastive learning objectives.
Theoretical analysis and experiments are presented to demonstrate the superior performance, efficiency, and robustness of our end-to-end, model-free method.
arXiv Detail & Related papers (2021-02-22T05:05:16Z)
- Rethink AI-based Power Grid Control: Diving Into Algorithm Design [6.194042945960622]
In this paper, we present an in-depth analysis of DRL-based voltage control from the aspects of algorithm selection, state space representation, and reward engineering.
We propose a novel imitation learning-based approach to directly map power grid operating points to effective actions without any interim reinforcement learning process.
arXiv Detail & Related papers (2020-12-23T23:38:41Z)
- Reannealing of Decaying Exploration Based On Heuristic Measure in Deep Q-Network [82.20059754270302]
We propose an algorithm based on the idea of reannealing, which aims at encouraging exploration only when it is needed.
We perform an illustrative case study showing that it has potential to both accelerate training and obtain a better policy.
arXiv Detail & Related papers (2020-09-29T20:40:00Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It is also capable of handling cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.