CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network
Systems
- URL: http://arxiv.org/abs/2109.03331v1
- Date: Tue, 7 Sep 2021 20:52:44 GMT
- Authors: Li Li, Raed Fayad, Adrian Taylor
- Abstract summary: CyGIL is an experimental testbed of an emulated RL training environment for network cyber operations.
It uses a stateless environment architecture and incorporates the MITRE ATT&CK framework to establish a high-fidelity training environment.
Its comprehensive action space and flexible game design allow the agent training to focus on particular advanced persistent threat (APT) profiles.
- Score: 3.2550963598419957
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Given the success of reinforcement learning (RL) in various domains, it is
promising to explore the application of its methods to the development of
intelligent and autonomous cyber agents. Enabling this development requires a
representative RL training environment. To that end, this work presents CyGIL:
an experimental testbed of an emulated RL training environment for network
cyber operations. CyGIL uses a stateless environment architecture and
incorporates the MITRE ATT&CK framework to establish a high-fidelity training
environment, while presenting a sufficiently abstracted interface to enable RL
training. Its comprehensive action space and flexible game design allow the
agent training to focus on particular advanced persistent threat (APT)
profiles, and to incorporate a broad range of potential threats and
vulnerabilities. By striking a balance between fidelity and simplicity, it aims
to leverage state-of-the-art RL algorithms for application to real-world cyber
defence.
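The abstract's "stateless environment architecture" can be illustrated with a toy Gym-style sketch in which each transition is a pure function of an explicit state, rather than of hidden environment internals. Everything below is hypothetical: the action labels, state fields, rewards, and dynamics are invented for illustration and do not reflect CyGIL's actual API or its ATT&CK-derived action space.

```python
from dataclasses import dataclass, replace
import random

# Hypothetical ATT&CK-flavoured action labels; CyGIL's real action space is far richer.
ACTIONS = ["discovery", "privilege_escalation", "lateral_movement", "exfiltration"]

@dataclass(frozen=True)
class NetState:
    """Immutable snapshot of the (toy) network, passed explicitly on every step,
    in the spirit of a stateless environment architecture."""
    hosts_owned: int = 1
    privilege: int = 0
    data_found: bool = False

def step(state: NetState, action: str, rng: random.Random):
    """Pure transition: (state, action) -> (next_state, reward, done).
    The dynamics are invented purely for illustration."""
    if action == "discovery":
        return replace(state, data_found=True), 0.0, False
    if action == "privilege_escalation" and rng.random() < 0.9:
        return replace(state, privilege=state.privilege + 1), 0.1, False
    if action == "lateral_movement" and state.privilege > 0:
        return replace(state, hosts_owned=state.hosts_owned + 1), 0.2, False
    if action == "exfiltration" and state.data_found and state.hosts_owned >= 2:
        return state, 1.0, True  # toy goal reached
    return state, -0.01, False  # failed or no-op action

# Usage: roll out a fixed "APT-like" action sequence against the toy dynamics.
rng = random.Random(0)
s, total, done = NetState(), 0.0, False
for a in ACTIONS:
    s, r, done = step(s, a, rng)
    total += r
print(done, s.hosts_owned)
```

Because the transition function carries no hidden state, episodes can be replayed, checkpointed, or parallelised freely, which is the practical appeal of the stateless design the abstract describes.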
Related papers
- Aquatic Navigation: A Challenging Benchmark for Deep Reinforcement Learning [53.3760591018817]
We propose a new benchmarking environment for aquatic navigation using recent advances in the integration between game engines and Deep Reinforcement Learning.
Specifically, we focus on PPO, one of the most widely accepted algorithms, and we propose advanced training techniques.
Our empirical evaluation shows that a well-designed combination of these ingredients can achieve promising results.
arXiv Detail & Related papers (2024-05-30T23:20:23Z) - Learning Curricula in Open-Ended Worlds [17.138779075998084]
This thesis develops a class of methods called Unsupervised Environment Design (UED).
Given an environment design space, UED automatically generates an infinite sequence or curriculum of training environments.
The findings in this thesis show that UED autocurricula can produce RL agents exhibiting significantly improved robustness.
arXiv Detail & Related papers (2023-12-03T16:44:00Z) - Towards Autonomous Cyber Operation Agents: Exploring the Red Case [3.805031560408777]
Reinforcement and deep reinforcement learning (RL/DRL) have been applied to develop autonomous agents for cyber network operations (CyOps).
The training environment must simulate, with high fidelity, the CyOps that the agent aims to learn and accomplish.
A good simulator is hard to achieve due to the extreme complexity of the cyber environment.
arXiv Detail & Related papers (2023-09-05T13:56:31Z) - Enabling A Network AI Gym for Autonomous Cyber Agents [2.789574233231923]
This work aims to enable autonomous agents for network cyber operations (CyOps) by applying reinforcement and deep reinforcement learning (RL/DRL).
The required RL training environment is particularly challenging: it must balance the need for high fidelity, best achieved through real network emulation, with the need to run large numbers of training episodes, best achieved through simulation.
A unified training environment, the Cyber Gym for Intelligent Learning (CyGIL), is developed in which an emulated CyGIL-E automatically generates a simulated CyGIL-S.
arXiv Detail & Related papers (2023-04-03T20:47:03Z) - Unified Emulation-Simulation Training Environment for Autonomous Cyber Agents [2.6001628868861504]
This work presents a solution to automatically generate a high-fidelity simulator in the Cyber Gym for Intelligent Learning (CyGIL).
CyGIL provides a unified CyOp training environment where an emulated CyGIL-E automatically generates a simulated CyGIL-S.
The simulator generation is integrated with the agent training process to further reduce the required agent training time.
arXiv Detail & Related papers (2023-04-03T15:00:32Z) - Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
arXiv Detail & Related papers (2022-09-19T16:49:32Z) - Constrained Reinforcement Learning for Robotics via Scenario-Based
Programming [64.07167316957533]
It is crucial to optimize the performance of DRL-based agents while providing guarantees about their behavior.
This paper presents a novel technique for incorporating domain-expert knowledge into a constrained DRL training loop.
Our experiments demonstrate that using our approach to leverage expert knowledge dramatically improves the safety and the performance of the agent.
arXiv Detail & Related papers (2022-06-20T07:19:38Z) - Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z) - Scenic4RL: Programmatic Modeling and Generation of Reinforcement Learning Environments [89.04823188871906]
Generation of diverse realistic scenarios is challenging for real-time strategy (RTS) environments.
Most existing simulators rely on randomly generated environments.
We introduce the benefits of adopting an existing formal scenario specification language, SCENIC, to assist researchers.
arXiv Detail & Related papers (2021-06-18T21:49:46Z) - The NetHack Learning Environment [79.06395964379107]
We present the NetHack Learning Environment (NLE), a procedurally generated rogue-like environment for Reinforcement Learning research.
We argue that NetHack is sufficiently complex to drive long-term research on problems such as exploration, planning, skill acquisition, and language-conditioned RL.
We demonstrate empirical success for early stages of the game using a distributed Deep RL baseline and Random Network Distillation exploration.
arXiv Detail & Related papers (2020-06-24T14:12:56Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.