Enabling A Network AI Gym for Autonomous Cyber Agents
- URL: http://arxiv.org/abs/2304.01366v1
- Date: Mon, 3 Apr 2023 20:47:03 GMT
- Title: Enabling A Network AI Gym for Autonomous Cyber Agents
- Authors: Li Li, Jean-Pierre S. El Rami, Adrian Taylor, James Hailing Rao,
Thomas Kunz
- Abstract summary: This work aims to enable autonomous agents for network cyber operations (CyOps) by applying reinforcement and deep reinforcement learning (RL/DRL).
The required RL training environment is particularly challenging, as it must balance the need for high fidelity, best achieved through real network emulation, with the need to run large numbers of training episodes, best achieved using simulation.
A unified training environment, the Cyber Gym for Intelligent Learning (CyGIL), is developed in which an emulated CyGIL-E automatically generates a simulated CyGIL-S.
- Score: 2.789574233231923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This work aims to enable autonomous agents for network cyber operations
(CyOps) by applying reinforcement and deep reinforcement learning (RL/DRL). The
required RL training environment is particularly challenging, as it must
balance the need for high fidelity, best achieved through real network
emulation, with the need to run large numbers of training episodes, best
achieved using simulation. A unified training environment, namely the Cyber Gym
for Intelligent Learning (CyGIL), is developed in which an emulated CyGIL-E
automatically generates a simulated CyGIL-S. Preliminary experimental results
show that CyGIL-S can train agents in minutes, compared with the days required
in CyGIL-E. Agents trained in CyGIL-S transfer directly to CyGIL-E, showing
full decision proficiency in the emulated "real" network. By enabling offline
RL, the CyGIL solution presents a promising direction towards sim-to-real for
leveraging RL agents in real-world cyber networks.
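The abstract does not publish CyGIL's interfaces; as a minimal sketch of what a Gym-style CyOps training environment could look like, the toy example below encodes hosts as compromise flags and actions as attack targets. All encodings are assumptions, written against the gymnasium API.

```python
# Toy Gym-style cyber-operations environment in the spirit of CyGIL-S.
# All state/action encodings here are hypothetical; the actual CyGIL
# interfaces are not published in the abstract.
import gymnasium as gym
import numpy as np
from gymnasium import spaces


class ToyCyOpsEnv(gym.Env):
    """Toy network: the agent tries to compromise hosts one hop at a time."""

    def __init__(self, num_hosts: int = 5):
        super().__init__()
        self.num_hosts = num_hosts
        # Observation: per-host compromise flag (1 = owned by the agent).
        self.observation_space = spaces.MultiBinary(num_hosts)
        # Action: which host to attack next.
        self.action_space = spaces.Discrete(num_hosts)

    def reset(self, seed=None, options=None):
        super().reset(seed=seed)
        self.owned = np.zeros(self.num_hosts, dtype=np.int8)
        self.owned[0] = 1  # initial foothold
        return self.owned.copy(), {}

    def step(self, action):
        # Lateral movement only succeeds from an adjacent owned host.
        reachable = action > 0 and self.owned[action - 1] == 1
        if reachable:
            self.owned[action] = 1
        reward = 1.0 if reachable and action == self.num_hosts - 1 else 0.0
        terminated = bool(self.owned[-1])
        return self.owned.copy(), reward, terminated, False, {}


env = ToyCyOpsEnv()
obs, _ = env.reset(seed=0)
obs, r, done, trunc, _ = env.step(env.action_space.sample())
```

An emulated CyGIL-E would expose the same interface over a real network, which is what allows an agent trained in the simulated version to transfer directly.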
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
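The summary names the ingredients (a splatting-based simulator, expert demonstrations, Liquid networks) without code; below is a rough behavior-cloning sketch of just the imitation step, with a plain MLP standing in for the Liquid network and random tensors standing in for rendered views and expert commands.

```python
# Behavior-cloning sketch for the imitation-learning step. A plain MLP
# stands in for the paper's Liquid network; observations and expert
# actions are random placeholders for rendered views and pilot commands.
import torch
import torch.nn as nn

obs_dim, act_dim = 64, 4  # hypothetical sizes
policy = nn.Sequential(nn.Linear(obs_dim, 128), nn.Tanh(), nn.Linear(128, act_dim))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

expert_obs = torch.randn(1024, obs_dim)   # stand-in for rendered views
expert_act = torch.randn(1024, act_dim)   # stand-in for expert commands

for epoch in range(10):
    pred = policy(expert_obs)
    loss = nn.functional.mse_loss(pred, expert_act)  # match expert actions
    opt.zero_grad()
    loss.backward()
    opt.step()
```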
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- Towards Autonomous Cyber Operation Agents: Exploring the Red Case [3.805031560408777]
Reinforcement and deep reinforcement learning (RL/DRL) have been applied to develop autonomous agents for cyber network operations (CyOps).
The training environment must simulate with high fidelity the CyOps that the agent aims to learn and accomplish.
A good simulator is hard to achieve due to the extreme complexity of the cyber environment.
arXiv Detail & Related papers (2023-09-05T13:56:31Z)
- Rethinking Closed-loop Training for Autonomous Driving [82.61418945804544]
We present the first empirical study that analyzes the effects of different training benchmark designs on the success of learning agents.
We propose trajectory value learning (TRAVL), an RL-based driving agent that performs planning with multistep look-ahead.
Our experiments show that TRAVL can learn much faster and produce safer maneuvers compared to all the baselines.
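TRAVL's planner is only summarized here; one generic form of planning with multistep look-ahead rolls short action sequences through a learned model and scores the result with a value estimate. All components below are placeholder stubs, not the paper's method.

```python
# Generic multistep look-ahead in the spirit of TRAVL: roll short action
# sequences through a (learned) model and pick the best-scoring first
# action. The model, value function, and action set are placeholder stubs.
from itertools import product
import numpy as np

ACTIONS = [-1.0, 0.0, 1.0]             # e.g. steer left / straight / right

def model(state, action):              # stub dynamics model
    return state + 0.1 * action

def value(state):                      # stub learned value estimate
    return -abs(state)                 # prefer staying near lane center

def plan(state, horizon=3):
    best_seq, best_score = None, -np.inf
    for seq in product(ACTIONS, repeat=horizon):
        s = state
        for a in seq:
            s = model(s, a)
        if value(s) > best_score:
            best_seq, best_score = seq, value(s)
    return best_seq[0]                 # execute first action, then replan

print(plan(0.5))  # -> -1.0, steering back toward the lane center
```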
arXiv Detail & Related papers (2023-06-27T17:58:39Z)
- Unified Emulation-Simulation Training Environment for Autonomous Cyber Agents [2.6001628868861504]
This work presents a solution to automatically generate a high-fidelity simulator in the Cyber Gym for Intelligent Learning (CyGIL).
CyGIL provides a unified CyOp training environment where an emulated CyGIL-E automatically generates a simulated CyGIL-S.
The simulator generation is integrated with the agent training process to further reduce the required agent training time.
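The summary does not specify how the simulator is generated; one plausible reading is fitting an empirical transition and reward model from transitions logged during emulation, illustrated here in tabular form. The states, actions, and trace below are hypothetical.

```python
# Illustrative only: build a tabular simulator from transitions logged in
# an emulated environment, then sample from it. How CyGIL-S is actually
# generated from CyGIL-E is not specified in this summary.
import random
from collections import defaultdict

# Logged (state, action, next_state, reward) tuples from emulation.
trace = [(0, "scan", 1, 0.0), (1, "exploit", 2, 1.0), (0, "scan", 1, 0.0)]

model = defaultdict(list)
for s, a, s2, r in trace:
    model[(s, a)].append((s2, r))     # empirical transition distribution

def sim_step(state, action):
    """Sample a next state and reward from the learned model."""
    return random.choice(model[(state, action)])

print(sim_step(0, "scan"))            # -> (1, 0.0)
```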
arXiv Detail & Related papers (2023-04-03T15:00:32Z)
- Parallel Reinforcement Learning Simulation for Visual Quadrotor Navigation [4.597465975849579]
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world.
We present a simulation framework, built on AirSim, which provides efficient parallel training.
Building on this framework, Ape-X is modified to support decentralised training across multiple AirSim environments.
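Ape-X's core split is many actor processes feeding one central learner; below is a minimal sketch of that pattern with Python multiprocessing, using trivial stand-ins for the AirSim workers.

```python
# Sketch of the Ape-X-style split the summary describes: many actor
# processes generate experience in parallel and push it to a central
# learner through a queue. The "environments" here are trivial stand-ins
# for the AirSim workers.
import multiprocessing as mp
import random

def actor(actor_id, queue, steps=100):
    for _ in range(steps):
        transition = (actor_id, random.random())  # stand-in experience
        queue.put(transition)

if __name__ == "__main__":
    queue = mp.Queue()
    workers = [mp.Process(target=actor, args=(i, queue)) for i in range(4)]
    for w in workers:
        w.start()
    replay = [queue.get() for _ in range(400)]    # learner-side buffer
    for w in workers:
        w.join()
    print(len(replay), "transitions collected from 4 parallel actors")
```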
arXiv Detail & Related papers (2022-09-22T15:27:42Z)
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619]
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with or outperforms several powerful backprop-based RL approaches.
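NGC circuits are more involved than a snippet can show; the underlying idea of learning from local prediction errors rather than backpropagated gradients can be illustrated for a single generative layer. This is a simplification under stated assumptions, not the paper's architecture.

```python
# Single-layer illustration of learning from local prediction errors
# (no backprop), loosely in the spirit of neural generative coding.
# Latent cause and target observation are placeholders.
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(scale=0.1, size=(8, 4))   # generative weights: latent -> obs

z = np.ones(4)                           # fixed latent cause (placeholder)
target = np.ones(8)                      # observed signal (placeholder)
for _ in range(200):
    x = np.tanh(W @ z)                   # prediction of the observation
    error = target - x                   # local prediction error
    W += 0.05 * np.outer(error, z)       # Hebbian-like local update

print(np.round(np.tanh(W @ z), 2))       # predictions approach the target
```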
arXiv Detail & Related papers (2022-09-19T16:49:32Z)
- Learning Connectivity-Maximizing Network Configurations [123.01665966032014]
We propose a supervised learning approach in which a convolutional neural network (CNN) learns from an expert to place communication agents.
We demonstrate the performance of our CNN on canonical line and ring topologies, 105k randomly generated test cases, and larger teams not seen during training.
After training, our system produces connected configurations 2 orders of magnitude faster than the optimization-based scheme for teams of 10-20 agents.
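The exact architecture is not given in the summary; as a schematic of the supervised setup, a small CNN can map a grid encoding of task-agent positions to per-cell placement scores. The grid size and encoding are assumptions.

```python
# Schematic of the supervised placement setup: a small CNN maps a grid
# encoding of task-agent positions to per-cell scores for placing a
# communication agent. Grid size and encoding are assumptions.
import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)

grid = torch.zeros(1, 1, 32, 32)    # occupancy grid of task agents
grid[0, 0, 5, 5] = grid[0, 0, 20, 20] = 1.0

scores = net(grid)                  # per-cell placement scores
best = scores.flatten().argmax()
print(divmod(best.item(), 32))      # (row, col) of the proposed relay
```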
arXiv Detail & Related papers (2021-12-14T18:59:01Z)
- DriverGym: Democratising Reinforcement Learning for Autonomous Driving [75.91049219123899]
We propose DriverGym, an open-source environment for developing reinforcement learning algorithms for autonomous driving.
DriverGym provides access to more than 1000 hours of expert logged data and also supports reactive and data-driven agent behavior.
The performance of an RL policy can be easily validated on real-world data using our extensive and flexible closed-loop evaluation protocol.
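DriverGym's own API is not reproduced here; the closed-loop evaluation protocol it describes amounts to rolling a policy in the environment and aggregating episode returns, as in this generic gymnasium loop. The env id and policy below are placeholders, not DriverGym's actual interface.

```python
# Generic closed-loop evaluation of the kind DriverGym's protocol implies:
# roll a policy in the environment and aggregate episode returns.
import gymnasium as gym

def evaluate(env_id, policy, episodes=5):
    env = gym.make(env_id)
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            obs, reward, terminated, truncated, _ = env.step(policy(obs))
            total += reward
            done = terminated or truncated
    return total / episodes

# Example with a built-in env and a trivial policy:
print(evaluate("CartPole-v1", policy=lambda obs: 0))
```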
arXiv Detail & Related papers (2021-11-12T11:47:08Z)
- CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems [3.2550963598419957]
CyGIL is an experimental testbed providing an emulated RL training environment for network cyber operations.
It uses a stateless environment architecture and incorporates the MITRE ATT&CK framework to establish a high fidelity training environment.
Its comprehensive action space and flexible game design allow the agent training to focus on particular advanced persistent threat (APT) profiles.
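How CyGIL encodes ATT&CK is not detailed in the summary; one natural encoding maps discrete agent actions to ATT&CK technique IDs, which also makes restricting the game to an APT profile straightforward. The IDs below are real ATT&CK techniques, but the mapping and its use in CyGIL are illustrative assumptions.

```python
# Illustrative encoding of a discrete action space over MITRE ATT&CK
# techniques; the technique IDs are real, the mapping is hypothetical.
from enum import Enum

class AttackAction(Enum):
    ACTIVE_SCANNING = "T1595"
    VALID_ACCOUNTS = "T1078"
    REMOTE_SERVICES = "T1021"            # lateral movement
    EXPLOIT_REMOTE_SERVICES = "T1210"
    EXFIL_OVER_C2 = "T1041"

# An APT-profile-specific game can simply restrict the action set:
apt_profile = [AttackAction.ACTIVE_SCANNING, AttackAction.REMOTE_SERVICES]
print([a.value for a in apt_profile])
```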
arXiv Detail & Related papers (2021-09-07T20:52:44Z)
- Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World [0.0]
We present a DRL-based algorithm that is capable of performing autonomous robot control using Deep Q-Networks (DQN).
In our approach, the agent is trained in a simulated environment and can navigate in both simulated and real-world environments.
The trained agent is able to run on limited hardware resources and its performance is comparable to state-of-the-art approaches.
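The summary omits the paper's network details; the standard DQN temporal-difference update such an agent builds on looks like this in minimal, single-transition form. Sizes and data are placeholders.

```python
# Minimal single-transition DQN update (the standard rule a DQN agent
# builds on); network sizes and the transition here are placeholders.
import torch
import torch.nn as nn

obs_dim, n_actions, gamma = 4, 3, 0.99
q_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net = nn.Sequential(nn.Linear(obs_dim, 64), nn.ReLU(), nn.Linear(64, n_actions))
target_net.load_state_dict(q_net.state_dict())
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)

s, a, r, s2 = torch.randn(obs_dim), 1, 0.5, torch.randn(obs_dim)
with torch.no_grad():
    td_target = r + gamma * target_net(s2).max()   # bootstrap from target net
loss = (q_net(s)[a] - td_target) ** 2              # squared TD error
opt.zero_grad()
loss.backward()
opt.step()
```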
arXiv Detail & Related papers (2020-09-23T15:23:54Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
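The described loss has a simple form: Q-values should be unchanged by the sim-to-real translation G. Below is a sketch with stub networks; in the actual method, G is trained jointly with the CycleGAN losses.

```python
# Sketch of an RL-scene consistency term as described: the Q-values of
# an image and its translated counterpart should match. The generator,
# Q-network, and image tensor are placeholder stubs.
import torch
import torch.nn as nn

q_net = nn.Sequential(nn.Flatten(), nn.Linear(3 * 16 * 16, 8))  # stub Q-network
generator = nn.Identity()                # stub sim-to-real translator G

sim_images = torch.randn(4, 3, 16, 16)
translated = generator(sim_images)

# Translation should leave the Q-values (and thus the policy) unchanged.
rl_scene_loss = nn.functional.mse_loss(q_net(translated), q_net(sim_images))
print(rl_scene_loss.item())              # 0.0 for the identity generator
```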
arXiv Detail & Related papers (2020-06-16T08:58:07Z)