Unified Emulation-Simulation Training Environment for Autonomous Cyber Agents
- URL: http://arxiv.org/abs/2304.01244v1
- Date: Mon, 3 Apr 2023 15:00:32 GMT
- Title: Unified Emulation-Simulation Training Environment for Autonomous Cyber Agents
- Authors: Li Li, Jean-Pierre S. El Rami, Adrian Taylor, James Hailing Rao, and Thomas Kunz
- Abstract summary: This work presents a solution to automatically generate a high-fidelity simulator in the Cyber Gym for Intelligent Learning (CyGIL).
CyGIL provides a unified CyOp training environment where an emulated CyGIL-E automatically generates a simulated CyGIL-S.
The simulator generation is integrated with the agent training process to further reduce the required agent training time.
- Score: 2.6001628868861504
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Autonomous cyber agents may be developed by applying reinforcement and deep
reinforcement learning (RL/DRL), where agents are trained in a representative
environment. The training environment must simulate, with high fidelity, the
network Cyber Operations (CyOps) that the agent aims to explore. Given the
complexity of network CyOps, a good simulator is difficult to achieve. This
work presents a systematic solution to automatically generate a high-fidelity
simulator in the Cyber Gym for Intelligent Learning (CyGIL). Through
representation learning and continuous learning, CyGIL provides a unified
CyOp training environment in which an emulated CyGIL-E automatically
generates a simulated CyGIL-S. The simulator generation is integrated with
the agent training process to further reduce the required agent training
time. An agent trained in CyGIL-S transfers directly to CyGIL-E, showing
full transferability to the emulated "real" network. Experimental results
demonstrate the CyGIL training performance. By enabling offline RL, the
CyGIL solution presents a promising direction towards sim-to-real for
leveraging RL agents in real-world cyber networks.
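As a rough illustration of the pipeline the abstract describes, the sketch below collects transition traces from a (toy) emulated environment, fits a simulator from those traces, and then trains an agent in the fast simulated copy. All class names, the lookup-table dynamics model, and the tabular Q-learning trainer are hypothetical stand-ins chosen for brevity; CyGIL itself builds its simulator through representation learning over network CyOps, not a lookup table.

```python
import random
from collections import defaultdict

class EmulatedEnv:
    """Toy stand-in for CyGIL-E: slow but 'ground truth' transitions."""
    def __init__(self, n_states=5, n_actions=3, seed=0):
        rng = random.Random(seed)
        # Fixed toy dynamics that the simulator must learn to mimic.
        self.table = {(s, a): (rng.randrange(n_states), rng.uniform(-1, 1))
                      for s in range(n_states) for a in range(n_actions)}
        self.n_states, self.n_actions = n_states, n_actions
        self.state = 0

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        next_state, reward = self.table[(self.state, action)]
        self.state = next_state
        return next_state, reward

class LearnedSimulator:
    """Toy stand-in for CyGIL-S: dynamics fitted from emulation traces."""
    def __init__(self):
        self.model = {}   # (state, action) -> (next_state, reward)
        self.state = 0

    def fit(self, traces):
        # 'Representation learning' reduced here to memorising observed
        # transitions; the paper learns a model instead.
        for s, a, r, s2 in traces:
            self.model[(s, a)] = (s2, r)

    def reset(self):
        self.state = 0
        return self.state

    def step(self, action):
        next_state, reward = self.model.get((self.state, action),
                                            (self.state, 0.0))
        self.state = next_state
        return next_state, reward

# 1) Collect traces in the emulator, 2) generate the simulator,
# 3) train an agent (here: tabular Q-learning) in the fast simulator.
emu, sim = EmulatedEnv(), LearnedSimulator()
traces, s = [], emu.reset()
for _ in range(500):                       # expensive emulated steps
    a = random.randrange(emu.n_actions)
    s2, r = emu.step(a)
    traces.append((s, a, r, s2))
    s = s2
sim.fit(traces)

Q = defaultdict(float)
s = sim.reset()
for _ in range(20000):                     # cheap simulated steps
    if random.random() < 0.1:              # epsilon-greedy exploration
        a = random.randrange(emu.n_actions)
    else:
        a = max(range(emu.n_actions), key=lambda act: Q[(s, act)])
    s2, r = sim.step(a)
    best_next = max(Q[(s2, act)] for act in range(emu.n_actions))
    Q[(s, a)] += 0.1 * (r + 0.9 * best_next - Q[(s, a)])
    s = s2
```

In the paper's setting, simulator generation is interleaved with agent training, so the expensive emulated steps are amortized across many cheap simulated ones.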
Related papers
- Autonomous Vehicle Controllers From End-to-End Differentiable Simulation [60.05963742334746] (2024-09-12)
We propose a differentiable simulator and design an analytic policy gradients (APG) approach to training AV controllers.
The framework brings the differentiable simulator into an end-to-end training loop, where gradients of the environment dynamics serve as a useful prior that helps the agent learn a more grounded policy.
We find significant improvements in performance and robustness to noise in the dynamics, as well as overall more intuitive, human-like handling.
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202] (2024-06-21)
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task-understanding capabilities of Liquid networks.
- Towards Autonomous Cyber Operation Agents: Exploring the Red Case [3.805031560408777] (2023-09-05)
Reinforcement and deep reinforcement learning (RL/DRL) have been applied to develop autonomous agents for cyber network operations (CyOps).
The training environment must simulate, with high fidelity, the CyOps that the agent aims to learn and accomplish.
A good simulator is hard to achieve due to the extreme complexity of the cyber environment.
- Enabling A Network AI Gym for Autonomous Cyber Agents [2.789574233231923] (2023-04-03)
This work aims to enable autonomous agents for network cyber operations (CyOps) by applying reinforcement and deep reinforcement learning (RL/DRL).
The required RL training environment is particularly challenging, as it must balance the need for high fidelity, best achieved through real network emulation, with the need to run large numbers of training episodes, best achieved using simulation.
A unified training environment, namely the Cyber Gym for Intelligent Learning (CyGIL), is developed, where an emulated CyGIL-E automatically generates a simulated CyGIL-S.
- Continual learning autoencoder training for a particle-in-cell simulation via streaming [52.77024349608834] (2022-11-09)
The upcoming exascale era will provide a new generation of high-resolution physics simulations.
This high resolution will impact the training of machine learning models, since storing the large volume of simulation data on disk is nearly impossible.
This work presents an approach that trains a neural network concurrently with a running simulation, without writing data to disk.
- Parallel Reinforcement Learning Simulation for Visual Quadrotor Navigation [4.597465975849579] (2022-09-22)
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world.
We present a simulation framework, built on AirSim, which provides efficient parallel training.
Building on this framework, Ape-X is modified to incorporate decentralised training across AirSim environments.
- Active Predicting Coding: Brain-Inspired Reinforcement Learning for Sparse Reward Robotic Control Problems [79.07468367923619] (2022-09-19)
We propose a backpropagation-free approach to robotic control through the neuro-cognitive computational framework of neural generative coding (NGC).
We design an agent built completely from powerful predictive coding/processing circuits that facilitate dynamic, online learning from sparse rewards.
We show that our proposed ActPC agent performs well in the face of sparse (extrinsic) reward signals and is competitive with, or outperforms, several powerful backprop-based RL approaches.
- CyGIL: A Cyber Gym for Training Autonomous Agents over Emulated Network Systems [3.2550963598419957] (2021-09-07)
CyGIL is an experimental testbed of an emulated RL training environment for network cyber operations.
It uses a stateless environment architecture and incorporates the MITRE ATT&CK framework to establish a high-fidelity training environment.
Its comprehensive action space and flexible game design allow agent training to focus on particular advanced persistent threat (APT) profiles.
- DriveGAN: Towards a Controllable High-Quality Neural Simulation [147.6822288981004] (2021-04-30)
We introduce a novel high-quality neural simulator referred to as DriveGAN.
DriveGAN achieves controllability by disentangling different components without supervision.
We train DriveGAN on multiple datasets, including 160 hours of real-world driving data.
- Robust Reinforcement Learning-based Autonomous Driving Agent for Simulation and Real World [0.0] (2020-09-23)
We present a DRL-based algorithm that is capable of performing autonomous robot control using Deep Q-Networks (DQN).
In our approach, the agent is trained in a simulated environment and is able to navigate in both simulated and real-world environments.
The trained agent runs on limited hardware resources, and its performance is comparable to state-of-the-art approaches.
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689] (2020-06-16)
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image (see the sketch after this list).
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
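As a concrete reading of the RL-scene consistency loss mentioned in the RL-CycleGAN entry above, the following minimal sketch penalizes any shift in Q-values caused by the sim-to-real image translation. The q_values stub, its random linear weights, and all shapes are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Hypothetical Q-network stub: maps a flattened 32x32 RGB image to a
# vector of Q-values, one per action. A real system would use a trained
# deep Q-network; a random linear map suffices to illustrate the loss.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 32 * 32 * 3))      # 4 actions assumed

def q_values(image):
    return W @ image.reshape(-1)

def rl_scene_consistency_loss(sim_image, translated_image):
    """Penalise any change in Q-values caused by the sim-to-real image
    translation, pushing the translator to preserve task-relevant content."""
    q_sim = q_values(sim_image)
    q_trans = q_values(translated_image)
    return float(np.mean((q_sim - q_trans) ** 2))

# Toy usage: a generator stand-in that lightly perturbs the image.
sim_image = rng.random((32, 32, 3))
translated_image = sim_image + 0.01 * rng.normal(size=sim_image.shape)
print(rl_scene_consistency_loss(sim_image, translated_image))
```

Per the abstract, RL-CycleGAN adds this consistency term to the image-translation training, so the generator cannot alter content that the Q-function depends on.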
This list is automatically generated from the titles and abstracts of the papers listed on this site.