NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios
- URL: http://arxiv.org/abs/2305.17246v2
- Date: Fri, 18 Aug 2023 11:32:44 GMT
- Title: NASimEmu: Network Attack Simulator & Emulator for Training Agents Generalizing to Novel Scenarios
- Authors: Jaromír Janisch, Tomáš Pevný, Viliam Lisý
- Abstract summary: NASimEmu is a new framework for training penetration testing agents.
It provides a simulator and an emulator with a shared interface.
We show how to use the framework to train a general agent that transfers into novel, structurally different scenarios.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Current frameworks for training offensive penetration testing agents with
deep reinforcement learning struggle to produce agents that perform well in
real-world scenarios, due to the reality gap in simulation-based frameworks and
the lack of scalability in emulation-based frameworks. Additionally, existing
frameworks often use an unrealistic metric that measures the agents'
performance on the training data. NASimEmu, a new framework introduced in this
paper, addresses these issues by providing both a simulator and an emulator
with a shared interface. This approach allows agents to be trained in
simulation and deployed in the emulator, thus verifying the realism of the
abstraction used. Our framework promotes the development of general agents that can
transfer to novel scenarios unseen during their training. For the simulation
part, we adopt an existing simulator, NASim, and enhance its realism. The
emulator is implemented with industry-level tools, such as Vagrant, VirtualBox,
and Metasploit. Experiments demonstrate that a simulation-trained agent can be
deployed in emulation, and we show how to use the framework to train a general
agent that transfers into novel, structurally different scenarios. NASimEmu is
available as open source.
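The abstract's central design, a shared interface behind which either a fast simulator or a realistic emulator can sit, can be sketched as follows. This is a minimal illustration only: all class and method names (`PentestEnv`, `SimulatedEnv`, `EmulatedEnv`, `run_episode`) and the toy success model are assumptions, not NASimEmu's actual API.

```python
from abc import ABC, abstractmethod

class PentestEnv(ABC):
    """Shared interface: an agent trained against one backend can be
    deployed against the other without code changes."""
    @abstractmethod
    def reset(self): ...
    @abstractmethod
    def step(self, action): ...  # -> (observation, reward, done)

class SimulatedEnv(PentestEnv):
    """Fast, abstract network model (stands in for the NASim side)."""
    def reset(self):
        self.compromised = set()
        return frozenset(self.compromised)

    def step(self, action):
        host, exploit = action
        self.compromised.add(host)  # toy model: every exploit succeeds
        done = len(self.compromised) >= 2
        return frozenset(self.compromised), 1.0, done

class EmulatedEnv(PentestEnv):
    """Would drive real VMs via Vagrant/VirtualBox and Metasploit;
    here it only shares the interface."""
    def reset(self):
        raise NotImplementedError("requires a deployed VM network")

    def step(self, action):
        raise NotImplementedError

def run_episode(env: PentestEnv, policy):
    obs, done, total = env.reset(), False, 0.0
    while not done:
        obs, reward, done = env.step(policy(obs))
        total += reward
    return total

# Deterministic toy policy: attack the first host not yet compromised.
policy = lambda obs: (next(h for h in ["web", "db"] if h not in obs), "ssh_exploit")
print(run_episode(SimulatedEnv(), policy))  # → 2.0
```

The same `run_episode` loop would run unchanged against `EmulatedEnv` once a VM network is up, which is the property that lets a simulation-trained agent be deployed in emulation.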
Related papers
- Gaussian Splatting to Real World Flight Navigation Transfer with Liquid Networks [93.38375271826202]
We present a method to improve generalization and robustness to distribution shifts in sim-to-real visual quadrotor navigation tasks.
We first build a simulator by integrating Gaussian splatting with quadrotor flight dynamics, and then, train robust navigation policies using Liquid neural networks.
In this way, we obtain a full-stack imitation learning protocol that combines advances in 3D Gaussian splatting radiance field rendering, programming of expert demonstration training data, and the task understanding capabilities of Liquid networks.
arXiv Detail & Related papers (2024-06-21T13:48:37Z)
- Sim-to-real Transfer of Deep Reinforcement Learning Agents for Online Coverage Path Planning [15.792914346054502]
We tackle the challenge of sim-to-real transfer of reinforcement learning (RL) agents for coverage path planning (CPP).
We bridge the sim-to-real gap through a semi-virtual environment with a simulated sensor and obstacles, while including real robot kinematics and real-time aspects.
We find that a high model inference frequency is sufficient for reducing the sim-to-real gap, while fine-tuning degrades performance initially.
arXiv Detail & Related papers (2024-06-07T13:24:19Z)
- RealGen: Retrieval Augmented Generation for Controllable Traffic Scenarios [62.89459646611976]
RealGen is a novel retrieval-based in-context learning framework for traffic scenario generation.
RealGen synthesizes new scenarios by combining behaviors from multiple retrieved examples in a gradient-free way.
This in-context learning framework provides versatile generative capabilities, including the ability to edit scenarios.
arXiv Detail & Related papers (2023-12-19T23:11:06Z)
- Waymax: An Accelerated, Data-Driven Simulator for Large-Scale Autonomous Driving Research [76.93956925360638]
Waymax is a new data-driven simulator for autonomous driving in multi-agent scenes.
It runs entirely on hardware accelerators such as TPUs/GPUs and supports in-graph simulation for training.
We benchmark a suite of popular imitation and reinforcement learning algorithms with ablation studies on different design decisions.
arXiv Detail & Related papers (2023-10-12T20:49:15Z)
- Learning Interactive Real-World Simulators [107.12907352474005]
We explore the possibility of learning a universal simulator of real-world interaction through generative modeling.
We use the simulator to train both high-level vision-language policies and low-level reinforcement learning policies.
Video captioning models can benefit from training with simulated experience, opening up even wider applications.
arXiv Detail & Related papers (2023-10-09T19:42:22Z)
- DeXtreme: Transfer of Agile In-hand Manipulation from Simulation to Reality [64.51295032956118]
We train a policy that can perform robust dexterous manipulation on an anthropomorphic robot hand.
Our work reaffirms the possibilities of sim-to-real transfer for dexterous manipulation in diverse kinds of hardware and simulator setups.
arXiv Detail & Related papers (2022-10-25T01:51:36Z)
- Parallel Reinforcement Learning Simulation for Visual Quadrotor Navigation [4.597465975849579]
Reinforcement learning (RL) is an agent-based approach for teaching robots to navigate within the physical world.
We present a simulation framework, built on AirSim, which provides efficient parallel training.
Building on this framework, Ape-X is modified to incorporate decentralised training of AirSim environments.
arXiv Detail & Related papers (2022-09-22T15:27:42Z)
- SimNet: Computer Architecture Simulation using Machine Learning [3.7019798164954336]
This work describes a concerted effort, where machine learning (ML) is used to accelerate discrete-event simulation.
A GPU-accelerated parallel simulator is implemented based on the proposed instruction latency predictor.
Its simulation accuracy and throughput are validated and evaluated against a state-of-the-art simulator.
arXiv Detail & Related papers (2021-05-12T17:31:52Z)
- RL-CycleGAN: Reinforcement Learning Aware Simulation-To-Real [74.45688231140689]
We introduce the RL-scene consistency loss for image translation, which ensures that the translation operation is invariant with respect to the Q-values associated with the image.
We obtain RL-CycleGAN, a new approach for simulation-to-real-world transfer for reinforcement learning.
arXiv Detail & Related papers (2020-06-16T08:58:07Z)
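The RL-scene consistency idea described above can be sketched as a loss term; the function name and the pure-NumPy formulation are illustrative assumptions (the paper defines this within a full CycleGAN training setup), but the core is simply penalizing any change in Q-values caused by the sim-to-real translation.

```python
import numpy as np

def rl_scene_consistency_loss(q_values_original, q_values_translated):
    """Hypothetical sketch: mean squared difference between the Q-values
    of a simulated image and those of its translated counterpart, so the
    translator is pushed to preserve task-relevant (Q-value) content."""
    diff = np.asarray(q_values_original) - np.asarray(q_values_translated)
    return float(np.mean(diff ** 2))

# Identical Q-values before and after translation give zero loss.
print(rl_scene_consistency_loss([1.0, 2.0], [1.0, 2.0]))  # → 0.0
```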
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.