A Multiagent CyberBattleSim for RL Cyber Operation Agents
- URL: http://arxiv.org/abs/2304.11052v1
- Date: Mon, 3 Apr 2023 20:43:19 GMT
- Title: A Multiagent CyberBattleSim for RL Cyber Operation Agents
- Authors: Thomas Kunz, Christian Fisher, James La Novara-Gsell, Christopher
Nguyen, Li Li
- Abstract summary: CyberBattleSim is a training environment that supports the training of red agents, i.e., attackers.
We added the capability to train blue agents, i.e., defenders.
Our results show that training a blue agent does lead to stronger defenses against attacks.
- Score: 2.789574233231923
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Hardening cyber physical assets is both crucial and labor-intensive.
Recently, Machine Learning (ML) in general and Reinforcement Learning (RL) more
specifically has shown great promise to automate tasks that otherwise would
require significant human insight/intelligence. The development of autonomous
RL agents requires a suitable training environment that allows us to quickly
evaluate various alternatives, in particular how to arrange training scenarios
that pit attackers and defenders against each other. CyberBattleSim is a
training environment that supports the training of red agents, i.e., attackers.
We added the capability to train blue agents, i.e., defenders. The paper
describes our changes and reports on the results we obtained when training blue
agents, either in isolation or jointly with red agents. Our results show that
training a blue agent does lead to stronger defenses against attacks. In
particular, training a blue agent jointly with a red agent increases the blue
agent's capability to thwart sophisticated red agents.
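To make the joint red/blue training loop concrete, the toy sketch below alternates Q-learning updates for an attacking (red) agent and a defending (blue) agent on a shared, highly simplified network. It is a minimal stand-in only: the environment, rewards, and agent code are invented for illustration and do not use the CyberBattleSim API or the authors' multiagent extension.
```python
# Toy stand-in for the joint red/blue training loop (NOT the CyberBattleSim
# API or the authors' multiagent extension). A red agent tries to hold nodes
# of a tiny network while a blue agent reimages them; both learn with tabular
# Q-learning while acting in alternation on the shared state.
import random
from collections import defaultdict

N_NODES, EPISODES, STEPS = 4, 2000, 20
ALPHA, GAMMA, EPS = 0.1, 0.95, 0.1

def step(owned, red_node, blue_node):
    """One joint turn: red compromises a node, then blue reimages a node."""
    owned = set(owned)
    owned.add(red_node)        # red's compromise always succeeds in this toy
    owned.discard(blue_node)   # blue's reimage evicts red from that node
    red_reward = len(owned)    # red is paid for every node it still holds
    return frozenset(owned), red_reward, -red_reward  # blue gets the negation

def eps_greedy(q, state):
    """Pick a node index epsilon-greedily from the agent's Q-table."""
    if random.random() < EPS:
        return random.randrange(N_NODES)
    values = [q[(state, a)] for a in range(N_NODES)]
    return values.index(max(values))

q_red, q_blue = defaultdict(float), defaultdict(float)

for _ in range(EPISODES):
    state = frozenset()                    # no node is compromised at reset
    for _t in range(STEPS):
        a_red, a_blue = eps_greedy(q_red, state), eps_greedy(q_blue, state)
        nxt, r_red, r_blue = step(state, a_red, a_blue)
        # Independent Q-learning updates for both agents on the shared state,
        # mirroring the "train jointly" setting described in the abstract.
        for q, a, r in ((q_red, a_red, r_red), (q_blue, a_blue, r_blue)):
            best_next = max(q[(nxt, b)] for b in range(N_NODES))
            q[(state, a)] += ALPHA * (r + GAMMA * best_next - q[(state, a)])
        state = nxt
```
In the paper's setup, the blue agent is trained either in isolation or jointly with a red agent inside CyberBattleSim; the toy above only mirrors the alternating-update structure of the joint case described in the abstract.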
Related papers
- Multi-Objective Reinforcement Learning for Automated Resilient Cyber Defence [0.0]
Cyber-attacks pose a security threat to military command and control networks, Intelligence, Surveillance, and Reconnaissance (ISR) systems, and civilian critical national infrastructure.
The use of artificial intelligence and autonomous agents in these attacks increases the scale, range, and complexity of this threat and the subsequent disruption they cause.
Autonomous Cyber Defence (ACD) agents aim to mitigate this threat by responding at machine speed and at the scale required to address the problem.
arXiv Detail & Related papers (2024-11-26T16:51:52Z)
- Autonomous Network Defence using Reinforcement Learning [1.7249361224827533]
We investigate the effectiveness of autonomous agents in a realistic network defence scenario.
We show that a novel reinforcement learning agent can reliably defend against continual attacks by two advanced persistent threat (APT) red agents.
arXiv Detail & Related papers (2024-09-26T18:24:09Z)
- Toward Optimal LLM Alignments Using Two-Player Games [86.39338084862324]
In this paper, we investigate alignment through the lens of two-agent games, involving iterative interactions between an adversarial and a defensive agent.
We theoretically demonstrate that this iterative reinforcement learning optimization converges to a Nash Equilibrium for the game induced by the agents.
Experimental results in safety scenarios demonstrate that learning in such a competitive environment not only fully trains agents but also leads to policies with enhanced generalization capabilities for both adversarial and defensive agents.
arXiv Detail & Related papers (2024-06-16T15:24:50Z)
- On Autonomous Agents in a Cyber Defence Environment [0.0]
We explore the utility of the autonomous cyber operation environments presented as part of the Cyber Autonomy Gym for Experimentation.
CAGE Challenge 2 required a defensive Blue agent to defend a network from an attacking Red agent.
We identify four classes of algorithms, namely Single-Agent Deep Reinforcement Learning (DRL), Hierarchical DRL, Ensembles, and Non-DRL approaches.
arXiv Detail & Related papers (2023-09-14T02:09:36Z)
- Towards Autonomous Cyber Operation Agents: Exploring the Red Case [3.805031560408777]
Reinforcement learning and deep reinforcement learning (RL/DRL) have been applied to develop autonomous agents for cyber network operations (CyOps).
The training environment must simulate, with high fidelity, the CyOps the agent aims to learn and accomplish.
A good simulator is hard to achieve due to the extreme complexity of the cyber environment.
arXiv Detail & Related papers (2023-09-05T13:56:31Z)
- ProAgent: Building Proactive Cooperative Agents with Large Language Models [89.53040828210945]
ProAgent is a novel framework that harnesses large language models to create proactive agents.
ProAgent can analyze the present state and infer the intentions of teammates from observations.
ProAgent exhibits a high degree of modularity and interpretability, making it easily integrated into various coordination scenarios.
arXiv Detail & Related papers (2023-08-22T10:36:56Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- Room Clearance with Feudal Hierarchical Reinforcement Learning [2.867517731896504]
We introduce a new simulation environment designed as a tool to build scenarios that can drive RL research in a direction useful for military analysis.
We focus on an abstracted and simplified room clearance scenario, where a team of blue agents have to make their way through a building and ensure that all rooms are cleared of enemy red agents.
We implement a multi-agent version of feudal hierarchical RL that introduces a command hierarchy in which a commander at the higher level sends orders to multiple agents at the lower level, who simply have to learn to follow these orders (a minimal sketch of such a command hierarchy appears after this list).
We find that breaking the task down in this way allows us to
arXiv Detail & Related papers (2021-05-24T15:05:58Z)
- Learning Latent Representations to Influence Multi-Agent Interaction [65.44092264843538]
We propose a reinforcement learning-based framework for learning latent representations of an agent's policy.
We show that our approach outperforms the alternatives and learns to influence the other agent.
arXiv Detail & Related papers (2020-11-12T19:04:26Z)
- SMARTS: Scalable Multi-Agent Reinforcement Learning Training School for Autonomous Driving [96.50297622371457]
Multi-agent interaction is a fundamental aspect of autonomous driving in the real world.
Despite more than a decade of research and development, the problem of how to interact with diverse road users in diverse scenarios remains largely unsolved.
We develop a dedicated simulation platform called SMARTS that generates diverse and competent driving interactions.
arXiv Detail & Related papers (2020-10-19T18:26:10Z)
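As referenced in the Room Clearance entry above, a feudal command hierarchy has a high-level commander issuing orders and low-level agents that only need to learn to follow them. The sketch below is an invented, minimal illustration of that structure (a single worker, toy rewards); it is not the paper's environment, reward design, or algorithm.
```python
# Minimal feudal-RL skeleton (illustrative only; not the paper's environment,
# agents, or code). A commander picks which room a single worker should clear
# next; the worker is rewarded purely for obeying the order, while the
# commander is rewarded by actual task progress (newly cleared rooms).
import random
from collections import defaultdict

ROOMS, EPISODES, STEPS = 5, 2000, 15
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1

q_cmd = defaultdict(float)   # key: (cleared_rooms, order)
q_low = defaultdict(float)   # key: (cleared_rooms, order, move)

def pick(q, state, n):
    """Epsilon-greedy choice over n discrete actions for the given state key."""
    if random.random() < EPS:
        return random.randrange(n)
    scores = [q[state + (a,)] for a in range(n)]
    return scores.index(max(scores))

for _ in range(EPISODES):
    cleared = frozenset()
    for _t in range(STEPS):
        order = pick(q_cmd, (cleared,), ROOMS)        # commander issues an order
        move = pick(q_low, (cleared, order), ROOMS)   # worker picks a room to enter
        new_cleared = cleared | {move}                # entering a room clears it
        r_low = 1.0 if move == order else 0.0         # worker: follow the order
        r_cmd = float(move not in cleared)            # commander: a new room cleared
        # Tabular Q-learning at both levels; the worker bootstraps against the
        # same order for simplicity.
        best_cmd = max(q_cmd[(new_cleared, o)] for o in range(ROOMS))
        q_cmd[(cleared, order)] += ALPHA * (
            r_cmd + GAMMA * best_cmd - q_cmd[(cleared, order)])
        best_low = max(q_low[(new_cleared, order, m)] for m in range(ROOMS))
        q_low[(cleared, order, move)] += ALPHA * (
            r_low + GAMMA * best_low - q_low[(cleared, order, move)])
        cleared = new_cleared
```
The structural point mirrored here is that the worker's learning problem reduces to obeying the current order, while the commander's reward stays tied to actual progress on the overall clearance task.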