Network Defense is Not a Game
- URL: http://arxiv.org/abs/2104.10262v1
- Date: Tue, 20 Apr 2021 21:52:51 GMT
- Title: Network Defense is Not a Game
- Authors: Andres Molina-Markham, Ransom K. Winder, Ahmad Ridley
- Abstract summary: Research seeks to apply Artificial Intelligence to scale and extend the capabilities of human operators to defend networks.
Our position is that network defense is better characterized as a collection of games with uncertain and possibly drifting rules.
We propose to define network defense tasks as distributions of network environments.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Research seeks to apply Artificial Intelligence (AI) to scale and extend the
capabilities of human operators to defend networks. A fundamental problem that
hinders the generalization of successful AI approaches -- i.e., beating humans
at playing games -- is that network defense cannot be defined as a single game
with a fixed set of rules. Our position is that network defense is better
characterized as a collection of games with uncertain and possibly drifting
rules. Hence, we propose to define network defense tasks as distributions of
network environments, to: (i) enable research to apply modern AI techniques,
such as unsupervised curriculum learning and reinforcement learning for network
defense; and, (ii) facilitate the design of well-defined challenges that can be
used to compare approaches for autonomous cyberdefense.
To demonstrate that an approach for autonomous network defense is practical,
it is important to be able to reason about the boundaries of its applicability.
Hence, we need to be able to define network defense tasks that capture sets of
adversarial tactics, techniques, and procedures (TTPs); quality of service
(QoS) requirements; and TTPs available to defenders. Furthermore, the
abstractions to define these tasks must be extensible; must be backed by
well-defined semantics that allow us to reason about distributions of
environments; and should enable the generation of data and experiences from
which an agent can learn.
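The requirements above can be made concrete with a small sketch. This is an illustrative assumption about how a network defense task might be specified, not the paper's (or FARLAND's) actual API; all names and technique IDs below are examples only.

```python
from dataclasses import dataclass, field

# Hypothetical task specification capturing the three ingredients the paper
# names: adversary TTPs, defender TTPs, and QoS requirements.
@dataclass(frozen=True)
class DefenseTask:
    adversary_ttps: frozenset  # e.g. MITRE ATT&CK technique IDs
    defender_ttps: frozenset   # e.g. MITRE Shield technique IDs
    qos_requirements: dict = field(default_factory=dict)  # service -> min availability

task = DefenseTask(
    adversary_ttps=frozenset({"T1078", "T1046"}),
    defender_ttps=frozenset({"DTE0007", "DTE0012"}),
    qos_requirements={"web": 0.99, "dns": 0.999},
)
```

A frozen dataclass keeps the specification immutable, so a task instance can safely parameterize many sampled environments.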
Our approach, named Network Environment Design for Autonomous Cyberdefense,
inspired the architecture of FARLAND, a Framework for Advanced Reinforcement
Learning for Autonomous Network Defense, which we use at MITRE to develop RL
network defenders that perform blue actions from the MITRE Shield matrix
against attackers with TTPs that drift from MITRE ATT&CK TTPs.
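The core idea of defining a task as a distribution of network environments can be sketched as follows. This is a minimal, assumed illustration of the training loop shape, not FARLAND's implementation; the configuration fields and TTP IDs are invented for the example.

```python
import random

def sample_environment(rng: random.Random) -> dict:
    """Draw one environment configuration from the task distribution.

    Sampling a fresh configuration each episode exposes the agent to
    uncertain, drifting rules rather than a single fixed game.
    """
    return {
        "num_hosts": rng.randint(10, 50),
        "attacker_ttps": rng.sample(["T1078", "T1046", "T1059", "T1021"], k=2),
    }

rng = random.Random(0)
episodes = [sample_environment(rng) for _ in range(3)]
for env in episodes:
    # A defender agent would interact with env here and update its policy;
    # the RL machinery itself is omitted from this sketch.
    pass
```

Because the task is a distribution rather than one environment, curriculum learning can reweight which configurations are sampled as training progresses.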
Related papers
- Hierarchical Multi-agent Reinforcement Learning for Cyber Network Defense [7.967738380932909]
We propose a hierarchical Proximal Policy Optimization (PPO) architecture that decomposes the cyber defense task into specific sub-tasks like network investigation and host recovery.
Our approach involves training sub-policies for each sub-task using PPO enhanced with domain expertise.
These sub-policies are then leveraged by a master defense policy that coordinates their selection to solve complex network defense tasks.
arXiv Detail & Related papers (2024-10-22T18:35:05Z)
- Autonomous Network Defence using Reinforcement Learning [1.7249361224827533]
We investigate the effectiveness of autonomous agents in a realistic network defence scenario.
We show that a novel reinforcement learning agent can reliably defend continual attacks by two advanced persistent threat (APT) red agents.
arXiv Detail & Related papers (2024-09-26T18:24:09Z)
- Learning Cyber Defence Tactics from Scratch with Multi-Agent Reinforcement Learning [4.796742432333795]
Teams of intelligent agents in computer network defence roles may reveal promising avenues to safeguard cyber and kinetic assets.
Agents are evaluated on their ability to jointly mitigate attacker activity in host-based defence scenarios.
arXiv Detail & Related papers (2023-08-25T14:07:50Z)
- Inroads into Autonomous Network Defence using Explained Reinforcement Learning [0.5949779668853555]
This paper introduces an end-to-end methodology for studying attack strategies, designing defence agents and explaining their operation.
We use state diagrams, deep reinforcement learning agents trained on different parts of the task and organised in a shallow hierarchy.
Our evaluation shows that the resulting design achieves a substantial performance improvement compared to prior work.
arXiv Detail & Related papers (2023-06-15T17:53:14Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Learning Decentralized Strategies for a Perimeter Defense Game with Graph Neural Networks [111.9039128130633]
We design a graph neural network-based learning framework to learn a mapping from defenders' local perceptions and the communication graph to defenders' actions.
We demonstrate that our proposed networks stay closer to the expert policy and are superior to other baseline algorithms by capturing more intruders.
arXiv Detail & Related papers (2022-09-24T22:48:51Z)
- Fixed Points in Cyber Space: Rethinking Optimal Evasion Attacks in the Age of AI-NIDS [70.60975663021952]
We study blackbox adversarial attacks on network classifiers.
We argue that attacker-defender fixed points are themselves general-sum games with complex phase transitions.
We show that a continual learning approach is required to study attacker-defender dynamics.
arXiv Detail & Related papers (2021-11-23T23:42:16Z)
- Automating Privilege Escalation with Deep Reinforcement Learning [71.87228372303453]
In this work, we exemplify the potential threat of malicious actors using deep reinforcement learning to train automated agents.
We present an agent that uses a state-of-the-art reinforcement learning algorithm to perform local privilege escalation.
Our agent is usable for generating realistic attack sensor data for training and evaluating intrusion detection systems.
arXiv Detail & Related papers (2021-10-04T12:20:46Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.