Learning Decentralized Strategies for a Perimeter Defense Game with
Graph Neural Networks
- URL: http://arxiv.org/abs/2211.01757v1
- Date: Sat, 24 Sep 2022 22:48:51 GMT
- Title: Learning Decentralized Strategies for a Perimeter Defense Game with
Graph Neural Networks
- Authors: Elijah S. Lee, Lifeng Zhou, Alejandro Ribeiro, Vijay Kumar
- Abstract summary: We design a graph neural network-based learning framework to learn a mapping from defenders' local perceptions and the communication graph to defenders' actions.
We demonstrate that our proposed networks stay closer to the expert policy and outperform other baseline algorithms by capturing more intruders.
- Score: 111.9039128130633
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We consider the problem of finding decentralized strategies for multi-agent
perimeter defense games. In this work, we design a graph neural network-based
learning framework to learn a mapping from defenders' local perceptions and the
communication graph to defenders' actions, such that the learned actions are
close to those generated by a centralized expert algorithm. We demonstrate that
our proposed networks stay closer to the expert policy and outperform other
baseline algorithms by capturing more intruders. Our GNN-based networks
are trained at a small scale and can generalize to large scales. To validate
our results, we run perimeter defense games in scenarios with different team
sizes and initial configurations to evaluate the performance of the learned
networks.
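The mapping the abstract describes can be illustrated with a minimal sketch: a one-layer graph neural network in which each defender combines its own local perception with features aggregated from its neighbors in the communication graph, and the resulting policy is trained by imitation to match a centralized expert's actions. All names, sizes, and the random "expert" stand-in below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

N, F, A = 4, 6, 2                         # defenders, perception features, action dims
W_self = rng.normal(size=(F, A)) * 0.1    # weights on a defender's own perception
W_nbr = rng.normal(size=(F, A)) * 0.1     # weights on aggregated neighbor features

# Communication graph as an adjacency matrix (1 = defenders can communicate)
adj = np.array([[0, 1, 0, 0],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]], dtype=float)

def gnn_policy(perceptions: np.ndarray) -> np.ndarray:
    """Decentralized policy: each defender's action depends only on its own
    perception and the mean of its neighbors' perceptions."""
    deg = adj.sum(axis=1, keepdims=True)
    nbr_mean = (adj @ perceptions) / np.maximum(deg, 1.0)  # mean over neighbors
    return np.tanh(perceptions @ W_self + nbr_mean @ W_nbr)

# Imitation-learning objective: stay close to a centralized expert's actions.
perceptions = rng.normal(size=(N, F))
expert_actions = rng.normal(size=(N, A))            # stand-in for expert output
actions = gnn_policy(perceptions)
bc_loss = np.mean((actions - expert_actions) ** 2)  # behavior-cloning MSE
print(actions.shape, float(bc_loss))
```

Because each row of the output uses only that defender's own row and its graph neighbors, perturbing a non-neighbor's perception leaves a defender's action unchanged, which is what makes the learned policy executable in a decentralized way and lets the same weights generalize to larger team sizes.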
Related papers
- Inroads into Autonomous Network Defence using Explained Reinforcement
Learning [0.5949779668853555]
This paper introduces an end-to-end methodology for studying attack strategies, designing defence agents and explaining their operation.
We use state diagrams and deep reinforcement learning agents trained on different parts of the task and organised in a shallow hierarchy.
Our evaluation shows that the resulting design achieves a substantial performance improvement compared to prior work.
arXiv Detail & Related papers (2023-06-15T17:53:14Z)
- Mastering Percolation-like Games with Deep Learning [0.0]
We devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network.
The objective of the game is to disable all nodes in the fewest number of steps.
We train agents on different definitions of robustness and compare the learned strategies.
arXiv Detail & Related papers (2023-05-12T15:37:45Z)
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Learning Generative Deception Strategies in Combinatorial Masking Games [27.2744631811653]
One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
arXiv Detail & Related papers (2021-09-23T20:42:44Z)
- Scalable Perception-Action-Communication Loops with Convolutional and Graph Neural Networks [208.15591625749272]
We present a perception-action-communication loop design using Vision-based Graph Aggregation and Inference (VGAI).
Our framework is implemented by a cascade of a convolutional and a graph neural network (CNN / GNN), addressing agent-level visual perception and feature learning.
We demonstrate that VGAI yields performance comparable to or better than other decentralized controllers.
arXiv Detail & Related papers (2021-06-24T23:57:21Z)
- Generating Adversarial Examples with Graph Neural Networks [26.74003742013481]
We propose a novel attack based on a graph neural network (GNN) that takes advantage of the strengths of both approaches.
We show that our method beats state-of-the-art adversarial attacks, including the PGD attack, MI-FGSM, and the Carlini-Wagner attack.
We provide a new challenging dataset specifically designed to allow for a more illustrative comparison of adversarial attacks.
arXiv Detail & Related papers (2021-05-30T22:46:41Z)
- Local Critic Training for Model-Parallel Learning of Deep Neural Networks [94.69202357137452]
We propose a novel model-parallel learning method, called local critic training.
We show that the proposed approach successfully decouples the update process of the layer groups for both convolutional neural networks (CNNs) and recurrent neural networks (RNNs).
We also show that networks trained by the proposed method can be used for structural optimization.
arXiv Detail & Related papers (2021-02-03T09:30:45Z)
- Sparsity in Deep Learning: Pruning and growth for efficient inference and training in neural networks [78.47459801017959]
Sparsity can reduce the memory footprint of regular networks to fit mobile devices.
We describe approaches to remove and add elements of neural networks, different training strategies to achieve model sparsity, and mechanisms to exploit sparsity in practice.
arXiv Detail & Related papers (2021-01-31T22:48:50Z)
- Information Obfuscation of Graph Neural Networks [96.8421624921384]
We study the problem of protecting sensitive attributes by information obfuscation when learning with graph structured data.
We propose a framework to locally filter out pre-determined sensitive attributes via adversarial training with the total variation and the Wasserstein distance.
arXiv Detail & Related papers (2020-09-28T17:55:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.