Mastering Percolation-like Games with Deep Learning
- URL: http://arxiv.org/abs/2305.07687v2
- Date: Wed, 28 Jun 2023 20:35:36 GMT
- Title: Mastering Percolation-like Games with Deep Learning
- Authors: Michael M. Danziger, Omkar R. Gojala, Sean P. Cornelius
- Abstract summary: We devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network.
The objective of the game is to disable all nodes in the fewest number of steps.
We train agents on different definitions of robustness and compare the learned strategies.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Though robustness of networks to random attacks has been widely studied,
intentional destruction by an intelligent agent is not tractable with previous
methods. Here we devise a single-player game on a lattice that mimics the logic
of an attacker attempting to destroy a network. The objective of the game is to
disable all nodes in the fewest number of steps. We develop a reinforcement
learning approach using deep Q-learning that is capable of learning to play
this game successfully, and in so doing, to optimally attack a network. Because
the learning algorithm is universal, we train agents on different definitions
of robustness and compare the learned strategies. We find that superficially
similar definitions of robustness induce different strategies in the trained
agent, implying that optimally attacking or defending a network is sensitive to
the particular objective. Our method provides a new approach to understanding
network robustness, with potential applications to other discrete processes in
disordered systems.
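For intuition, a minimal sketch of this kind of pipeline follows: deep Q-learning on a toy single-player lattice game, where the agent removes sites and a site counts as enabled only while it remains connected to the top row. The percolation rule, network architecture, reward, and hyperparameters below are all illustrative assumptions, not the authors' implementation.
```python
# Minimal sketch of deep Q-learning on a toy lattice attack game.
# Everything here (environment rule, net, hyperparameters) is assumed
# for illustration and is not the paper's actual implementation.
import random
import numpy as np
import torch
import torch.nn as nn

L = 5  # lattice side length (kept small for illustration)

def active_mask(occ):
    """Toy percolation rule: a site is 'enabled' while it is occupied
    and connected to the top row through occupied neighbors."""
    seen = np.zeros_like(occ, dtype=bool)
    stack = [(0, j) for j in range(L) if occ[0, j]]
    while stack:
        i, j = stack.pop()
        if seen[i, j]:
            continue
        seen[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < L and 0 <= nj < L and occ[ni, nj] and not seen[ni, nj]:
                stack.append((ni, nj))
    return seen

class QNet(nn.Module):
    """One Q-value per lattice site; input is the flattened occupation state."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(L * L, 128), nn.ReLU(), nn.Linear(128, L * L))
    def forward(self, x):
        return self.net(x)

qnet = QNet()
opt = torch.optim.Adam(qnet.parameters(), lr=1e-3)
gamma, eps = 0.99, 0.1

for episode in range(200):
    occ = np.ones((L, L), dtype=bool)            # start fully occupied
    while active_mask(occ).any():                # play until all sites are disabled
        state = torch.tensor(occ, dtype=torch.float32).flatten()
        legal = torch.from_numpy(np.flatnonzero(occ))
        if random.random() < eps:                # epsilon-greedy exploration
            a = int(legal[random.randrange(len(legal))])
        else:
            with torch.no_grad():
                a = int(legal[qnet(state)[legal].argmax()])
        occ.flat[a] = False                      # "attack": remove one site
        reward = -1.0                            # fewest steps <=> -1 per move
        done = not active_mask(occ).any()
        next_state = torch.tensor(occ, dtype=torch.float32).flatten()
        with torch.no_grad():
            target = torch.tensor(reward)
            if not done:
                nxt = torch.from_numpy(np.flatnonzero(occ))
                target = reward + gamma * qnet(next_state)[nxt].max()
        loss = (qnet(state)[a] - target) ** 2    # one-step temporal-difference error
        opt.zero_grad()
        loss.backward()
        opt.step()
```
One design choice worth noting in this sketch: the network emits one Q-value per lattice site, so "which node to attack next" is read off directly as the argmax over still-occupied sites, and a reward of -1 per move makes the optimal policy the one that disables everything in the fewest steps.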
Related papers
- Graph Neural Networks for Decentralized Multi-Agent Perimeter Defense [111.9039128130633]
We develop an imitation learning framework that learns a mapping from defenders' local perceptions and their communication graph to their actions.
We run perimeter defense games in scenarios with different team sizes and configurations to demonstrate the performance of the learned network.
arXiv Detail & Related papers (2023-01-23T19:35:59Z)
- Learning Decentralized Strategies for a Perimeter Defense Game with Graph Neural Networks [111.9039128130633]
We design a graph neural network-based learning framework to learn a mapping from defenders' local perceptions and the communication graph to defenders' actions.
We demonstrate that our proposed networks stay closer to the expert policy and are superior to other baseline algorithms by capturing more intruders.
arXiv Detail & Related papers (2022-09-24T22:48:51Z)
- Autonomous Attack Mitigation for Industrial Control Systems [25.894883701063055]
Defending computer networks from cyber attack requires timely responses to alerts and threat intelligence.
We present a deep reinforcement learning approach to autonomous response and recovery in large industrial control networks.
arXiv Detail & Related papers (2021-11-03T18:08:06Z)
- Game Theory for Adversarial Attacks and Defenses [0.0]
Adversarial attacks can generate adversarial inputs by applying small but intentionally worst-case perturbations to samples from the dataset.
Some adversarial defense techniques have been developed to improve the security and robustness of models and prevent them from being attacked.
arXiv Detail & Related papers (2021-10-08T07:38:33Z)
- Learning Generative Deception Strategies in Combinatorial Masking Games [27.2744631811653]
One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
arXiv Detail & Related papers (2021-09-23T20:42:44Z)
- The Feasibility and Inevitability of Stealth Attacks [63.14766152741211]
We study new adversarial perturbations that enable an attacker to gain control over decisions in generic Artificial Intelligence systems.
In contrast to adversarial data modification, the attack mechanism we consider here involves alterations to the AI system itself.
arXiv Detail & Related papers (2021-06-26T10:50:07Z)
- Disturbing Reinforcement Learning Agents with Corrupted Rewards [62.997667081978825]
We analyze the effects of different reward-perturbation attack strategies on reinforcement learning algorithms.
We show that smoothly crafted adversarial rewards can mislead the learner, and that with low exploration probability values the learned policy is more robust to corrupted rewards (a toy illustration appears after this list).
arXiv Detail & Related papers (2021-02-12T15:53:48Z)
- Robust Federated Learning with Attack-Adaptive Aggregation [45.60981228410952]
Federated learning is vulnerable to various attacks, such as model poisoning and backdoor attacks.
We propose an attack-adaptive aggregation strategy to defend against various attacks for robust learning.
arXiv Detail & Related papers (2021-02-10T04:23:23Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's responses.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
- HYDRA: Pruning Adversarially Robust Neural Networks [58.061681100058316]
Deep learning faces two key challenges: lack of robustness against adversarial attacks and large neural network size.
We propose to make pruning techniques aware of the robust training objective and let the training objective guide the search for which connections to prune.
We demonstrate that our approach, titled HYDRA, achieves compressed networks with state-of-the-art benign and robust accuracy, simultaneously.
arXiv Detail & Related papers (2020-02-24T19:54:53Z)
- NAttack! Adversarial Attacks to bypass a GAN based classifier trained to detect Network intrusion [0.3007949058551534]
Before the rise of machine learning, network anomalies that could imply an attack were detected using well-crafted rules.
With the advancement of machine learning for network anomaly detection, it is no longer easy for a human to work out how to bypass a cyber-defence system.
In this paper, we show that even if we build a classifier and train it with adversarial examples of network data, adversarial attacks can still successfully break the system.
arXiv Detail & Related papers (2020-02-20T01:54:45Z)
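As a toy illustration of the reward-corruption idea above (Disturbing Reinforcement Learning Agents with Corrupted Rewards), here is a minimal sketch: tabular Q-learning on an assumed five-state chain MDP, in which a smooth perturbation of the observed rewards lures the greedy policy away from the true goal. The MDP, the perturbation, and all constants are illustrative assumptions, not taken from that paper.
```python
# Toy reward-perturbation attack on tabular Q-learning. The chain MDP,
# the perturbation, and the constants are assumptions for illustration.
import random

N_STATES, ACTIONS = 5, (0, 1)            # chain MDP: action 0 = left, 1 = right
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, eps = 0.1, 0.9, 0.1

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    r = 1.0 if s2 == N_STATES - 1 else 0.0   # true reward: reach the right end
    return s2, r

def corrupt(r, a, strength=0.3):
    # Smoothly crafted perturbation: a small constant bonus for the 'wrong'
    # action is enough to lure the greedy policy away from the true goal.
    return r + (strength if a == 0 else 0.0)

for ep in range(2000):
    s = 0
    for _ in range(20):
        a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda x: Q[s][x])
        s2, r = step(s, a)
        r_seen = corrupt(r, a)               # the learner only observes the corrupted reward
        Q[s][a] += alpha * (r_seen + gamma * max(Q[s2]) - Q[s][a])
        s = s2
        if s == N_STATES - 1:                # goal reached: end the episode
            break

# With the perturbation active, the greedy policy tends to drift left
# instead of heading for the true goal at the right end of the chain.
print([max(ACTIONS, key=lambda a: Q[s][a]) for s in range(N_STATES)])
```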
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.