Dealing with Adversarial Player Strategies in the Neural Network Game
iNNk through Ensemble Learning
- URL: http://arxiv.org/abs/2107.02052v1
- Date: Mon, 5 Jul 2021 14:25:44 GMT
- Title: Dealing with Adversarial Player Strategies in the Neural Network Game
iNNk through Ensemble Learning
- Authors: Mathias Löwe, Jennifer Villareale, Evan Freed, Aleksanteri Sladek,
Jichen Zhu, Sebastian Risi
- Abstract summary: In this paper, we focus on the adversarial player strategy aspect in the game iNNk.
We present a method that combines transfer learning and ensemble methods to obtain a data-efficient adaptation to these strategies.
We expect the methods developed in this paper to be useful for the rapidly growing field of NN-based games.
- Score: 10.30864720221571
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Applying neural network (NN) methods in games can lead to various new and
exciting game dynamics not previously possible. However, they also lead to new
challenges such as the lack of large, clean datasets, varying player skill
levels, and changing gameplay strategies. In this paper, we focus on the
adversarial player strategy aspect in the game iNNk, in which players try to
communicate secret code words through drawings with the goal of not being
deciphered by a NN. Some strategies exploit weaknesses in the NN that
consistently trick it into making incorrect classifications, leading to
unbalanced gameplay. We present a method that combines transfer learning and
ensemble methods to obtain a data-efficient adaptation to these strategies.
This combination significantly outperforms the baseline NN across all
adversarial player strategies despite only being trained on a limited set of
adversarial examples. We expect the methods developed in this paper to be
useful for the rapidly growing field of NN-based games, which will require new
approaches to deal with unforeseen player creativity.
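The abstract's core idea of combining transfer learning with ensembling can be illustrated with a minimal sketch. This is not the authors' code: the tiny linear classifier, the toy adversarial data, and all names here are illustrative stand-ins for the paper's drawing-classification NN. The shared "pretrained" weights play the role of transfer learning, several perturbed copies are fine-tuned on a small adversarial set, and their predicted probabilities are averaged to form the ensemble.

```python
# Hypothetical sketch, not the paper's implementation: fine-tune several
# copies of shared "pretrained" weights on a few adversarial examples
# (transfer learning), then average their class probabilities (ensembling).
import numpy as np

rng = np.random.default_rng(0)


def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)


class LinearClassifier:
    """Stand-in for one fine-tuned NN head (2 classes, d features)."""

    def __init__(self, w):
        self.w = w.copy()

    def fine_tune(self, x, y, lr=0.5, steps=200):
        # Plain gradient descent on cross-entropy over the small
        # set of adversarial examples.
        for _ in range(steps):
            p = softmax(x @ self.w)
            grad = x.T @ (p - np.eye(2)[y]) / len(x)
            self.w -= lr * grad

    def predict_proba(self, x):
        return softmax(x @ self.w)


# Shared "pretrained" weights, plus a limited adversarial training set.
d = 4
pretrained = rng.normal(size=(d, 2))
x_adv = rng.normal(size=(16, d))
y_adv = (x_adv[:, 0] > 0).astype(int)  # toy stand-in labels

# Ensemble of perturbed, independently fine-tuned copies.
ensemble = []
for _ in range(5):
    m = LinearClassifier(pretrained + 0.1 * rng.normal(size=(d, 2)))
    m.fine_tune(x_adv, y_adv)
    ensemble.append(m)


def ensemble_predict(x):
    # Soft voting: average member probabilities, then take the argmax.
    probs = np.mean([m.predict_proba(x) for m in ensemble], axis=0)
    return probs.argmax(axis=-1)
```

Averaging probabilities rather than hard votes is one common ensembling choice; the same structure would apply with real convolutional drawing classifiers in place of the linear heads.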
Related papers
- In-Context Exploiter for Extensive-Form Games [38.24471816329584]
We introduce a novel method, In-Context Exploiter (ICE), to train a single model that can act as any player in the game and adaptively exploit opponents entirely by in-context learning.
Our ICE algorithm involves generating diverse opponent strategies, collecting interactive history data by a reinforcement learning algorithm, and training a transformer-based agent within a well-designed curriculum learning framework.
arXiv Detail & Related papers (2024-08-10T14:59:09Z)
- Neural Population Learning beyond Symmetric Zero-sum Games [52.20454809055356]
We introduce NeuPL-JPSRO, a neural population learning algorithm that benefits from transfer learning of skills and converges to a Coarse Correlated Equilibrium (CCE) of the game.
Our work shows that equilibrium convergent population learning can be implemented at scale and in generality.
arXiv Detail & Related papers (2024-01-10T12:56:24Z)
- All by Myself: Learning Individualized Competitive Behaviour with a Contrastive Reinforcement Learning optimization [57.615269148301515]
In a competitive game scenario, a set of agents have to learn decisions that maximize their goals and minimize their adversaries' goals at the same time.
We propose a novel model composed of three neural layers that learn a representation of a competitive game, map the strategy of specific opponents, and learn how to disrupt them.
Our experiments demonstrate that our model achieves better performance when playing against offline, online, and competitive-specific models, in particular when playing against the same opponent multiple times.
arXiv Detail & Related papers (2023-10-02T08:11:07Z)
- Mastering Percolation-like Games with Deep Learning [0.0]
We devise a single-player game on a lattice that mimics the logic of an attacker attempting to destroy a network.
The objective of the game is to disable all nodes in the fewest number of steps.
We train agents on different definitions of robustness and compare the learned strategies.
arXiv Detail & Related papers (2023-05-12T15:37:45Z)
- Finding mixed-strategy equilibria of continuous-action games without gradients using randomized policy networks [83.28949556413717]
We study the problem of computing an approximate Nash equilibrium of a continuous-action game without access to gradients.
We model players' strategies using artificial neural networks.
This paper is the first to solve general continuous-action games with unrestricted mixed strategies and without any gradient information.
arXiv Detail & Related papers (2022-11-29T05:16:41Z)
- Game Theoretic Rating in N-player general-sum games with Equilibria [26.166859475522106]
We propose novel algorithms suitable for N-player, general-sum rating of strategies in normal-form games according to the payoff rating system.
This enables well-established solution concepts, such as equilibria, to be leveraged to efficiently rate strategies in games with complex strategic interactions.
arXiv Detail & Related papers (2022-10-05T12:33:03Z)
- Learning Generative Deception Strategies in Combinatorial Masking Games [27.2744631811653]
One way deception can be employed is through obscuring, or masking, some of the information about how systems are configured.
We present a novel game-theoretic model of the resulting defender-attacker interaction, where the defender chooses a subset of attributes to mask, while the attacker responds by choosing an exploit to execute.
We present a novel highly scalable approach for approximately solving such games by representing the strategies of both players as neural networks.
arXiv Detail & Related papers (2021-09-23T20:42:44Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- L2E: Learning to Exploit Your Opponent [66.66334543946672]
We propose a novel Learning to Exploit framework for implicit opponent modeling.
L2E acquires the ability to exploit opponents by a few interactions with different opponents during training.
We propose a novel opponent strategy generation algorithm that produces effective opponents for training automatically.
arXiv Detail & Related papers (2021-02-18T14:27:59Z)
- iNNk: A Multi-Player Game to Deceive a Neural Network [9.996299325641939]
iNNk is a multiplayer drawing game where human players team up against an NN.
The players need to successfully communicate a secret code word to each other through drawings, without being deciphered by the NN.
arXiv Detail & Related papers (2020-07-17T18:25:10Z)
- Learning to Play Sequential Games versus Unknown Opponents [93.8672371143881]
We consider a repeated sequential game between a learner, who plays first, and an opponent who responds to the chosen action.
We propose a novel algorithm for the learner when playing against an adversarial sequence of opponents.
Our results include regret guarantees for the algorithm that depend on the regularity of the opponent's response.
arXiv Detail & Related papers (2020-07-10T09:33:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.