Deep Policy Networks for NPC Behaviors that Adapt to Changing Design
Parameters in Roguelike Games
- URL: http://arxiv.org/abs/2012.03532v1
- Date: Mon, 7 Dec 2020 08:47:25 GMT
- Title: Deep Policy Networks for NPC Behaviors that Adapt to Changing Design
Parameters in Roguelike Games
- Authors: Alessandro Sestini, Alexander Kuhnle and Andrew D. Bagdanov
- Abstract summary: Turn-based strategy games like Roguelikes, for example, present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Deep Reinforcement Learning (DRL) have largely focused on
improving the performance of agents with the aim of replacing humans in known
and well-defined environments. The use of these techniques as a game design
tool for video game production, where the aim is instead to create Non-Player
Character (NPC) behaviors, has received relatively little attention until
recently. Turn-based strategy games like Roguelikes, for example, present
unique challenges to DRL. In particular, the categorical nature of their
complex game state, composed of many entities with different attributes,
requires agents able to learn how to compare and prioritize these entities.
Moreover, this complexity often leads to agents that overfit to states seen
during training and that are unable to generalize in the face of design changes
made during development. In this paper we propose two network architectures
which, when combined with a \emph{procedural loot generation} system, are able
to better handle complex categorical state spaces and to mitigate the need for
retraining forced by design decisions. The first is based on a dense embedding
of the categorical input space that abstracts the discrete observation model
and renders trained agents more able to generalize. The second proposed
architecture is more general and is based on a Transformer network able to
reason relationally about input and input attributes. Our experimental
evaluation demonstrates that new agents have better adaptation capacity with
respect to a baseline architecture, making this framework more robust to
dynamic gameplay changes during development. Based on the results shown in this
paper, we believe that these solutions represent a step forward towards making
DRL more accessible to the gaming industry.
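The first proposed architecture rests on embedding each categorical attribute into a dense vector so that the policy sees a fixed-size continuous state regardless of which entities the designer adds later. A minimal NumPy sketch of that idea follows; the attribute names, table sizes, and the mean-pooling step are illustrative assumptions, not details taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical attribute vocabularies for Roguelike entities
# (sizes are illustrative, not from the paper).
N_ENTITY_TYPES, N_ITEM_CLASSES, EMBED_DIM = 8, 16, 4

# One dense embedding table per categorical attribute.
type_table = rng.normal(size=(N_ENTITY_TYPES, EMBED_DIM))
item_table = rng.normal(size=(N_ITEM_CLASSES, EMBED_DIM))

def embed_state(entity_types, item_classes):
    """Map discrete entity attributes to one dense state vector.

    Each entity contributes the concatenation of its attribute
    embeddings; entities are then mean-pooled, so the output size
    stays fixed no matter how many entities a design change adds.
    """
    per_entity = np.concatenate(
        [type_table[entity_types], item_table[item_classes]], axis=-1)
    return per_entity.mean(axis=0)

# Two states with different numbers of entities map to vectors
# of the same fixed size.
s1 = embed_state(np.array([0, 3]), np.array([1, 7]))
s2 = embed_state(np.array([0, 3, 5]), np.array([1, 7, 2]))
```

Because the policy consumes only the pooled embedding, retraining pressure from adding new loot or entity variants is reduced, which is the generalization benefit the abstract describes.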
Related papers
- Scaling Laws for Imitation Learning in Single-Agent Games
We investigate whether carefully scaling up model and data size can bring similar improvements in the imitation learning setting for single-agent games.
We first demonstrate our findings on a variety of Atari games, and thereafter focus on the extremely challenging game of NetHack.
We find that IL loss and mean return scale smoothly with the compute budget and are strongly correlated, resulting in power laws for training compute-optimal IL agents.
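A power law of the kind described above can be recovered from (compute, loss) measurements by linear regression in log-log space. The sketch below uses synthetic data; the constants `a_true` and `b_true` are invented for illustration and are not values from the paper:

```python
import numpy as np

# Synthetic (compute, loss) pairs following loss = a * C**(-b),
# standing in for the IL-loss scaling measurements.
a_true, b_true = 5.0, 0.3
compute = np.array([1e6, 1e7, 1e8, 1e9])
loss = a_true * compute ** (-b_true)

# In log space the power law is linear:
# log(loss) = log(a) - b * log(C).
slope, intercept = np.polyfit(np.log(compute), np.log(loss), 1)
b_hat, a_hat = -slope, np.exp(intercept)
```

Fitting in log-log space is the standard way such scaling exponents are estimated, since it turns the power law into an ordinary least-squares problem.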
arXiv Detail & Related papers (2023-07-18T16:43:03Z)
- Probing Transfer in Deep Reinforcement Learning without Task Engineering
We evaluate the use of original game curricula supported by the Atari 2600 console as a heterogeneous transfer benchmark for deep reinforcement learning agents.
Game designers created curricula using combinations of several discrete modifications to the basic versions of games such as Space Invaders, Breakout and Freeway.
We show that zero-shot transfer from the basic games to their variations is possible, but the variance in performance is also largely explained by interactions between factors.
arXiv Detail & Related papers (2022-10-22T13:40:12Z)
- Multi-Game Decision Transformers
We show that a single transformer-based model can play a suite of up to 46 Atari games simultaneously at close-to-human performance.
We compare several approaches in this multi-game setting, such as online and offline RL methods and behavioral cloning.
We find that our Multi-Game Decision Transformer models offer the best scalability and performance.
arXiv Detail & Related papers (2022-05-30T16:55:38Z)
- Improving Sample Efficiency of Value Based Models Using Attention and Vision Transformers
We introduce a deep reinforcement learning architecture whose purpose is to increase sample efficiency without sacrificing performance.
We propose a visually attentive model that uses transformers to learn a self-attention mechanism on the feature maps of the state representation.
We demonstrate empirically that this architecture improves sample complexity for several Atari environments, while also achieving better performance in some of the games.
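Self-attention over the positions of a flattened feature map, as described above, can be sketched in plain NumPy. The dimensions and random weights here are illustrative stand-ins; the actual model uses learned transformer layers:

```python
import numpy as np

rng = np.random.default_rng(0)

def self_attention(x, wq, wk, wv):
    """Scaled dot-product self-attention over feature-map positions.

    x: (n_positions, d) flattened spatial feature map; every position
    attends to every other, letting the network weight salient regions
    of the state representation.
    """
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[-1])
    # Row-wise softmax gives each position a distribution over positions.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v

d = 8
x = rng.normal(size=(16, d))            # e.g. a 4x4 feature map, flattened
wq, wk, wv = (rng.normal(size=(d, d)) for _ in range(3))
out = self_attention(x, wq, wk, wv)     # same shape as the input map
```

The output keeps the per-position layout, so it can be fed onward to the value head just like the original convolutional features.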
arXiv Detail & Related papers (2022-02-01T19:03:03Z)
- Mimicking Playstyle by Adapting Parameterized Behavior Trees in RTS Games
Behavior Trees (BTs) have impacted the field of Artificial Intelligence (AI) in games. However, the complexity of handcrafted BTs has become barely tractable and error-prone.
Recent trends in the field focused on automatic creation of AI-agents.
We present a novel approach to semi-automatic construction of AI agents that mimic and generalize given human gameplay.
arXiv Detail & Related papers (2021-11-23T20:36:28Z)
- Goal-Directed Design Agents: Integrating Visual Imitation with One-Step Lookahead Optimization for Generative Design
This note builds on DLAgents to develop goal-directed agents capable of enhancing learned strategies for sequentially generating designs.
Goal-directed DLAgents can employ human strategies learned from data while optimizing an objective function.
This illustrates a design agent framework that can efficiently use feedback to not only enhance learned design strategies but also adapt to unseen design problems.
arXiv Detail & Related papers (2021-10-07T07:13:20Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games
We introduce DeepCrawl, a fully-playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in videogames.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Learning to Simulate Dynamic Environments with GameGAN
In this paper, we aim to learn a simulator by simply watching an agent interact with an environment.
We introduce GameGAN, a generative model that learns to visually imitate a desired game by ingesting screenplay and keyboard actions during training.
arXiv Detail & Related papers (2020-05-25T14:10:17Z)
- Learn to Interpret Atari Agents
Region-sensitive Rainbow (RS-Rainbow) is an end-to-end trainable network based on the original Rainbow, a powerful deep Q-network agent.
arXiv Detail & Related papers (2018-12-29T03:35:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.