Rinascimento: searching the behaviour space of Splendor
- URL: http://arxiv.org/abs/2106.08371v1
- Date: Tue, 15 Jun 2021 18:46:57 GMT
- Title: Rinascimento: searching the behaviour space of Splendor
- Authors: Ivan Bravi and Simon Lucas
- Abstract summary: This research aims to map the behavioural space (BSpace) of a game using a general method.
In particular, the use of event-value functions has generally shown a remarkable improvement in the coverage of the BSpace compared to agents based on classic score-based reward signals.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The use of Artificial Intelligence (AI) for play-testing is still on the
sidelines of main applications of AI in games compared to performance-oriented
game-playing. One of the main purposes of play-testing a game is gathering data
on the gameplay, highlighting good and bad features of the design of the game,
providing useful insight to the game designers for improving the design. Using
AI agents has the potential to speed up the process dramatically. The purpose
of this research is to map the behavioural space (BSpace) of a game by using a
general method. Using the MAP-Elites algorithm, we search the hyperparameter
space of Rinascimento AI agents and map it to the BSpace defined by several
behavioural metrics. This methodology was able to highlight both exemplary and
degenerate behaviours in the original game design of Splendor and two
variations. In particular, the use of event-value functions has generally shown
a remarkable improvement in the coverage of the BSpace compared to agents based
on classic score-based reward signals.
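The abstract's core loop — searching the hyperparameter space of game-playing agents with MAP-Elites and binning the results into a behaviour space — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `evaluate` function, behaviour metrics, and fitness proxy are hypothetical stand-ins for running an actual Rinascimento agent.

```python
import random

# Minimal MAP-Elites sketch: illuminate a behaviour space (BSpace) by
# keeping, per behaviour cell, the best-performing hyperparameter vector.
# evaluate() is a hypothetical stand-in for playing Splendor with an
# agent configured by `params` and measuring its behavioural metrics.

GRID = 10  # resolution of each behavioural dimension

def evaluate(params):
    """Return (fitness, behaviour descriptor in [0,1]^2) for a parameter vector."""
    aggressiveness = (params[0] + params[1]) / 2   # toy behavioural metric 1
    hoarding = abs(params[0] - params[1])          # toy behavioural metric 2
    fitness = 1.0 - (params[0] - 0.7) ** 2         # toy win-rate proxy
    return fitness, (aggressiveness, hoarding)

def cell(behaviour):
    """Discretise a behaviour descriptor into a grid cell index."""
    return tuple(min(int(b * GRID), GRID - 1) for b in behaviour)

def map_elites(iterations=2000, seed=0):
    rng = random.Random(seed)
    archive = {}  # cell index -> (fitness, params) of that cell's elite
    for _ in range(iterations):
        if archive and rng.random() < 0.9:
            # mutate a randomly chosen existing elite
            _, parent = rng.choice(list(archive.values()))
            params = [min(1.0, max(0.0, p + rng.gauss(0, 0.1))) for p in parent]
        else:
            # random restart to seed or re-seed the archive
            params = [rng.random(), rng.random()]
        fitness, behaviour = evaluate(params)
        c = cell(behaviour)
        if c not in archive or fitness > archive[c][0]:
            archive[c] = (fitness, params)
    return archive

archive = map_elites()
print(f"BSpace coverage: {len(archive)} / {GRID * GRID} cells")
```

The fraction of archive cells filled is the "coverage of the BSpace" the abstract refers to; swapping the score-based `fitness` for an event-value function changes which parameter vectors survive and, per the paper's claim, how much of the grid gets filled.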
Related papers
- DanZero+: Dominating the GuanDan Game through Reinforcement Learning [95.90682269990705]
We develop an AI program for an exceptionally complex and popular card game called GuanDan.
We first put forward an AI program named DanZero for this game.
To further enhance the AI's capabilities, we apply a policy-based reinforcement learning algorithm to GuanDan.
arXiv Detail & Related papers (2023-12-05T08:07:32Z)
- Preference-conditioned Pixel-based AI Agent For Game Testing [1.5059676044537105]
Game-testing AI agents that learn by interaction with the environment have the potential to mitigate these challenges.
This paper proposes an agent design that mainly depends on pixel-based state observations while exploring the environment conditioned on a user's preference.
Our agent significantly outperforms state-of-the-art pixel-based game testing agents over exploration coverage and test execution quality when evaluated on a complex open-world environment resembling many aspects of real AAA games.
arXiv Detail & Related papers (2023-08-18T04:19:36Z)
- Generative Personas That Behave and Experience Like Humans [3.611888922173257]
Generative AI agents attempt to imitate particular playing behaviors represented as rules, rewards, or human demonstrations.
We extend the notion of behavioral procedural personas to cater for player experience, thus examining generative agents that can both behave and experience their game as humans would.
Our findings suggest that the generated agents exhibit distinctive play styles and experience responses of the human personas they were designed to imitate.
arXiv Detail & Related papers (2022-08-26T12:04:53Z)
- CCPT: Automatic Gameplay Testing and Validation with Curiosity-Conditioned Proximal Trajectories [65.35714948506032]
The Curiosity-Conditioned Proximal Trajectories (CCPT) method combines curiosity and imitation learning to train agents to explore.
We show how CCPT can explore complex environments, discover gameplay issues and design oversights in the process, and recognize and highlight them directly to game designers.
arXiv Detail & Related papers (2022-02-21T09:08:33Z)
- Spatial State-Action Features for General Games [5.849736173068868]
We formulate a design and efficient implementation of spatial state-action features for general games.
These are patterns that can be trained to incentivise or disincentivise actions based on whether or not they match variables of the state in a local area.
We propose an efficient approach for evaluating active features for any given set of features.
arXiv Detail & Related papers (2022-01-17T13:34:04Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so that a quality-diversity algorithm (MAP-Elites) achieves different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- An Empirical Study on the Generalization Power of Neural Representations Learned via Visual Guessing Games [79.23847247132345]
This work investigates how well an artificial agent can benefit from playing guessing games when later asked to perform on novel NLP downstream tasks such as Visual Question Answering (VQA).
We propose two ways to exploit playing guessing games: 1) a supervised learning scenario in which the agent learns to mimic successful guessing games, and 2) a novel way for an agent to play by itself, called Self-play via Iterated Experience Learning (SPIEL).
arXiv Detail & Related papers (2021-01-31T10:30:48Z)
- Deep Policy Networks for NPC Behaviors that Adapt to Changing Design Parameters in Roguelike Games [137.86426963572214]
Turn-based strategy games such as Roguelikes present unique challenges to Deep Reinforcement Learning (DRL).
We propose two network architectures to better handle complex categorical state spaces and to mitigate the need for retraining forced by design decisions.
arXiv Detail & Related papers (2020-12-07T08:47:25Z)
- Navigating the Landscape of Multiplayer Games [20.483315340460127]
We show how network measures applied to response graphs of large-scale games enable the creation of a landscape of games.
We illustrate our findings in domains ranging from canonical games to complex empirical games capturing the performance of trained agents pitted against one another.
arXiv Detail & Related papers (2020-05-04T16:58:17Z)
- Disentangling Controllable Object through Video Prediction Improves Visual Reinforcement Learning [82.25034245150582]
In many vision-based reinforcement learning problems, the agent controls a movable object in its visual field.
We propose an end-to-end learning framework to disentangle the controllable object from the observation signal.
The disentangled representation is shown to be useful for RL as additional observation channels to the agent.
arXiv Detail & Related papers (2020-02-21T05:43:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.