Inducing game rules from varying quality game play
- URL: http://arxiv.org/abs/2008.01664v1
- Date: Tue, 4 Aug 2020 15:46:57 GMT
- Title: Inducing game rules from varying quality game play
- Authors: Alastair Flynn
- Abstract summary: General Game Playing (GGP) is a framework in which an artificial intelligence program is required to play a variety of games successfully.
IGGP is the problem of inducing general game rules from specific game observations.
We use Sancho, the 2014 GGP competition winner, to generate intelligent game traces for a large number of games.
We then use the ILP systems Metagol, Aleph and ILASP to induce game rules from the traces.
- Score: 0.0
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: General Game Playing (GGP) is a framework in which an artificial intelligence
program is required to play a variety of games successfully. It acts as a test
bed for AI and motivator of research. The AI is given a random game description
at runtime which it then plays. The framework includes repositories of game
rules. The Inductive General Game Playing (IGGP) problem challenges machine
learning systems to learn these GGP game rules by watching the game being
played. In other words, IGGP is the problem of inducing general game rules from
specific game observations. Inductive Logic Programming (ILP) has been shown to
be a promising approach to this problem, though it remains a hard problem for
ILP systems. Existing work on IGGP has always assumed that the game player
being observed makes random moves. This is not representative of how a human
learns to play a game. With random gameplay, situations that would normally be
encountered when humans play are not present.
To address this limitation, we analyse the effect of using intelligent versus
random gameplay traces as well as the effect of varying the number of traces in
the training set. We use Sancho, the 2014 GGP competition winner, to generate
intelligent game traces for a large number of games. We then use the ILP
systems Metagol, Aleph and ILASP to induce game rules from the traces. We
train and test the systems on combinations of intelligent and random data
including a mixture of both. We also vary the volume of training data. Our
results show that whilst some games were learned more effectively in some of
the experiments than others, no overall trend was statistically significant. The
implications of this work are that varying the quality of training data as
described in this paper has strong effects on the accuracy of the learned game
rules; however, one solution does not work for all games.
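The IGGP setting described above can be illustrated with a toy sketch. This is not the paper's setup (Metagol, Aleph and ILASP induce logic programs from GDL observations); it is a hypothetical counting game used only to show how a candidate next-state rule is scored against observed (state, action, next-state) transitions, and why random versus "intelligent" traces can expose different situations. All names and the game itself are invented for illustration.

```python
import random

def play_trace(policy, start=0, goal=10, max_steps=12):
    """Generate one game trace as (state, action, next_state) triples."""
    trace, state = [], start
    for _ in range(max_steps):
        if state >= goal:
            break
        action = policy(state, goal)        # step size: 1 or 2
        nxt = min(state + action, goal)     # moves past the goal are capped
        trace.append((state, action, nxt))
        state = nxt
    return trace

def random_player(state, goal):
    """Random gameplay, as assumed in prior IGGP work."""
    return random.choice([1, 2])

def greedy_player(state, goal):
    """'Intelligent' gameplay: never overshoots the goal."""
    return 2 if goal - state >= 2 else 1

def candidate_rule(state, action):
    """A hypothesised next-state rule a learner might induce.

    It ignores the goal cap, so it is only wrong on overshoot
    situations -- which greedy play never produces.
    """
    return state + action

def accuracy(rule, traces):
    """Fraction of observed transitions the candidate rule predicts."""
    obs = [t for tr in traces for t in tr]
    return sum(rule(s, a) == nxt for s, a, nxt in obs) / len(obs)

random.seed(0)
mixed = [play_trace(random_player) for _ in range(50)] + \
        [play_trace(greedy_player) for _ in range(50)]
print(f"candidate rule accuracy on mixed traces: "
      f"{accuracy(candidate_rule, mixed):.2f}")
```

In this toy, greedy traces never reach the overshoot edge case, so the flawed rule looks perfect on them, while random traces can expose the error; this mirrors the paper's point that the quality of the observed gameplay changes which rules are learnable and how they are evaluated.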
Related papers
- People use fast, goal-directed simulation to reason about novel games [75.25089384921557]
We study how people reason about a range of simple but novel connect-n style board games.
We ask people to judge how fair and how fun the games are from very little experience.
arXiv Detail & Related papers (2024-07-19T07:59:04Z)
- Games of Knightian Uncertainty as AGI testbeds [2.66269503676104]
We argue that for game research to become relevant again to the AGI pathway, we need to be able to address Knightian uncertainty.
Agents need to be able to adapt to rapid changes in game rules on the fly with no warning, no previous data, and no model access.
arXiv Detail & Related papers (2024-06-26T08:52:34Z)
- Reinforcement Learning for High-Level Strategic Control in Tower Defense Games [47.618236610219554]
In strategy games, one of the most important aspects of game design is maintaining a sense of challenge for players.
We propose an automated approach that combines traditional scripted methods with reinforcement learning.
Results show that combining a learned approach, such as reinforcement learning, with a scripted AI produces a higher-performing and more robust agent than either approach alone.
arXiv Detail & Related papers (2024-06-12T08:06:31Z)
- Mastering the Game of Guandan with Deep Reinforcement Learning and Behavior Regulating [16.718186690675164]
We propose a framework named GuanZero for AI agents to master the game of Guandan.
The main contribution of this paper is about regulating agents' behavior through a carefully designed neural network encoding scheme.
arXiv Detail & Related papers (2024-02-21T07:26:06Z)
- Collusion Detection in Team-Based Multiplayer Games [57.153233321515984]
We propose a system that detects colluding behaviors in team-based multiplayer games.
The proposed method analyzes the players' social relationships paired with their in-game behavioral patterns.
We then automate the detection using Isolation Forest, an unsupervised learning technique specialized in highlighting outliers.
arXiv Detail & Related papers (2022-03-10T02:37:39Z)
- Public Information Representation for Adversarial Team Games [31.29335755664997]
A key challenge of adversarial team games resides in the asymmetric information available to the team members during play.
Our algorithms convert a sequential team game with adversaries to a classical two-player zero-sum game.
Due to the NP-hard nature of the problem, the resulting Public Team game may be exponentially larger than the original one.
arXiv Detail & Related papers (2022-01-25T15:07:12Z)
- An Unsupervised Video Game Playstyle Metric via State Discretization [20.48689549093258]
We propose the first metric for video game playstyles directly from the game observations and actions.
Our proposed method is built upon a novel scheme of learning discrete representations.
We demonstrate high playstyle accuracy of our metric in experiments on some video game platforms.
arXiv Detail & Related papers (2021-10-03T08:30:51Z)
- Generating Diverse and Competitive Play-Styles for Strategy Games [58.896302717975445]
We propose Portfolio Monte Carlo Tree Search with Progressive Unpruning for playing a turn-based strategy game (Tribes).
We show how it can be parameterized so a quality-diversity algorithm (MAP-Elites) is used to achieve different play-styles while keeping a competitive level of play.
Our results show that this algorithm is capable of achieving these goals even for an extensive collection of game levels beyond those used for training.
arXiv Detail & Related papers (2021-04-17T20:33:24Z)
- TotalBotWar: A New Pseudo Real-time Multi-action Game Challenge and Competition for AI [62.997667081978825]
TotalBotWar is a new pseudo real-time multi-action challenge for game AI.
The game is based on the popular TotalWar game series, in which players manage an army to defeat their opponent's.
arXiv Detail & Related papers (2020-09-18T09:13:56Z)
- Evaluating Generalisation in General Video Game Playing [1.160208922584163]
This paper focuses on the challenge of the GVGAI learning track in which 3 games are selected and 2 levels are given for training, while 3 hidden levels are left for evaluation.
This setup poses a difficult challenge for current Reinforcement Learning (RL) algorithms, as they typically require much more data.
This work investigates 3 versions of the Advantage Actor-Critic (A2C) algorithm trained on a maximum of 2 levels from the available 5 from the GVGAI framework and compares their performance on all levels.
arXiv Detail & Related papers (2020-05-22T15:57:52Z)
- Efficient exploration of zero-sum stochastic games [83.28949556413717]
We investigate the increasingly important and common game-solving setting where we do not have an explicit description of the game but only oracle access to it through gameplay.
During a limited-duration learning phase, the algorithm can control the actions of both players in order to try to learn the game and how to play it well.
Our motivation is to quickly learn strategies that have low exploitability in situations where evaluating the payoffs of a queried strategy profile is costly.
arXiv Detail & Related papers (2020-02-24T20:30:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.