Fast and Knowledge-Free Deep Learning for General Game Playing (Student Abstract)
- URL: http://arxiv.org/abs/2312.14121v1
- Date: Thu, 21 Dec 2023 18:44:19 GMT
- Title: Fast and Knowledge-Free Deep Learning for General Game Playing (Student Abstract)
- Authors: Michał Maras, Michał Kępa, Jakub Kowalski, Marek Szykuła
- Abstract summary: We develop a method of adapting the AlphaZero model to General Game Playing (GGP).
The dataset generation uses MCTS playing instead of self-play; only the value network is used, and attention layers replace the convolutional ones.
- Score: 1.9750759888062657
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We develop a method of adapting the AlphaZero model to General Game Playing
(GGP) that focuses on faster model generation and requires less knowledge to be
extracted from the game rules. The dataset generation uses MCTS playing instead
of self-play; only the value network is used, and attention layers replace the
convolutional ones. This allows us to abandon any assumptions about the action
space and board topology. We implement the method within the Regular Boardgames
GGP system and show that we can efficiently build models that outperform the UCT
baseline for most games.
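To make the architectural idea concrete, here is a minimal PyTorch sketch of an attention-based value network that treats board cells as a set of tokens, which is what lets the approach drop assumptions about action space and board topology. All class names, layer sizes, and the embedding scheme are illustrative assumptions, not details taken from the paper.
```python
# A minimal sketch (PyTorch) of the kind of attention-based value network the
# abstract describes: board cells become tokens, so no convolutional structure
# or fixed board topology is assumed. Names and sizes are illustrative only.
import torch
import torch.nn as nn

class AttentionValueNet(nn.Module):
    def __init__(self, num_piece_types: int, max_cells: int, d_model: int = 64):
        super().__init__()
        self.piece_emb = nn.Embedding(num_piece_types, d_model)
        self.pos_emb = nn.Embedding(max_cells, d_model)  # learned, topology-free
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.value_head = nn.Linear(d_model, 1)

    def forward(self, cells: torch.Tensor) -> torch.Tensor:
        # cells: (batch, num_cells) integer piece ids, one token per board cell
        pos = torch.arange(cells.size(1), device=cells.device)
        x = self.piece_emb(cells) + self.pos_emb(pos)
        x = self.encoder(x)                       # self-attention over all cells
        return torch.tanh(self.value_head(x.mean(dim=1)))  # value in [-1, 1]

# Training data would come from MCTS playouts (state, game outcome) rather
# than AlphaZero-style self-play; a policy head is deliberately absent.
net = AttentionValueNet(num_piece_types=13, max_cells=64)
value = net(torch.randint(0, 13, (2, 64)))  # e.g., two 8x8 chess-like positions
```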
Related papers
- Autoverse: An Evolvable Game Language for Learning Robust Embodied Agents [2.624282086797512]
We introduce Autoverse, an evolvable, domain-specific language for single-player 2D grid-based games.
We demonstrate its use as a scalable training ground for Open-Ended Learning (OEL) algorithms.
arXiv Detail & Related papers (2024-07-05T02:18:02Z)
- Instruction-Driven Game Engines on Large Language Models [59.280666591243154]
The IDGE project aims to democratize game development by enabling a large language model to follow free-form game rules.
We train the IDGE in a curriculum manner that progressively increases the model's exposure to complex scenarios.
Our initial progress lies in developing an IDGE for Poker, a universally cherished card game.
arXiv Detail & Related papers (2024-03-30T08:02:16Z)
- Accelerate Multi-Agent Reinforcement Learning in Zero-Sum Games with Subgame Curriculum Learning [65.36326734799587]
We present a novel subgame curriculum learning framework for zero-sum games.
It adopts an adaptive initial state distribution by resetting agents to some previously visited states.
We derive a subgame selection metric that approximates the squared distance to NE values.
arXiv Detail & Related papers (2023-10-07T13:09:37Z)
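To illustrate the reset-based curriculum in the entry above, here is a toy Python sketch of a visited-state buffer that samples reset states by a squared value-change proxy. The paper's actual NE-distance metric and sampling scheme are not reproduced; all names and the weighting rule are hypothetical stand-ins.
```python
# A toy sketch: keep a buffer of previously visited states and preferentially
# reset training episodes to states whose value estimates still seem far from
# converged. The squared change in the value estimate between visits is only
# an illustrative proxy for the paper's squared-distance-to-NE metric.
import random
from dataclasses import dataclass, field

@dataclass
class SubgameBuffer:
    states: list = field(default_factory=list)   # visited states
    values: list = field(default_factory=list)   # last value estimate per state
    deltas: list = field(default_factory=list)   # squared change since prior visit

    def visit(self, state, value_estimate: float) -> None:
        if state in self.states:
            i = self.states.index(state)
            self.deltas[i] = (value_estimate - self.values[i]) ** 2
            self.values[i] = value_estimate
        else:
            self.states.append(state)
            self.values.append(value_estimate)
            self.deltas.append(1.0)  # states seen only once get high priority

    def sample_reset_state(self):
        # States with unstable value estimates are sampled more often,
        # focusing training on subgames that are not yet solved.
        return random.choices(self.states, weights=self.deltas, k=1)[0]

buf = SubgameBuffer()
buf.visit("opening", 0.0)
buf.visit("endgame", 0.9)
buf.visit("opening", 0.5)            # value moved a lot -> high priority
start = buf.sample_reset_state()
```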
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Unsupervised Hebbian Learning on Point Sets in StarCraft II [12.095363582092904]
We present a novel Hebbian learning method to extract the global feature of point sets in StarCraft II game units.
Our model consists of an encoder, an LSTM, and a decoder, and the encoder is trained with an unsupervised Hebbian method.
arXiv Detail & Related papers (2022-07-13T13:09:48Z)
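As a concrete example of the unsupervised idea above, here is a minimal numpy sketch that trains an encoder on point-set features with a local Hebbian update (Oja's subspace rule here), with no gradients or labels. The LSTM and decoder stages of the cited model are omitted; the dimensions and the specific rule are illustrative assumptions.
```python
# Hebbian (Oja's subspace rule) training of a linear encoder on a point set,
# followed by pooling over points to get a permutation-invariant global
# feature. Illustrative only; not the cited paper's exact architecture.
import numpy as np

rng = np.random.default_rng(0)
d_in, d_out, lr = 8, 4, 0.01             # per-point feature dim -> code dim
W = rng.normal(scale=0.1, size=(d_out, d_in))

def hebbian_step(W: np.ndarray, x: np.ndarray) -> np.ndarray:
    # Oja's rule: dW = lr * (y x^T - (y y^T) W); keeps the weights bounded
    # while pushing rows of W toward the principal subspace of the inputs.
    y = W @ x
    return W + lr * (np.outer(y, x) - np.outer(y, y) @ W)

# A "point set" of game units: each row is one unit's feature vector.
points = rng.normal(size=(32, d_in))
for x in points:
    W = hebbian_step(W, x)

# Max-pooling the per-point codes over the set is order-independent, which is
# what makes the global feature suitable for variable numbers of units.
global_feature = np.maximum(W @ points.T, 0).max(axis=1)
```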
- Train on Small, Play the Large: Scaling Up Board Games with AlphaZero and GNN [23.854093182195246]
Playing board games is considered a major challenge for both humans and AI researchers.
In this work, we look at the board as a graph and combine a graph neural network architecture inside the AlphaZero framework.
Our model can be trained quickly to play different challenging board games on multiple board sizes, without using any domain knowledge.
arXiv Detail & Related papers (2021-07-18T08:36:00Z)
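The "board as a graph" idea above can be sketched with a small message-passing network whose weights are shared across nodes, so the same model runs on any board size. The layer sizes, aggregation rule, and two-head output below are generic assumptions, not the cited architecture.
```python
# A size-agnostic board GNN in the AlphaZero mold: per-node policy logits plus
# a pooled scalar value. Because nothing is tied to a board size, a model
# trained on a small board's graph can be evaluated on a larger one.
import torch
import torch.nn as nn

class BoardGNN(nn.Module):
    def __init__(self, in_dim: int = 3, hidden: int = 32, rounds: int = 3):
        super().__init__()
        self.inp = nn.Linear(in_dim, hidden)
        self.msg = nn.Linear(hidden, hidden)
        self.rounds = rounds
        self.policy_head = nn.Linear(hidden, 1)  # one logit per node/square
        self.value_head = nn.Linear(hidden, 1)

    def forward(self, x: torch.Tensor, adj: torch.Tensor):
        # x: (nodes, in_dim) square features; adj: (nodes, nodes) 0/1 adjacency
        h = torch.relu(self.inp(x))
        deg = adj.sum(dim=1, keepdim=True).clamp(min=1)
        for _ in range(self.rounds):
            h = torch.relu(h + (adj @ self.msg(h)) / deg)  # mean-aggregate neighbours
        policy = self.policy_head(h).squeeze(-1)           # size follows the graph
        value = torch.tanh(self.value_head(h.mean(dim=0)))
        return policy, value

net = BoardGNN()
n = 25                                    # e.g., a 5x5 board flattened to nodes
adj = (torch.rand(n, n) < 0.2).float()    # toy adjacency for demonstration
policy, value = net(torch.rand(n, 3), adj)
```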
- Combining Off and On-Policy Training in Model-Based Reinforcement Learning [77.34726150561087]
We propose a way to obtain off-policy targets using data from simulated games in MuZero.
Our results show that these targets speed up the training process and lead to faster convergence and higher rewards.
arXiv Detail & Related papers (2021-02-24T10:47:26Z)
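An illustrative sketch of the off-policy-target idea above: stored (possibly simulated) trajectories are turned into n-step value targets that bootstrap from the current value function, so old data keeps yielding fresh targets. This is a generic construction, not the paper's exact recipe; the function name and defaults are hypothetical.
```python
# Generic n-step bootstrapped value targets computed from a replayed
# trajectory, using the *current* value function for the bootstrap term.
from typing import Callable, List

def n_step_targets(rewards: List[float], states: List[object],
                   value_fn: Callable[[object], float],
                   n: int = 5, gamma: float = 0.997) -> List[float]:
    """Target for step t: n discounted rewards plus a bootstrapped value of
    the state n steps ahead (truncated at episode end)."""
    T = len(rewards)
    targets = []
    for t in range(T):
        horizon = min(t + n, T)
        g = sum(gamma ** (k - t) * rewards[k] for k in range(t, horizon))
        if horizon < T:  # bootstrap only if the episode continues
            g += gamma ** (horizon - t) * value_fn(states[horizon])
        targets.append(g)
    return targets

# Usage with a stale simulated game and a (hypothetical) current value net:
rewards = [0.0, 0.0, 1.0, 0.0]
states = ["s0", "s1", "s2", "s3"]
targets = n_step_targets(rewards, states, value_fn=lambda s: 0.1, n=2)
```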
- Efficient Reasoning in Regular Boardgames [2.909363382704072]
We present the technical side of reasoning in the Regular Boardgames (RBG) language.
RBG serves as a research tool that aims to aid in the development of generalized algorithms for knowledge inference, analysis, generation, learning, and playing games.
arXiv Detail & Related papers (2020-06-15T11:42:08Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
arXiv Detail & Related papers (2019-03-01T15:40:19Z)
This list is automatically generated from the titles and abstracts of the papers on this site.