Ludax: A GPU-Accelerated Domain Specific Language for Board Games
- URL: http://arxiv.org/abs/2506.22609v1
- Date: Fri, 27 Jun 2025 20:15:53 GMT
- Title: Ludax: A GPU-Accelerated Domain Specific Language for Board Games
- Authors: Graham Todd, Alexander G. Padula, Dennis J. N. J. Soemers, Julian Togelius
- Abstract summary: Ludax is a domain-specific language for board games which automatically compiles into hardware-accelerated code. We envision Ludax as a tool to help accelerate games research generally, from RL to cognitive science.
- Score: 44.45953630612019
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Games have long been used as benchmarks and testing environments for research in artificial intelligence. A key step in supporting this research was the development of game description languages: frameworks that compile domain-specific code into playable and simulatable game environments, allowing researchers to generalize their algorithms and approaches across multiple games without having to manually implement each one. More recently, progress in reinforcement learning (RL) has been largely driven by advances in hardware acceleration. Libraries like JAX allow practitioners to take full advantage of cutting-edge computing hardware, often speeding up training and testing by orders of magnitude. Here, we present a synthesis of these strands of research: a domain-specific language for board games which automatically compiles into hardware-accelerated code. Our framework, Ludax, combines the generality of game description languages with the speed of modern parallel processing hardware and is designed to fit neatly into existing deep learning pipelines. We envision Ludax as a tool to help accelerate games research generally, from RL to cognitive science, by enabling rapid simulation and providing a flexible representation scheme. We present a detailed breakdown of Ludax's description language and technical notes on the compilation process, along with speed benchmarking and a demonstration of training RL agents. The Ludax framework, along with implementations of existing board games, is open-source and freely available.
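The abstract describes compiling game descriptions into hardware-accelerated JAX code without spelling out the API, so here is a minimal, hypothetical sketch of the execution pattern such frameworks build on. The toy two-player game, `step`, and `rollout` below are invented for illustration and are not Ludax's actual API: a pure step function is rolled out with `jax.lax.scan`, then `jax.vmap` and `jax.jit` run thousands of games in parallel on the accelerator, which is where the orders-of-magnitude simulation speedups come from.

```python
# Hypothetical sketch, not the Ludax API: the pattern behind JAX-accelerated
# game frameworks. A pure step function for a toy two-player game (first to
# TARGET points wins; each turn scores a random 1-3) is rolled out with
# jax.lax.scan, then jax.vmap + jax.jit simulate thousands of games at once.
import jax
import jax.numpy as jnp

N_ENVS = 4096    # games simulated in parallel
TARGET = 10      # winning score in the toy game
MAX_TURNS = 32   # enough turns to guarantee every game finishes

def step(state, key):
    scores, player, done = state
    roll = jax.random.randint(key, (), 1, 4)          # move worth 1-3 points
    scores = scores.at[player].add(jnp.where(done, 0, roll))
    done = done | (scores[player] >= TARGET)
    player = jnp.where(done, player, 1 - player)      # alternate turns
    return (scores, player, done)

def rollout(key):
    init = (jnp.zeros(2, dtype=jnp.int32),            # scores
            jnp.array(0, dtype=jnp.int32),            # player to move
            jnp.array(False))                         # game over?
    keys = jax.random.split(key, MAX_TURNS)
    final, _ = jax.lax.scan(lambda s, k: (step(s, k), None), init, keys)
    return final

keys = jax.random.split(jax.random.PRNGKey(0), N_ENVS)  # one key per game
scores, players, done = jax.jit(jax.vmap(rollout))(keys)
print("finished games:", int(done.sum()), "of", N_ENVS)
```

In Ludax's case, per the abstract, step logic like this would be generated by compiling a game description rather than written by hand, which is what lets the same pipeline generalize across games.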
Related papers
- Assistax: A Hardware-Accelerated Reinforcement Learning Benchmark for Assistive Robotics [18.70896736010314]
Games have dominated reinforcement learning benchmarks because they present relevant challenges, are inexpensive to run and easy to understand.
We introduce Assistax: an open-source benchmark designed to address challenges arising in assistive robotics tasks.
In terms of open-loop wall-clock time, Assistax runs up to $370\times$ faster when vectorising training runs compared to CPU-based alternatives.
arXiv Detail & Related papers (2025-07-29T09:49:11Z)
- Cross Language Soccer Framework: An Open Source Framework for the RoboCup 2D Soccer Simulation [0.4660328753262075]
RoboCup Soccer Simulation 2D (SS2D) research is hampered by the complexity of existing C++-based codebases like Helios, Cyrus, and Gliders.
This development paper introduces a transformative solution: a gRPC-based, language-agnostic framework that seamlessly integrates with the high-performance Helios base code.
arXiv Detail & Related papers (2024-06-09T03:11:40Z)
- LILO: Learning Interpretable Libraries by Compressing and Documenting Code [71.55208585024198]
We introduce LILO, a neurosymbolic framework that iteratively synthesizes, compresses, and documents code.
LILO combines LLM-guided program synthesis with recent algorithmic advances in automated refactoring from Stitch.
We find that AutoDoc boosts performance by helping LILO's synthesizer to interpret and deploy learned abstractions.
arXiv Detail & Related papers (2023-10-30T17:55:02Z)
- LuckyMera: a Modular AI Framework for Building Hybrid NetHack Agents [7.23273667916516]
Roguelike video games offer a good trade-off between environment complexity and computational cost.
We present LuckyMera, a flexible, modular, extensible and configurable AI framework built around NetHack.
LuckyMera comes with a set of off-the-shelf symbolic and neural modules (called "skills"): these modules can be either hard-coded behaviors, or neural Reinforcement Learning approaches.
arXiv Detail & Related papers (2023-07-17T14:46:59Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- LOA: Logical Optimal Actions for Text-based Interaction Games [63.003353499732434]
We present Logical Optimal Actions (LOA), an action-decision architecture for reinforcement learning applications.
LOA combines neural networks with a symbolic knowledge-acquisition approach for natural language interaction games.
arXiv Detail & Related papers (2021-10-21T08:36:11Z)
- Leveraging Language to Learn Program Abstractions and Search Heuristics [66.28391181268645]
We introduce LAPS (Language for Abstraction and Program Search), a technique for using natural language annotations to guide joint learning of libraries and neurally-guided search models for synthesis.
When integrated into a state-of-the-art library learning system (DreamCoder), LAPS produces higher-quality libraries and improves search efficiency and generalization.
arXiv Detail & Related papers (2021-06-18T15:08:47Z)
- Deep Learning for General Game Playing with Ludii and Polygames [8.752301343910775]
Combinations of Monte-Carlo tree search and Deep Neural Networks, trained through self-play, have produced state-of-the-art results for automated game-playing in many board games.
This paper describes the implementation of a bridge between Ludii and Polygames, which enables Polygames to train and evaluate models for games that are implemented and run through Ludii.
arXiv Detail & Related papers (2021-01-23T19:08:33Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- Efficient Reasoning in Regular Boardgames [2.909363382704072]
We present the technical side of reasoning in the Regular Boardgames (RBG) language.
RBG serves as a research tool that aims to aid in the development of generalized algorithms for knowledge inference, analysis, generation, learning, and playing games.
arXiv Detail & Related papers (2020-06-15T11:42:08Z)
- Lyceum: An efficient and scalable ecosystem for robot learning [11.859894139914754]
Lyceum is a high-performance computational ecosystem for robot learning.
It is built on top of the Julia programming language and the MuJoCo physics simulator.
It is 5-30x faster than other popular abstractions like OpenAI's Gym and DeepMind's dm-control.
arXiv Detail & Related papers (2020-01-21T05:03:04Z)
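Several entries above rest on the same wall-clock argument: Assistax's up-to-$370\times$ claim, Lyceum's 5-30x claim, and Ludax's own speed benchmarking. As a hedged sketch of how such numbers are typically measured in JAX (the trivial step function below is invented for illustration and taken from none of these papers): jit-compile a batched rollout, warm it up once so compilation is excluded, then time with `block_until_ready()` so JAX's asynchronous dispatch does not understate the cost.

```python
# Sketch of a typical JAX wall-clock benchmark (illustrative step function,
# not from any paper above): jit-compile a batched rollout, warm it up once,
# then time with block_until_ready() so async dispatch is actually measured.
import time
import jax
import jax.numpy as jnp

N_ENVS, STATE_DIM, N_STEPS = 8192, 16, 100

def step(states, key):
    # Stand-in for one vectorised environment step: any pure array update.
    return states + jax.random.normal(key, states.shape)

@jax.jit
def batched_rollout(states, key):
    keys = jax.random.split(key, N_STEPS)
    final, _ = jax.lax.scan(lambda s, k: (step(s, k), None), states, keys)
    return final

states = jnp.zeros((N_ENVS, STATE_DIM))
key = jax.random.PRNGKey(0)

batched_rollout(states, key).block_until_ready()   # warm-up: compile once
t0 = time.perf_counter()
batched_rollout(states, key).block_until_ready()   # timed run
dt = time.perf_counter() - t0
print(f"{N_STEPS} steps x {N_ENVS} envs in {dt:.4f}s")
```

The CPU-based baselines in such comparisons are usually sequential per-environment loops, so the reported speedups combine vectorisation, compilation, and accelerator throughput.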