Can Large Language Models Play Games? A Case Study of A Self-Play
Approach
- URL: http://arxiv.org/abs/2403.05632v1
- Date: Fri, 8 Mar 2024 19:16:29 GMT
- Title: Can Large Language Models Play Games? A Case Study of A Self-Play
Approach
- Authors: Hongyi Guo, Zhihan Liu, Yufeng Zhang, Zhaoran Wang
- Abstract summary: Large Language Models (LLMs) harness extensive data from the Internet, storing a broad spectrum of prior knowledge.
Monte-Carlo Tree Search (MCTS) is a search algorithm that provides reliable decision-making solutions.
This work introduces an innovative approach that bolsters LLMs with MCTS self-play to efficiently resolve turn-based zero-sum games.
- Score: 61.15761840203145
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Large Language Models (LLMs) harness extensive data from the Internet,
storing a broad spectrum of prior knowledge. While LLMs have proven beneficial
as decision-making aids, their reliability is hampered by limitations such as
flawed reasoning and hallucination. On the other hand, Monte-Carlo
Tree Search (MCTS) is a heuristic search algorithm that provides reliable
decision-making solutions, achieved through recursive rollouts and self-play.
However, the effectiveness of MCTS relies heavily on heuristic pruning and
external value functions, particularly in complex decision scenarios. This work
introduces an innovative approach that bolsters LLMs with MCTS self-play to
efficiently resolve deterministic turn-based zero-sum games (DTZG), such as
chess and Go, without the need for additional training. Specifically, we
utilize LLMs as both action pruners and proxies for value
functions. We theoretically prove that the suboptimality of
the estimated value in our proposed method scales with $\tilde{\mathcal
O}\Bigl(\frac{|\tilde {\mathcal A}|}{\sqrt{N}} + \epsilon_\mathrm{pruner} +
\epsilon_\mathrm{critic}\Bigr)$, where \(N\) is the number of simulations,
$|\tilde {\mathcal A}|$ is the cardinality of the pruned action space by LLM,
and $\epsilon_\mathrm{pruner}$ and $\epsilon_\mathrm{critic}$ quantify the
errors incurred by adopting LLMs as action space pruner and value function
proxy, respectively. Our experiments in chess and Go demonstrate the capability
of our method to address challenges beyond the scope of MCTS and to improve the
performance of directly applying LLMs.
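The pipeline the abstract describes, an MCTS loop whose expansion step keeps only an LLM-pruned candidate set $\tilde{\mathcal A}$ and whose leaf evaluation comes from an LLM critic instead of a rollout, can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation; `llm_prune` and `llm_value` are hypothetical stubs standing in for real prompted LLM calls, demonstrated here on a toy turn-based zero-sum game (Nim).

```python
import math
import random

# Hypothetical stand-ins for the paper's two LLM roles (names and signatures
# are ours, not the authors'): an action pruner that keeps a small candidate
# set, and a critic that scores a state for the player to move.
def llm_prune(state, actions, k=3):
    # Stub: a real system would prompt an LLM to shortlist promising moves.
    return actions[:k]

def llm_value(state):
    # Stub: a real system would prompt an LLM for a value in [-1, 1].
    return 0.0

class Node:
    def __init__(self, state, parent=None):
        self.state = state
        self.parent = parent
        self.children = {}   # action -> Node
        self.visits = 0
        self.value = 0.0     # cumulative reward to the player who moved into this node

def ucb(child, parent_visits, c=1.4):
    if child.visits == 0:
        return float("inf")
    return child.value / child.visits + c * math.sqrt(math.log(parent_visits) / child.visits)

def mcts(root_state, legal_actions, step, terminal_value, n_sims=200):
    """terminal_value(s) -> value for the player to move at s, or None if non-terminal."""
    root = Node(root_state)
    for _ in range(n_sims):
        node = root
        # 1) Selection: descend through already-expanded (pruned) action sets.
        while node.children:
            parent = node
            _, node = max(parent.children.items(), key=lambda kv: ucb(kv[1], parent.visits))
        # 2) Expansion: the LLM pruner restricts the branching factor.
        tv = terminal_value(node.state)
        if tv is None:
            for a in llm_prune(node.state, legal_actions(node.state)):
                node.children[a] = Node(step(node.state, a), parent=node)
            if node.children:
                node = random.choice(list(node.children.values()))
                tv = terminal_value(node.state)
        # 3) Evaluation: the LLM critic replaces a random rollout.
        v = tv if tv is not None else llm_value(node.state)
        # 4) Backpropagation with negamax sign flips (zero-sum, turn-based).
        while node is not None:
            node.visits += 1
            v = -v           # convert to the perspective of the player who moved in
            node.value += v
            node = node.parent
    # Recommend the most-visited root action.
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

# Toy DTZG: Nim with 5 stones, take 1 or 2 per turn, taking the last stone wins.
# The winning move from 5 is to take 2, leaving the opponent a multiple of 3.
random.seed(0)
best = mcts(
    5,
    legal_actions=lambda s: [a for a in (1, 2) if a <= s],
    step=lambda s, a: s - a,
    terminal_value=lambda s: -1.0 if s == 0 else None,  # player to move has lost
    n_sims=2000,
)
```

The bound in the abstract is visible in this structure: the pruner caps the branching factor at $|\tilde{\mathcal A}|$, the simulation budget contributes the $1/\sqrt{N}$ term, and any bias in the stubbed pruner and critic shows up as the $\epsilon_\mathrm{pruner}$ and $\epsilon_\mathrm{critic}$ terms.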
Related papers
- ChatGLM-Math: Improving Math Problem-Solving in Large Language Models with a Self-Critique Pipeline [42.61538071832468]
Large language models (LLMs) have shown excellent mastering of human language, but still struggle in real-world applications that require mathematical problem-solving.
We tailor the Self-Critique pipeline, which addresses the challenge in the feedback learning stage of LLM alignment.
arXiv Detail & Related papers (2024-04-03T17:51:18Z) - ReLU$^2$ Wins: Discovering Efficient Activation Functions for Sparse
LLMs [91.31204876440765]
We introduce a general method that defines neuron activation through neuron output magnitudes and a tailored magnitude threshold.
To find the most efficient activation function for sparse computation, we propose a systematic framework.
We conduct thorough experiments on LLMs utilizing different activation functions, including ReLU, SwiGLU, ReGLU, and ReLU$^2$.
arXiv Detail & Related papers (2024-02-06T08:45:51Z) - Alphazero-like Tree-Search can Guide Large Language Model Decoding and
Training [37.79247073276239]
Recent works like Tree-of-Thought (ToT) and Reasoning via Planning (RAP) aim to augment the reasoning capabilities of LLMs.
We present an AlphaZero-like tree-search learning framework for LLMs (termed TS-LLM)
We show how tree-search with a learned value function can guide LLM decoding.
arXiv Detail & Related papers (2023-09-29T12:20:19Z) - SatLM: Satisfiability-Aided Language Models Using Declarative Prompting [68.40726892904286]
We propose a new satisfiability-aided language modeling (SatLM) approach for improving the reasoning capabilities of large language models (LLMs)
We use an LLM to generate a declarative task specification rather than an imperative program and leverage an off-the-shelf automated theorem prover to derive the final answer.
We evaluate SATLM on 8 different datasets and show that it consistently outperforms program-aided LMs in the imperative paradigm.
arXiv Detail & Related papers (2023-05-16T17:55:51Z) - Horizon-Free and Variance-Dependent Reinforcement Learning for Latent
Markov Decision Processes [62.90204655228324]
We study regret minimization for reinforcement learning (RL) in Latent Markov Decision Processes (LMDPs) with context in hindsight.
We design a novel model-based algorithmic framework which can be instantiated with both a model-optimistic and a value-optimistic solver.
arXiv Detail & Related papers (2022-10-20T21:32:01Z) - Minimax-Optimal Multi-Agent RL in Zero-Sum Markov Games With a
Generative Model [50.38446482252857]
Two-player zero-sum Markov games are arguably the most basic setting in multi-agent reinforcement learning.
We develop a learning algorithm that learns an $\varepsilon$-approximate Markov NE policy using $\widetilde{O}(\cdot)$ samples.
We derive a refined regret bound for FTRL that makes explicit the role of variance-type quantities.
arXiv Detail & Related papers (2022-08-22T17:24:55Z) - Randomized Exploration for Reinforcement Learning with General Value
Function Approximation [122.70803181751135]
We propose a model-free reinforcement learning algorithm inspired by the popular randomized least squares value iteration (RLSVI) algorithm.
Our algorithm drives exploration by simply perturbing the training data with judiciously chosen i.i.d. scalar noises.
We complement the theory with an empirical evaluation across known difficult exploration tasks.
arXiv Detail & Related papers (2021-06-15T02:23:07Z)
This list is automatically generated from the titles and abstracts of the papers in this site.