Efficacy of Language Model Self-Play in Non-Zero-Sum Games
- URL: http://arxiv.org/abs/2406.18872v1
- Date: Thu, 27 Jun 2024 03:52:35 GMT
- Title: Efficacy of Language Model Self-Play in Non-Zero-Sum Games
- Authors: Austen Liao, Nicholas Tomlin, Dan Klein
- Abstract summary: Game-playing agents like AlphaGo have achieved superhuman performance through self-play.
We empirically investigate whether techniques like self-play can effectively be used to improve language models.
We find that language model self-play leads to significant performance gains in both cooperation and competition with humans.
- Score: 38.644991461153275
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Game-playing agents like AlphaGo have achieved superhuman performance through self-play, which is theoretically guaranteed to yield optimal policies in competitive games. However, most language tasks are partially or fully cooperative, so it is an open question whether techniques like self-play can effectively be used to improve language models. We empirically investigate this question in a negotiation game setting known as Deal or No Deal (DoND). Crucially, the objective in DoND can be modified to produce a fully cooperative game, a strictly competitive one, or anything in between. We finetune language models in self-play over multiple rounds of filtered behavior cloning in DoND for each of these objectives. Contrary to expectations, we find that language model self-play leads to significant performance gains in both cooperation and competition with humans, suggesting that self-play and related techniques have promise despite a lack of theoretical guarantees.
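The recipe in the abstract is compact enough to sketch. Below is a minimal illustration of self-play with filtered behavior cloning, assuming hypothetical `model.play` and `model.finetune` interfaces and one plausible (not necessarily the paper's exact) way of blending the cooperative and competitive objectives:

```python
# Minimal sketch of self-play with filtered behavior cloning (assumed
# interfaces; this is not the authors' released code).

def blended_objective(own_points, partner_points, coop):
    # One plausible interpolation of the DoND objective (an assumption, not
    # necessarily the paper's exact formula): coop=0 rewards only one's own
    # points (competitive), coop=1 rewards the joint total (fully cooperative).
    return (1 - coop) * own_points + coop * (own_points + partner_points)

def self_play_filtered_bc(model, rounds, games_per_round, reward_threshold):
    for _ in range(rounds):
        # Two copies of the model negotiate; each game yields a transcript
        # and its reward under the chosen objective.
        episodes = [model.play(model) for _ in range(games_per_round)]
        # Filtering: behavior-clone only on transcripts whose reward clears
        # the threshold, so each round imitates its own best games.
        kept = [dialogue for dialogue, reward in episodes
                if reward >= reward_threshold]
        model = model.finetune(kept)
    return model
```

The threshold sets the imitation-versus-selectivity trade-off: a higher bar clones fewer but better transcripts per round.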
Related papers
- Understanding Players as if They Are Talking to the Game in a Customized Language: A Pilot Study [3.4333699338998693]
This pilot study explores the application of language models (LMs) to model game event sequences.
We transform raw event data into textual sequences and pretrain a Longformer model on this data.
The results demonstrate the potential of self-supervised LMs in enhancing game design and personalization without relying on ground-truth labels.
arXiv Detail & Related papers (2024-10-24T09:59:10Z)
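As a rough illustration of the event-to-text step this pilot study describes, one possible transformation is sketched below; the event schema is invented for the example, and only the tokenizer call is standard Hugging Face API:

```python
# Sketch of flattening structured game events into text for LM pretraining.
# The event fields here are invented; the study's actual schema may differ.
from transformers import LongformerTokenizerFast

def events_to_text(events):
    # e.g. [{"t": 3.2, "type": "jump"}, ...] -> "t=3.2 type=jump | ..."
    return " | ".join(
        " ".join(f"{k}={v}" for k, v in event.items()) for event in events
    )

tokenizer = LongformerTokenizerFast.from_pretrained("allenai/longformer-base-4096")
text = events_to_text([{"t": 3.2, "type": "jump", "x": 10},
                       {"t": 4.1, "type": "collect", "item": "coin"}])
batch = tokenizer(text, truncation=True, max_length=4096, return_tensors="pt")
# `batch` could then feed a LongformerForMaskedLM for self-supervised pretraining.
```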
- Guarantees for Self-Play in Multiplayer Games via Polymatrix Decomposability [2.2636685010313364]
Self-play is a technique for machine learning in multi-agent systems where a learning algorithm learns by interacting with copies of itself.
We show that in two-player constant-sum games, self-play that reaches Nash equilibrium is guaranteed to produce strategies that perform well against any post-training opponent.
Our results are the first to identify a structural property of multiplayer games that enables performance guarantees for the strategies produced by a broad class of self-play algorithms.
arXiv Detail & Related papers (2023-10-17T18:33:21Z)
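For intuition on the constant-sum guarantee above, a classic self-play scheme such as fictitious play can be run on rock-paper-scissors: the empirical strategies converge to the Nash equilibrium, which is exactly the kind of post-training robustness the paper formalizes. This toy example illustrates the setting, not the paper's own algorithm:

```python
import numpy as np

# Fictitious play in rock-paper-scissors, a two-player zero-sum game.
A = np.array([[0, -1, 1],
              [1, 0, -1],
              [-1, 1, 0]])  # row player's payoffs; the column player gets -A

counts_row = np.ones(3)  # empirical action counts, initialized uniformly
counts_col = np.ones(3)
for _ in range(100_000):
    # Each copy best-responds to the opponent's empirical mixed strategy.
    counts_row[np.argmax(A @ (counts_col / counts_col.sum()))] += 1
    counts_col[np.argmax(-A.T @ (counts_row / counts_row.sum()))] += 1

print(counts_row / counts_row.sum())  # ~[1/3, 1/3, 1/3], the Nash equilibrium
```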
- Palm: Predicting Actions through Language Models @ Ego4D Long-Term Action Anticipation Challenge 2023 [100.32802766127776]
Palm is a solution to the Long-Term Action Anticipation task utilizing vision-language and large language models.
It predicts future actions based on frame descriptions and action labels extracted from the input videos.
arXiv Detail & Related papers (2023-06-28T20:33:52Z)
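The pipeline shape described above is easy to sketch: a vision-language model captions the observed frames, and an LLM continues the action sequence in text. All interfaces below are assumed stand-ins, not Palm's actual components:

```python
# Hedged sketch of a caption-then-predict pipeline in the spirit of Palm.
def anticipate_actions(caption_model, llm, frames, horizon=5):
    # 1. A vision-language model turns observed frames into text descriptions.
    descriptions = [caption_model(frame) for frame in frames]
    # 2. A large language model continues the activity as an action list.
    prompt = ("Observed activity:\n" + "\n".join(descriptions) +
              f"\nPredict the next {horizon} actions, one per line:")
    return llm(prompt).splitlines()[:horizon]
```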
- Clembench: Using Game Play to Evaluate Chat-Optimized Language Models as Conversational Agents [20.202525145391093]
Recent work has proposed a methodology for the systematic evaluation of "Situated Language Understanding Agents".
This paper explores: Can Large Language Models be evaluated meaningfully by exposing them to constrained game-like settings?
As a proof of concept, this paper investigates five interaction settings, showing that current chat-optimised LLMs are, to an extent, capable of following game-play instructions.
arXiv Detail & Related papers (2023-05-22T19:56:10Z)
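A minimal probe in this spirit might look like the sketch below, where a scripted game master enforces a constrained move format and scores violations programmatically; the game and scoring are invented for illustration and are not taken from the benchmark:

```python
import re

def taboo_style_probe(complete, target="umbrella", turns=3):
    # `complete` is an assumed stand-in for any chat LLM completion function.
    prompt = (f"We are playing a clue game. Describe the word '{target}' "
              "without using it. Reply with one line starting with 'CLUE:'.")
    violations = 0
    for _ in range(turns):
        reply = complete(prompt)
        # The game master checks format compliance and the taboo constraint.
        if not re.match(r"CLUE:", reply.strip()) or target in reply.lower():
            violations += 1
        prompt += f"\n{reply}\nGive another clue, same rules."
    return violations
```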
- Improving Language Model Negotiation with Self-Play and In-Context Learning from AI Feedback [97.54519989641388]
We study whether multiple large language models (LLMs) can autonomously improve each other in a negotiation game by playing, reflecting, and criticizing.
Only a subset of the language models we consider can self-play and improve the deal price from AI feedback.
arXiv Detail & Related papers (2023-05-17T11:55:32Z)
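The play-reflect-criticize loop can be sketched as follows, with a generic `complete` function standing in for any chat LLM; the prompts are illustrative only, and the paper's actual setup differs in detail:

```python
def negotiate_with_ai_feedback(complete, rounds=3, max_turns=8):
    # `complete` is an assumed stand-in for any chat LLM completion function.
    feedback = ""
    transcript = []
    for _ in range(rounds):
        transcript = []
        context = "You are the buyer; negotiate the item's price down." + feedback
        for turn in range(max_turns):
            role = "buyer" if turn % 2 == 0 else "seller"
            msg = complete(f"{context}\nDialogue so far: {transcript}\n{role}:")
            transcript.append((role, msg))
            if "deal" in msg.lower():
                break
        # A third model criticizes the buyer's strategy; the critique enters
        # the next round's context (in-context learning from AI feedback).
        critique = complete(f"Critique the buyer's negotiation:\n{transcript}")
        feedback = "\nAdvice from previous rounds: " + critique
    return transcript
```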
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Mastering the Game of No-Press Diplomacy via Human-Regularized Reinforcement Learning and Planning [95.78031053296513]
No-press Diplomacy is a complex strategy game involving both cooperation and competition.
We introduce a planning algorithm we call DiL-piKL that regularizes a reward-maximizing policy toward a human imitation-learned policy.
We show that DiL-piKL can be extended into a self-play reinforcement learning algorithm we call RL-DiL-piKL.
arXiv Detail & Related papers (2022-10-11T14:47:35Z)
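The regularization at the heart of piKL-style planning has a compact closed form: maximizing expected action value minus a KL penalty toward the human anchor policy yields pi(a) proportional to anchor(a) * exp(Q(a)/lambda). A sketch of that computation follows; it is an illustrative reimplementation, not the authors' code:

```python
import numpy as np

def pikl_policy(q_values, anchor_policy, lam):
    # Solves max_pi E_pi[Q] - lam * KL(pi || anchor) in closed form:
    # pi(a) proportional to anchor(a) * exp(Q(a) / lam). Large lam stays
    # close to the human-imitation anchor; small lam chases reward.
    logits = np.log(anchor_policy) + np.asarray(q_values) / lam
    logits -= logits.max()  # subtract the max for numerical stability
    pi = np.exp(logits)
    return pi / pi.sum()

# Example: action 1 has higher value, but the human anchor prefers action 0.
print(pikl_policy(q_values=[1.0, 2.0], anchor_policy=[0.8, 0.2], lam=1.0))
```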
- Read Like Humans: Autonomous, Bidirectional and Iterative Language Modeling for Scene Text Recognition [80.446770909975]
Linguistic knowledge is of great benefit to scene text recognition.
How to effectively model linguistic rules in end-to-end deep networks remains a research challenge.
We propose ABINet, an autonomous, bidirectional, and iterative network for scene text recognition.
arXiv Detail & Related papers (2021-03-11T06:47:45Z)
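The "autonomous" and "iterative" ingredients above can be caricatured in a few lines: a language model repeatedly refines the vision model's prediction using linguistic context alone, with gradients between the two branches blocked during training. The interfaces below are stand-ins, not the actual ABINet modules:

```python
# Hedged sketch of iterative vision-language refinement for text recognition.
def abinet_style_decode(vision_model, language_model, image, iterations=3):
    text, confidences = vision_model(image)  # initial character predictions
    for _ in range(iterations):
        # The LM corrects likely-wrong characters from linguistic context;
        # "autonomous" means it is trained without peeking at visual features.
        text, confidences = language_model(text, confidences)
    return text
```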
- Keep CALM and Explore: Language Models for Action Generation in Text-based Games [27.00685301984832]
We propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards.
arXiv Detail & Related papers (2020-10-06T17:36:29Z)
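The decomposition described above is simple to sketch: the language model proposes a compact candidate set and a value network re-ranks it. The interfaces are assumed stand-ins, not the released CALM code:

```python
import random

def calm_step(lm, q_net, observation, k=30, epsilon=0.1):
    # 1. The language model proposes plausible actions for the current
    #    game text (a compact candidate set instead of the full action space).
    candidates = lm.top_k_actions(observation, k=k)
    # 2. The RL agent re-ranks candidates by estimated in-game return,
    #    with epsilon-greedy exploration during training.
    if random.random() < epsilon:
        return random.choice(candidates)
    return max(candidates, key=lambda a: q_net.value(observation, a))
```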
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.