States as Strings as Strategies: Steering Language Models with
Game-Theoretic Solvers
- URL: http://arxiv.org/abs/2402.01704v2
- Date: Tue, 6 Feb 2024 08:53:11 GMT
- Title: States as Strings as Strategies: Steering Language Models with
Game-Theoretic Solvers
- Authors: Ian Gemp, Yoram Bachrach, Marc Lanctot, Roma Patel, Vibhavari Dasagi,
Luke Marris, Georgios Piliouras, Siqi Liu, Karl Tuyls
- Abstract summary: A suitable model of the players, strategies, and payoffs associated with linguistic interactions would enable existing game-theoretic algorithms to provide strategic solutions in the space of language.
We present one possible binding from dialogue to game theory as well as generalizations of existing equilibrium finding algorithms to this setting.
- Score: 44.64118885012762
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Game theory is the study of mathematical models of strategic interactions
among rational agents. Language is a key medium of interaction for humans,
though it has historically proven difficult to model dialogue and its strategic
motivations mathematically. A suitable model of the players, strategies, and
payoffs associated with linguistic interactions (i.e., a binding to the
conventional symbolic logic of game theory) would enable existing
game-theoretic algorithms to provide strategic solutions in the space of
language. In other words, a binding could provide a route to computing stable,
rational conversational strategies in dialogue. Large language models (LLMs)
have arguably reached a point where their generative capabilities can enable
realistic, human-like simulations of natural dialogue. By prompting them in
various ways, we can steer their responses towards different output utterances.
Leveraging the expressivity of natural language, LLMs can also help us quickly
generate new dialogue scenarios, which are grounded in real-world applications.
In this work, we present one possible binding from dialogue to game theory as
well as generalizations of existing equilibrium finding algorithms to this
setting. In addition, by exploiting LLMs' generation capabilities along with our
proposed binding, we can synthesize a large repository of formally-defined
games in which one can study and test game-theoretic solution concepts. We also
demonstrate how one can combine LLM-driven game generation, game-theoretic
solvers, and imitation learning to construct a process for improving the
strategic capabilities of LLMs.
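Before turning to related papers, a minimal sketch may make the proposed binding concrete. The Python snippet below is an illustration, not the authors' implementation: it treats a single dialogue turn as a two-player normal-form game whose actions are candidate utterances (strings), with payoffs that would in practice come from prompting an LLM to score joint outcomes. The buyer/seller utterances, the payoff table, and the `llm_payoff` stub are all hypothetical; fictitious play stands in for the kind of classic equilibrium-finding algorithm the paper generalizes.

```python
# A minimal sketch (assumed example, not the paper's code): bind one dialogue
# turn to a normal-form game whose actions are candidate utterances, then run
# fictitious play to find an equilibrium mixture over those strings.
import numpy as np

# Candidate utterances ("strategies as strings") for each player at this state.
buyer_moves = ["I can offer $80.", "That price works for me."]
seller_moves = ["The price is firm at $100.", "I could go down to $85."]

def llm_payoff(history: str, i: int, j: int) -> tuple[float, float]:
    """Stand-in for prompting an LLM to score the joint outcome of utterance
    pair (i, j) given the dialogue history; here a fixed illustrative table."""
    table = {(0, 0): (1, 1), (0, 1): (3, 2), (1, 0): (2, 3), (1, 1): (2, 2)}
    return table[(i, j)]

# Bind the dialogue turn to a two-player normal-form game.
A = np.zeros((2, 2))  # buyer payoffs
B = np.zeros((2, 2))  # seller payoffs
for i in range(2):
    for j in range(2):
        A[i, j], B[i, j] = llm_payoff("negotiation so far...", i, j)

# Fictitious play: each player best-responds to the opponent's empirical
# mixture of past actions; the empirical mixtures approximate an equilibrium.
counts_a, counts_b = np.ones(2), np.ones(2)
for _ in range(10_000):
    counts_a[np.argmax(A @ (counts_b / counts_b.sum()))] += 1
    counts_b[np.argmax((counts_a / counts_a.sum()) @ B)] += 1

pi_a = counts_a / counts_a.sum()
pi_b = counts_b / counts_b.sum()
print(dict(zip(buyer_moves, pi_a.round(2))))   # equilibrium mix over strings
print(dict(zip(seller_moves, pi_b.round(2))))
```

The resulting equilibrium distribution over candidate utterances is the kind of target policy that, as the abstract describes, a solver could hand to an imitation-learning step in order to steer the LLM's responses toward the computed strategy.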
Related papers
- Learning Strategic Language Agents in the Werewolf Game with Iterative Latent Space Policy Optimization [13.496120603859701]
Large language model (LLM)-based agents have recently shown impressive progress in a variety of domains.
Applying these agents to social deduction games such as Werewolf, which requires both strategic decision-making and free-form language interaction, remains non-trivial.
We propose Latent Space Policy Optimization (LSPO), an iterative framework that addresses these challenges by first mapping free-form text to a discrete latent space.
arXiv Detail & Related papers (2025-02-07T06:19:55Z)
- Verbalized Bayesian Persuasion [54.55974023595722]
Information design (ID) explores how a sender can influence the optimal behavior of receivers to achieve specific objectives.
This work proposes a verbalized framework for Bayesian persuasion (BP) that, for the first time, extends classic BP to real-world games involving human dialogue.
Numerical experiments in dialogue scenarios, such as recommendation letters, courtroom interactions, and law enforcement, validate that our framework can both reproduce theoretical results in classic BP and discover effective persuasion strategies.
arXiv Detail & Related papers (2025-02-03T18:20:10Z)
- Autoformalization of Game Descriptions using Large Language Models [3.5083201638203154]
We introduce a framework for the autoformalization of game-theoretic scenarios.
This translates natural language descriptions into formal logic representations suitable for formal solvers.
We evaluate the framework using GPT-4o and a dataset of natural language problem descriptions.
arXiv Detail & Related papers (2024-09-18T20:18:53Z)
- LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models [56.25156596019168]
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs).
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
arXiv Detail & Related papers (2023-11-30T03:59:31Z)
- Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations [70.7884839812069]
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks.
However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome.
In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue.
arXiv Detail & Related papers (2023-11-09T18:45:16Z)
- ALYMPICS: LLM Agents Meet Game Theory -- Exploring Strategic Decision-Making with AI Agents [77.34720446306419]
Alympics is a systematic simulation framework utilizing Large Language Model (LLM) agents for game theory research.
Alympics creates a versatile platform for studying complex game theory problems.
arXiv Detail & Related papers (2023-11-06T16:03:46Z)
- Leveraging Word Guessing Games to Assess the Intelligence of Large Language Models [105.39236338147715]
The paper is inspired by the popular language game "Who is Spy".
We develop DEEP to evaluate LLMs' expression and disguising abilities.
We then introduce SpyGame, an interactive multi-agent framework.
arXiv Detail & Related papers (2023-10-31T14:37:42Z)
- Exploring Large Language Models for Communication Games: An Empirical Study on Werewolf [19.39740531672788]
We propose a tuning-free framework to engage large language models in communication games.
An empirical study on the representative and widely studied communication game "Werewolf" demonstrates that our framework can effectively play the game without tuning the parameters of the LLMs.
arXiv Detail & Related papers (2023-09-09T01:56:40Z)
- Strategic Reasoning with Language Models [35.63300060111918]
Strategic reasoning enables agents to cooperate, communicate, and compete with other agents in diverse situations.
Existing approaches to solving strategic games rely on extensive training, yielding strategies that do not generalize to new scenarios or games without retraining.
This paper introduces an approach that uses pretrained Large Language Models with few-shot chain-of-thought examples to enable strategic reasoning for AI agents.
arXiv Detail & Related papers (2023-05-30T16:09:19Z)
- Inner Monologue: Embodied Reasoning through Planning with Language Models [81.07216635735571]
Large Language Models (LLMs) can be applied to domains beyond natural language processing.
LLMs planning in embodied environments need to consider not just which skills to perform, but also how and when to perform them.
We propose that by leveraging environment feedback, LLMs are able to form an inner monologue that allows them to more richly process and plan in robotic control scenarios.
arXiv Detail & Related papers (2022-07-12T15:20:48Z)
- Emergent Communication of Generalizations [13.14792537601313]
We argue that communicating about a single object in a shared visual context is prone to overfitting and does not encourage language useful beyond concrete reference.
We propose games that require communicating generalizations over sets of objects representing abstract visual concepts.
We find that these games greatly improve systematicity and interpretability of the learned languages.
arXiv Detail & Related papers (2021-06-04T19:02:18Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents [69.68400056148336]
We train a goal-oriented model with reinforcement learning against an imitation-learned "chit-chat" model.
We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.
arXiv Detail & Related papers (2020-02-07T16:22:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.