The Effect of State Representation on LLM Agent Behavior in Dynamic Routing Games
- URL: http://arxiv.org/abs/2506.15624v1
- Date: Wed, 18 Jun 2025 16:53:38 GMT
- Title: The Effect of State Representation on LLM Agent Behavior in Dynamic Routing Games
- Authors: Lyle Goodyear, Rachel Guo, Ramesh Johari
- Abstract summary: Large Language Models (LLMs) have shown promise as decision-makers in dynamic settings, but their stateless nature necessitates creating a natural language representation of history. We present a unifying framework for systematically constructing natural language "state" representations for prompting LLM agents in repeated multi-agent games.
- Score: 6.267574471145215
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Large Language Models (LLMs) have shown promise as decision-makers in dynamic settings, but their stateless nature necessitates creating a natural language representation of history. We present a unifying framework for systematically constructing natural language "state" representations for prompting LLM agents in repeated multi-agent games. Previous work on games with LLM agents has taken an ad hoc approach to encoding game history, which not only obscures the impact of state representation on agents' behavior, but also limits comparability between studies. Our framework addresses these gaps by characterizing methods of state representation along three axes: action informativeness (i.e., the extent to which the state representation captures actions played); reward informativeness (i.e., the extent to which the state representation describes rewards obtained); and prompting style (or natural language compression, i.e., the extent to which the full text history is summarized). We apply this framework to a dynamic selfish routing game, chosen because it admits a simple equilibrium both in theory and in human subject experiments (Rapoport et al., 2009). Despite the game's relative simplicity, we find that there are key dependencies of LLM agent behavior on the natural language state representation. In particular, we observe that representations which provide agents with (1) summarized, rather than complete, natural language representations of past history; (2) information about regrets, rather than raw payoffs; and (3) limited information about others' actions lead to behavior that more closely matches game-theoretic equilibrium predictions and to more stable game play by the agents. By contrast, other representations can exhibit either large deviations from equilibrium, higher variation in dynamic game play over time, or both.
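The three axes lend themselves to a compact configuration object for a state builder. Below is a minimal Python sketch of that idea, assuming a per-round history of dicts with hypothetical keys (`counts`, `own_action`, `payoff`, `regret`); none of the names are from the paper's actual implementation.

```python
from dataclasses import dataclass
from enum import Enum

class ActionInfo(Enum):
    NONE = "none"           # nothing about past actions
    OWN_ONLY = "own_only"   # only the agent's own choices
    FULL = "full"           # everyone's choices (e.g., route counts)

class RewardInfo(Enum):
    NONE = "none"
    RAW_PAYOFF = "raw_payoff"
    REGRET = "regret"       # payoff relative to the best action in hindsight

class PromptStyle(Enum):
    FULL_TEXT = "full_text" # verbatim round-by-round history
    SUMMARY = "summary"     # compressed natural language summary

@dataclass
class StateRepresentation:
    action_info: ActionInfo
    reward_info: RewardInfo
    prompt_style: PromptStyle

def render_state(history: list[dict], rep: StateRepresentation) -> str:
    """Turn game history into the natural language 'state' prompt."""
    lines = []
    for t, rnd in enumerate(history, start=1):
        parts = [f"Round {t}:"]
        if rep.action_info is ActionInfo.FULL:
            parts.append(f"route counts were {rnd['counts']}.")
        elif rep.action_info is ActionInfo.OWN_ONLY:
            parts.append(f"you chose route {rnd['own_action']}.")
        if rep.reward_info is RewardInfo.RAW_PAYOFF:
            parts.append(f"Your payoff was {rnd['payoff']}.")
        elif rep.reward_info is RewardInfo.REGRET:
            parts.append(f"Your regret was {rnd['regret']}.")
        lines.append(" ".join(parts))
    if rep.prompt_style is PromptStyle.SUMMARY:
        # A real summary might itself be LLM-generated; truncating to
        # the most recent rounds is a crude stand-in here.
        lines = lines[-3:]
    return "\n".join(lines)
```

Under this encoding, the configuration the abstract reports as best behaved would correspond to `StateRepresentation(ActionInfo.OWN_ONLY, RewardInfo.REGRET, PromptStyle.SUMMARY)`: summarized history, regret information, and limited visibility of others' actions.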
Related papers
- Generative Agents for Multi-Agent Autoformalization of Interaction Scenarios [3.5083201638203154]
This work introduces the Generative Agents for Multi-Agent Autoformalization (GAMA) framework. GAMA automates the formalization of interaction scenarios in simulations using agents augmented with large language models (LLMs). In experiments with 110 natural language descriptions across five 2x2 simultaneous-move games, GAMA achieves 100% syntactic and 76.5% semantic correctness.
arXiv Detail & Related papers (2024-12-11T22:37:45Z)
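A 2x2 simultaneous-move game of the kind GAMA autoformalizes reduces to a small normal-form structure. The sketch below is illustrative only; the class and the Prisoner's Dilemma example are assumptions about the target of formalization, not GAMA's actual output format.

```python
from dataclasses import dataclass

@dataclass
class TwoByTwoGame:
    """Normal-form representation of a 2x2 simultaneous-move game."""
    players: tuple[str, str]
    row_actions: tuple[str, str]
    col_actions: tuple[str, str]
    # payoffs[i][j] = (row payoff, column payoff) when row plays i, column plays j
    payoffs: tuple[tuple[tuple[float, float], tuple[float, float]],
                   tuple[tuple[float, float], tuple[float, float]]]

# e.g., a Prisoner's Dilemma formalized from a natural language description
pd = TwoByTwoGame(
    players=("Row", "Column"),
    row_actions=("Cooperate", "Defect"),
    col_actions=("Cooperate", "Defect"),
    payoffs=(((-1, -1), (-3, 0)),
             ((0, -3), (-2, -2))),
)
```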
- CharacterBox: Evaluating the Role-Playing Capabilities of LLMs in Text-Based Virtual Worlds [74.02480671181685]
Role-playing is a crucial capability of Large Language Models (LLMs). Current evaluation methods fall short of adequately capturing the nuanced character traits and behaviors essential for authentic role-playing. We propose CharacterBox, a simulation sandbox designed to generate situational, fine-grained character behavior trajectories.
arXiv Detail & Related papers (2024-12-07T12:09:35Z)
- AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game [12.384945632524424]
This paper focuses on creating proxies of human behavior in simulated environments, with Among Us utilized as a tool for studying simulated human behavior.
Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context.
arXiv Detail & Related papers (2024-07-23T14:34:38Z)
- player2vec: A Language Modeling Approach to Understand Player Behavior in Games [2.2216044069240657]
Methods for learning latent user representations from historical behavior logs have gained traction for recommendation tasks in e-commerce, content streaming, and other settings.
We present a novel method for overcoming this limitation by extending a long-range Transformer model to player behavior data.
We discuss specifics of behavior tracking in games and propose preprocessing and tokenization approaches that treat in-game events analogously to words in sentences.
arXiv Detail & Related papers (2024-04-05T17:29:47Z)
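player2vec's framing of in-game events as words suggests a tokenization pass like the one sketched below; the event fields and the collapse-to-one-token scheme are illustrative assumptions, not the paper's actual preprocessing.

```python
# Sketch: a session of in-game events becomes a "sentence", each event a "word".
events = [
    {"type": "login", "ts": 0},
    {"type": "purchase", "item": "sword", "ts": 41},
    {"type": "quest_complete", "quest": "q7", "ts": 305},
]

def event_to_token(event: dict) -> str:
    # Collapse each event into one discrete token; richer schemes could
    # bucket timestamps or emit one subtoken per attribute.
    attrs = [f"{k}={v}" for k, v in sorted(event.items()) if k != "ts"]
    return "_".join(attrs)

tokens = [event_to_token(e) for e in events]
vocab = {tok: i for i, tok in enumerate(sorted(set(tokens)))}
ids = [vocab[t] for t in tokens]  # integer sequence fed to a long-range Transformer
```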
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most captivatingly, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- Generating Coherent Narratives by Learning Dynamic and Discrete Entity States with a Contrastive Framework [68.1678127433077]
We extend the Transformer model to dynamically conduct entity state updates and sentence realization for narrative generation.
Experiments on two narrative datasets show that our model can generate more coherent and diverse narratives than strong baselines.
arXiv Detail & Related papers (2022-08-08T09:02:19Z)
- Inherently Explainable Reinforcement Learning in Natural Language [14.117921448623342]
We focus on the task of creating a reinforcement learning agent that is inherently explainable.
This Hierarchically Explainable Reinforcement Learning agent operates in Interactive Fictions, text-based game environments.
Our agent is designed to treat explainability as a first-class citizen.
arXiv Detail & Related papers (2021-12-16T14:24:35Z)
- Pre-trained Language Models as Prior Knowledge for Playing Text-based Games [2.423547527175808]
In this paper, we improve the semantic understanding of the agent by proposing a simple RL with LM framework.
We perform a detailed study of our framework to demonstrate how our model outperforms all existing agents on the popular game Zork1. Our proposed approach also performs comparably to state-of-the-art models on a second set of text games.
arXiv Detail & Related papers (2021-07-18T10:28:48Z)
- SOCCER: An Information-Sparse Discourse State Tracking Collection in the Sports Commentary Domain [7.119677737397071]
In pursuit of natural language understanding, there has been a long-standing interest in tracking state changes throughout narratives. This paper proposes to turn to simplified, fully observable systems that exhibit some of these properties: sports events.
We propose a new task formulation where, given paragraphs of commentary of a game at different timestamps, the system is asked to recognize the occurrence of in-game events.
arXiv Detail & Related papers (2021-06-03T16:21:13Z)
- Learning Chess Blindfolded: Evaluating Language Models on State Tracking [69.3794549747725]
We consider the task of language modeling for the game of chess.
Unlike natural language, chess notations describe a simple, constrained, and deterministic domain.
We find that transformer language models can learn to track pieces and predict legal moves with high accuracy when trained solely on move sequences.
arXiv Detail & Related papers (2021-02-26T01:16:23Z)
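Because the chess language model is trained solely on move sequences, the ground truth for a state-tracking probe is fully determined by replaying those moves. A minimal sketch using the python-chess library (an assumption for illustration; the paper defines its own notation and probes):

```python
import chess  # pip install python-chess

# A game prefix is a "sentence" of moves in UCI notation.
moves = "e2e4 e7e5 g1f3 b8c6".split()

board = chess.Board()
for mv in moves:
    board.push_uci(mv)

# What a state-tracking probe would check after this prefix:
print(board.piece_at(chess.F3))        # the knight that moved to f3
print(len(list(board.legal_moves)))    # how many legal continuations exist
```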
- A New Bandit Setting Balancing Information from State Evolution and Corrupted Context [52.67844649650687]
We propose a new sequential decision-making setting combining key aspects of two established online learning problems with bandit feedback.
The optimal action to play at any given moment is contingent on an underlying changing state which is not directly observable by the agent.
We present an algorithm that uses a referee to dynamically combine the policies of a contextual bandit and a multi-armed bandit.
arXiv Detail & Related papers (2020-11-16T14:35:37Z)
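The "referee" described above can be read as a meta-bandit that shifts probability mass between the two base policies according to their observed rewards. A minimal EXP3-style sketch under that reading (the weighting scheme and names are assumptions, not the paper's exact algorithm):

```python
import math
import random

class Referee:
    """Mixes a contextual bandit policy (index 0) and a context-free
    multi-armed bandit policy (index 1), upweighting whichever earns more."""

    def __init__(self, eta: float = 0.1):
        self.log_w = [0.0, 0.0]        # log-weights per base policy
        self.eta = eta                 # learning rate
        self.last_probs = [0.5, 0.5]

    def choose(self) -> int:
        total = sum(math.exp(w) for w in self.log_w)
        self.last_probs = [math.exp(w) / total for w in self.log_w]
        return 0 if random.random() < self.last_probs[0] else 1

    def update(self, chosen: int, reward: float):
        # Importance-weighted update (reward assumed scaled to [0, 1]),
        # applied only to the policy that actually acted this round.
        self.log_w[chosen] += self.eta * reward / self.last_probs[chosen]
```

Each round the referee picks which base policy acts, observes its reward, and reweights, so information from the evolving state and from the corrupted context is balanced adaptively.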
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)