STARLING: Self-supervised Training of Text-based Reinforcement Learning Agent with Large Language Models
- URL: http://arxiv.org/abs/2406.05872v1
- Date: Sun, 9 Jun 2024 18:07:47 GMT
- Title: STARLING: Self-supervised Training of Text-based Reinforcement Learning Agent with Large Language Models
- Authors: Shreyas Basavatia, Keerthiram Murugesan, Shivam Ratnakar
- Abstract summary: Existing environments for interactive fiction games are domain-specific or time-consuming to generate and do not train RL agents to master a specific set of skills.
We introduce STARLING, an interactive environment for self-supervised RL in text-based games, which bootstraps text-based RL agents with automatically generated games to boost their performance and their ability to generalize to the goal of a target environment.
- Score: 5.786039929801102
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Interactive fiction games have emerged as an important application for improving the generalization capabilities of language-based reinforcement learning (RL) agents. Existing environments for interactive fiction games are domain-specific or time-consuming to generate and do not train RL agents to master a specific set of skills. In this work, we introduce STARLING, an interactive environment for self-supervised RL in text-based games, which bootstraps text-based RL agents with automatically generated games (based on a seed set of game ideas) to boost their performance and their ability to generalize to the goal of a target environment. These games let the agent hone its skills on a predefined set of tasks. We create and test an environment of 100 games generated with this automated framework, which uses large language models (GPT-3) and an interactive fiction game engine (based on Inform7) to let users generate more games under minimal human supervision. Experimental results with both human participants and baseline text-based RL agents reveal that current state-of-the-art text-based RL agents cannot use previously learned skills in new situations at the level humans can. These results underscore STARLING's potential to serve as a sandbox environment for further research in self-supervised text-based RL.
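To make the generation pipeline concrete, here is a minimal Python sketch of the loop the abstract describes; the prompt wording, model name, helper names, and file layout are our own assumptions rather than the authors' implementation, and it presumes the official OpenAI Python client, with the Inform 7 compiler handling the generated sources downstream.

    # Hedged sketch of a STARLING-style game-generation loop.
    # Assumptions (not from the paper): prompt wording, model choice,
    # the .ni file layout, and the OpenAI Python client.
    from openai import OpenAI

    client = OpenAI()  # expects OPENAI_API_KEY in the environment

    SEED_IDEAS = ["bake a cake", "plant a tree", "repair a bicycle"]  # seed game ideas

    def generate_inform7_source(idea: str) -> str:
        """Ask the LLM for Inform 7 source implementing one skill-focused game."""
        prompt = (
            f"Write a short Inform 7 game in which the player must {idea}. "
            "Declare the rooms and objects, and end the story when the goal is met."
        )
        response = client.chat.completions.create(
            model="gpt-4o-mini",  # stand-in; the paper used GPT-3
            messages=[{"role": "user", "content": prompt}],
        )
        return response.choices[0].message.content

    for i, idea in enumerate(SEED_IDEAS):
        # Each source would then be compiled with the Inform 7 toolchain and
        # spot-checked by a human before being used to train the RL agent.
        with open(f"game_{i}.ni", "w") as f:
            f.write(generate_inform7_source(idea))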
Related papers
- AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game [12.384945632524424]
This paper focuses on creating proxies of human behavior in simulated environments, using the game Among Us as a tool for studying that behavior.
Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context.
arXiv Detail & Related papers (2024-07-23T14:34:38Z)
- LMRL Gym: Benchmarks for Multi-Turn Reinforcement Learning with Language Models [56.25156596019168]
This paper introduces the LMRL-Gym benchmark for evaluating multi-turn RL for large language models (LLMs).
Our benchmark consists of 8 different language tasks, which require multiple rounds of language interaction and cover a range of tasks in open-ended dialogue and text games.
arXiv Detail & Related papers (2023-11-30T03:59:31Z)
- ScriptWorld: Text Based Environment For Learning Procedural Knowledge [2.0491741153610334]
ScriptWorld is a text-based environment for teaching agents about real-world daily chores.
We provide gaming environments for 10 daily activities and perform a detailed analysis of the proposed environment.
The RL agents leverage features obtained from pre-trained language models.
arXiv Detail & Related papers (2023-07-08T05:43:03Z)
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, that reads the game's original academic paper and uses the knowledge learned to reason about and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
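As a concrete illustration of such prompting (the function, wording, and manual argument below are a hypothetical sketch of ours, not SPRING's actual prompts), the LLM can be conditioned on the game's paper and asked to reason step by step before committing to an action:

    # Hypothetical chain-of-thought prompt in the spirit of SPRING; the
    # wording and the manual argument are illustrative assumptions.
    def build_prompt(manual: str, observation: str) -> str:
        return (
            f"Game manual:\n{manual}\n\n"
            f"Current observation: {observation}\n"
            "Reason step by step about which subgoal to pursue next, "
            "then answer with exactly one game action."
        )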
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
- Learning to Follow Instructions in Text-Based Games [30.713430615498375]
We study the ability of reinforcement learning agents to follow natural language instructions.
We equip RL agents with an internal structured representation of natural language instructions in the form of Linear Temporal Logic.
Our framework both supports and highlights the benefit of understanding the temporal semantics of instructions.
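As an illustration of that encoding (the instruction and formula are our own example, not taken from the paper), an instruction such as "first take the key, then open the door" can be written as the LTL formula

    $\mathbf{F}(\mathit{take\_key} \land \mathbf{F}(\mathit{open\_door}))$

where $\mathbf{F}$ ("eventually") requires its argument to hold at some future step, so the nesting enforces the order of the two subgoals.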
arXiv Detail & Related papers (2022-11-08T22:20:17Z)
- Pre-trained Language Models as Prior Knowledge for Playing Text-based Games [2.423547527175808]
In this paper, we improve the semantic understanding of the agent by proposing a simple RL-with-LM framework.
We perform a detailed study of our framework to demonstrate how our model outperforms all existing agents on the popular game Zork1.
Our proposed approach also performs comparably to the state-of-the-art models on the other set of text games.
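A minimal sketch of what such an RL-with-LM state encoder could look like (our construction on assumed libraries, not the paper's code): embed each observation-action pair with a frozen pre-trained transformer and score it with a small Q-head trained by RL.

    # Hedged sketch: frozen pre-trained LM as the state encoder for a
    # Q-function over candidate actions; model and head are our assumptions.
    import torch
    from transformers import AutoModel, AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    encoder = AutoModel.from_pretrained("bert-base-uncased")
    q_head = torch.nn.Linear(encoder.config.hidden_size, 1)  # trained by RL

    def q_value(observation: str, action: str) -> torch.Tensor:
        """Score one (observation, action) pair."""
        inputs = tokenizer(observation, action, return_tensors="pt", truncation=True)
        with torch.no_grad():
            hidden = encoder(**inputs).last_hidden_state
        return q_head(hidden[:, 0])  # first-token ([CLS]) representation

    print(q_value("You are in the kitchen. A brass key lies here.", "take key"))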
arXiv Detail & Related papers (2021-07-18T10:28:48Z)
- DeepCrawl: Deep Reinforcement Learning for Turn-based Strategy Games [137.86426963572214]
We introduce DeepCrawl, a fully playable Roguelike prototype for iOS and Android in which all agents are controlled by policy networks trained using Deep Reinforcement Learning (DRL).
Our aim is to understand whether recent advances in DRL can be used to develop convincing behavioral models for non-player characters in video games.
arXiv Detail & Related papers (2020-12-03T13:53:29Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of human-authored benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
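To illustrate the kind of graph-structured state such agents reason over (a toy construction of ours, not the paper's architecture), the agent can maintain (subject, relation, object) triples extracted from each observation and consult them when justifying an action:

    # Toy knowledge-graph state for a text game; the extraction step and the
    # lookup rule are illustrative assumptions, not the paper's method.
    from typing import Set, Tuple

    Triple = Tuple[str, str, str]

    class GraphState:
        def __init__(self) -> None:
            self.triples: Set[Triple] = set()

        def update(self, new: Set[Triple]) -> None:
            # In real systems the triples come from an extractor such as OpenIE.
            self.triples |= new

        def supports(self, triple: Triple) -> bool:
            return triple in self.triples

    state = GraphState()
    state.update({("player", "in", "kitchen"), ("player", "has", "key")})
    # "unlock door" can now be backed by an explicit chain of stored facts:
    print(state.supports(("player", "has", "key")))  # True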
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- Text-based RL Agents with Commonsense Knowledge: New Challenges, Environments and Baselines [40.03754436370682]
We show that agents that incorporate commonsense knowledge in TextWorld Commonsense (TWC) perform better while acting more efficiently.
We conduct user studies to estimate human performance on TWC and show that there is ample room for future improvement.
arXiv Detail & Related papers (2020-10-08T06:20:00Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)