Pre-trained Language Models as Prior Knowledge for Playing Text-based Games
- URL: http://arxiv.org/abs/2107.08408v1
- Date: Sun, 18 Jul 2021 10:28:48 GMT
- Title: Pre-trained Language Models as Prior Knowledge for Playing Text-based Games
- Authors: Ishika Singh and Gargi Singh and Ashutosh Modi
- Abstract summary: In this paper, we improve the semantic understanding of the agent by proposing a simple RL with LM framework.
We perform a detailed study of our framework to demonstrate how our model outperforms all existing agents on the popular game, Zork1.
Our proposed approach also performs comparably to the state-of-the-art models on the other set of text games.
- Score: 2.423547527175808
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Recently, text world games have been proposed to enable artificial agents to
understand and reason about real-world scenarios. These text-based games are
challenging for artificial agents, as they require understanding of and
interaction through natural language in a partially observable environment. In this paper, we
improve the semantic understanding of the agent by proposing a simple RL with
LM framework where we use transformer-based language models with Deep RL
models. We perform a detailed study of our framework to demonstrate how our
model outperforms all existing agents on the popular game, Zork1, to achieve a
score of 44.7, which is 1.6 higher than the state-of-the-art model. Our
proposed approach also performs comparably to the state-of-the-art models on
the other set of text games.
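As a rough illustration of the RL-with-LM recipe described above, the sketch below pairs a frozen pre-trained encoder with a small trainable Q-head that scores candidate actions. The model choice (DistilBERT), mean pooling, and network sizes are illustrative assumptions, not the paper's exact architecture.

    # Sketch: score candidate actions with a frozen pre-trained LM
    # encoder plus a small trainable Q-head (sizes are illustrative).
    import torch
    import torch.nn as nn
    from transformers import AutoTokenizer, AutoModel

    tokenizer = AutoTokenizer.from_pretrained("distilbert-base-uncased")
    encoder = AutoModel.from_pretrained("distilbert-base-uncased")
    encoder.eval()  # keep the LM frozen; only the Q-head is trained

    def embed(texts):
        batch = tokenizer(texts, padding=True, truncation=True,
                          return_tensors="pt")
        with torch.no_grad():
            out = encoder(**batch).last_hidden_state
        return out.mean(dim=1)  # mean-pooled sentence embeddings

    class QHead(nn.Module):
        def __init__(self, dim=768, hidden=256):
            super().__init__()
            self.net = nn.Sequential(nn.Linear(2 * dim, hidden),
                                     nn.ReLU(), nn.Linear(hidden, 1))
        def forward(self, obs_vec, act_vec):
            return self.net(torch.cat([obs_vec, act_vec], -1)).squeeze(-1)

    q_head = QHead()
    obs = "West of House. You are standing in an open field."
    actions = ["open mailbox", "go north", "read leaflet"]
    obs_vec = embed([obs]).expand(len(actions), -1)
    best = actions[int(q_head(obs_vec, embed(actions)).argmax())]

In a full agent, the Q-head would be trained with standard deep Q-learning updates on in-game rewards.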
Related papers
- STARLING: Self-supervised Training of Text-based Reinforcement Learning Agent with Large Language Models [5.786039929801102]
Existing environments for interactive fiction games are either domain-specific or time-consuming to generate, and they do not train RL agents to master a specific set of skills.
We introduce STARLING, an interactive environment for self-supervised RL in text-based games; it bootstraps text-based RL agents with automatically generated games to improve their performance and generalization toward the goal of the target environment.
arXiv Detail & Related papers (2024-06-09T18:07:47Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
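A toy version of this idea, under the assumption of a simple recurrent latent model (not Dynalang's actual architecture, and with all dimensions chosen for illustration), might look like:

    # Sketch: a recurrent latent state trained to predict the next
    # step's text and image representations (all sizes illustrative).
    import torch
    import torch.nn as nn

    class TinyWorldModel(nn.Module):
        def __init__(self, text_dim=128, img_dim=256, act_dim=8, latent=512):
            super().__init__()
            self.rnn = nn.GRUCell(text_dim + img_dim + act_dim, latent)
            self.pred_text = nn.Linear(latent, text_dim)
            self.pred_img = nn.Linear(latent, img_dim)
        def forward(self, h, text_emb, img_emb, action):
            h = self.rnn(torch.cat([text_emb, img_emb, action], -1), h)
            return h, self.pred_text(h), self.pred_img(h)

    model = TinyWorldModel()
    h = torch.zeros(1, 512)
    text_emb, img_emb = torch.randn(1, 128), torch.randn(1, 256)
    action = torch.zeros(1, 8); action[0, 3] = 1.0  # one-hot action
    h, next_text, next_img = model(h, text_emb, img_emb, action)
    # Training regresses the predictions onto true next-step
    # representations; the policy is learned in imagined rollouts.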
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- ScriptWorld: Text Based Environment For Learning Procedural Knowledge [2.0491741153610334]
ScriptWorld is a text-based environment for teaching agents about real-world daily chores.
We provide gaming environments for 10 daily activities and perform a detailed analysis of the proposed environment.
We leverage features obtained from pre-trained language models in the RL agents.
arXiv Detail & Related papers (2023-07-08T05:43:03Z)
- Promptable Game Models: Text-Guided Game Simulation via Masked Diffusion Models [68.85478477006178]
We present a Promptable Game Model (PGM) for neural video game simulators.
It allows a user to play the game by prompting it with high- and low-level action sequences.
Most notably, our PGM unlocks the director's mode, where the game is played by specifying goals for the agents in the form of a prompt.
Our method significantly outperforms existing neural video game simulators in terms of rendering quality and unlocks applications beyond the capabilities of the current state of the art.
arXiv Detail & Related papers (2023-03-23T17:43:17Z)
- Infusing Commonsense World Models with Graph Knowledge [89.27044249858332]
We study the setting of generating narratives in an open world text adventure game.
A graph representation of the underlying game state can be used to train models that consume and output both grounded graph representations and natural language descriptions and actions.
arXiv Detail & Related papers (2023-01-13T19:58:27Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
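As a loose illustration of maintaining an explicit graph of the game state from text, the sketch below accumulates a knowledge graph from observations; the extraction rule is a toy pattern match, not the paper's stacked hierarchical attention model.

    # Sketch: accumulate a knowledge graph of the game state from
    # textual observations via a toy "X is in Y" pattern.
    import networkx as nx

    kg = nx.DiGraph()

    def update_graph(observation):
        for sent in observation.lower().split("."):
            if " is in " in sent:
                subj, obj = (s.strip() for s in sent.split(" is in ", 1))
                kg.add_edge(subj, obj, relation="in")

    update_graph("The lamp is in the kitchen. The key is in the box.")
    print(list(kg.edges(data=True)))
    # [('the lamp', 'the kitchen', {'relation': 'in'}), ...]

An agent can then condition its action scores on this graph, making the inference chain behind each action inspectable.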
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- Keep CALM and Explore: Language Models for Action Generation in Text-based Games [27.00685301984832]
We propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards.
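A minimal sketch of this generate-then-rerank pattern, assuming an off-the-shelf GPT-2 in place of CALM's fine-tuned action model and a placeholder scorer in place of the trained re-ranker:

    # Sketch: a causal LM proposes action candidates; a scorer picks one.
    import torch
    from transformers import AutoTokenizer, AutoModelForCausalLM

    tok = AutoTokenizer.from_pretrained("gpt2")
    lm = AutoModelForCausalLM.from_pretrained("gpt2")

    prompt = "You are in the kitchen. A table holds a lamp.\n>"
    inputs = tok(prompt, return_tensors="pt")
    outs = lm.generate(**inputs, do_sample=True, num_return_sequences=5,
                       max_new_tokens=4, pad_token_id=tok.eos_token_id)
    new_tokens = outs[:, inputs.input_ids.shape[1]:]
    candidates = [tok.decode(t, skip_special_tokens=True).strip()
                  for t in new_tokens]

    def q_value(obs, act):      # placeholder; CALM trains a DRRN-style
        return float(len(act))  # Q-network to re-rank candidates
    best = max(candidates, key=lambda a: q_value(prompt, a))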
arXiv Detail & Related papers (2020-10-06T17:36:29Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
- Model-Based Reinforcement Learning for Atari [89.3039240303797]
We show how video prediction models can enable agents to solve Atari games with fewer interactions than model-free methods.
Our experiments evaluate SimPLe on a range of Atari games in a low-data regime of 100k interactions between the agent and the environment.
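The high-level loop is sketched below with stand-in functions; the real SimPLe uses an action-conditioned video prediction network and PPO, and the names and step counts here are illustrative only.

    # Sketch of the model-based alternation: gather a little real
    # experience, fit the model, train the policy inside the model.
    import random

    def collect(policy, steps):           # stand-in for real env rollout
        return [(random.random(), policy(None)) for _ in range(steps)]

    def fit_model(data):                  # stand-in for the video model
        return lambda state, action: random.random()

    def improve_policy(model, steps):     # stand-in for PPO in the model
        return lambda state: random.randrange(4)

    policy, buffer = (lambda s: 0), []
    for _ in range(3):                    # SimPLe alternates this loop
        buffer += collect(policy, 1000)   # small real-data budget
        model = fit_model(buffer)
        policy = improve_policy(model, 100_000)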
arXiv Detail & Related papers (2019-03-01T15:40:19Z)