A Minimal Approach for Natural Language Action Space in Text-based Games
- URL: http://arxiv.org/abs/2305.04082v2
- Date: Wed, 29 Nov 2023 23:25:19 GMT
- Title: A Minimal Approach for Natural Language Action Space in Text-based Games
- Authors: Dongwon Kelvin Ryu, Meng Fang, Shirui Pan, Gholamreza Haffari, Ehsan
Shareghi
- Abstract summary: This paper revisits the challenge of exploring the action space in text-based games (TGs)
We propose $ epsilon$-admissible exploration, a minimal approach of utilizing admissible actions, for training phase.
We present a text-based actor-critic (TAC) agent that produces textual commands for game, solely from game observations.
- Score: 103.21433712630953
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-based games (TGs) are language-based interactive environments for
reinforcement learning. While language models (LMs) and knowledge graphs (KGs)
are commonly used for handling large action space in TGs, it is unclear whether
these techniques are necessary or overused. In this paper, we revisit the
challenge of exploring the action space in TGs and propose
$\epsilon$-admissible exploration, a minimal approach that utilizes admissible
actions during the training phase. Additionally, we present a text-based
actor-critic (TAC) agent that produces textual commands for the game, solely from game
observations, without requiring any KG or LM. Our method, on average across 10
games from Jericho, outperforms strong baselines and state-of-the-art agents
that use LMs and KGs. Our approach highlights that a much lighter model design,
with a fresh perspective on utilizing the information within the environments,
suffices for an effective exploration of exponentially large action spaces.
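The $\epsilon$-admissible exploration described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' exact implementation: the policy action, the admissible-action list, and the function name are assumptions, modeled on environments like Jericho that expose the set of admissible (valid) actions at each state.

```python
import random

def epsilon_admissible_act(policy_action, admissible_actions, epsilon=0.3):
    """With probability epsilon, explore by sampling a uniformly random
    admissible action; otherwise keep the agent's own textual command."""
    if random.random() < epsilon:
        return random.choice(admissible_actions)
    return policy_action
```

At `epsilon=0` the agent acts purely from its own policy; at `epsilon=1` every action is drawn from the admissible set, which keeps exploration inside the exponentially large natural-language action space without any LM or KG.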
Related papers
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z) - Deep Reinforcement Learning with Stacked Hierarchical Attention for
Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z) - Keep CALM and Explore: Language Models for Action Generation in
Text-based Games [27.00685301984832]
We propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
We combine CALM with a reinforcement learning agent which re-ranks the generated action candidates to maximize in-game rewards.
arXiv Detail & Related papers (2020-10-06T17:36:29Z) - Interactive Fiction Game Playing as Multi-Paragraph Reading
Comprehension with Reinforcement Learning [94.50608198582636]
Interactive Fiction (IF) games with real human-written natural language texts provide a new natural evaluation for language understanding techniques.
We take a novel perspective of IF game solving and re-formulate it as Multi-Passage Reading Comprehension (MPRC) tasks.
arXiv Detail & Related papers (2020-10-05T23:09:20Z) - Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z) - Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z) - Graph Constrained Reinforcement Learning for Natural Language Action
Spaces [9.87327247830837]
Interactive Fiction games are text-based simulations in which an agent interacts with the world purely through natural language.
We present KG-A2C, an agent that builds a dynamic knowledge graph while exploring and generates actions using a template-based action space.
arXiv Detail & Related papers (2020-01-23T22:33:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.