Generalization in Text-based Games via Hierarchical Reinforcement
Learning
- URL: http://arxiv.org/abs/2109.09968v1
- Date: Tue, 21 Sep 2021 05:27:33 GMT
- Title: Generalization in Text-based Games via Hierarchical Reinforcement
Learning
- Authors: Yunqiu Xu, Meng Fang, Ling Chen, Yali Du and Chengqi Zhang
- Abstract summary: We introduce a hierarchical framework built upon the knowledge graph-based RL agent.
At the high level, a meta-policy is executed to decompose the whole game into a set of subtasks specified by textual goals.
At the low level, a sub-policy is executed to conduct goal-conditioned reinforcement learning.
- Score: 42.70991837415775
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep reinforcement learning provides a promising approach for text-based
games in studying natural language communication between humans and artificial
agents. However, generalization remains a major challenge, as agents depend
critically on the complexity and variety of the training tasks. In this paper,
we address this problem by introducing a hierarchical framework built upon a
knowledge graph (KG)-based RL agent. At the high level, a meta-policy is
executed to decompose the whole game into a set of subtasks specified by
textual goals, and to select one of them based on the KG. A sub-policy at the
low level is then executed to conduct goal-conditioned reinforcement learning.
We carry out experiments on games with various difficulty levels and show that
the proposed method generalizes favorably.
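As a rough illustration of the two-level decomposition described above, the sketch below pairs a meta-policy that extracts textual goals from knowledge-graph triples with a goal-conditioned sub-policy. The goal templates, the random goal selection, and the word-overlap action scoring are illustrative stand-ins for the paper's learned components, not the authors' implementation.

```python
# Minimal sketch of a high-level/low-level split over a KG, with
# hypothetical goal extraction and scoring; not the paper's code.
import random

def extract_goals_from_kg(kg_triples):
    # Assumption: subtask goals are phrased from (subject, relation, object)
    # triples in the agent's knowledge graph.
    return [f"take {o}" for s, r, o in kg_triples if r == "contains"]

class MetaPolicy:
    """High level: decompose the game into textual subgoals and pick one."""
    def select_goal(self, kg_triples):
        goals = extract_goals_from_kg(kg_triples)
        return random.choice(goals) if goals else "explore"

class SubPolicy:
    """Low level: goal-conditioned selection over admissible actions."""
    def act(self, observation, goal, admissible_actions):
        # Toy scoring: prefer actions sharing words with the textual goal.
        overlap = lambda a: len(set(a.split()) & set(goal.split()))
        return max(admissible_actions, key=overlap)

kg = [("kitchen", "contains", "apple"), ("kitchen", "has_exit", "north")]
meta, sub = MetaPolicy(), SubPolicy()
goal = meta.select_goal(kg)                      # e.g. "take apple"
action = sub.act("You are in the kitchen.", goal,
                 ["go north", "take apple", "open fridge"])
print(goal, "->", action)
```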
Related papers
- Learning Symbolic Rules over Abstract Meaning Representations for
Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuro-Symbolic Textual Agent (NESTA) that combines a generic semantic parser with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
arXiv Detail & Related papers (2023-07-05T23:21:05Z)
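For intuition, here is a minimal sketch of the "rules as policies" idea: hand-written condition-action rules standing in for the interpretable rules a system like NESTA would induce over semantic abstractions of the observation.

```python
# Hedged sketch of rules-as-policies; the rules below are hand-written
# stand-ins for induced ones, and the fact tuples are assumed abstractions.
RULES = [
    # (condition predicate over abstracted facts, action)
    (lambda facts: ("carrying", "key") in facts and ("sees", "door") in facts,
     "unlock door with key"),
    (lambda facts: ("sees", "key") in facts, "take key"),
]

def rule_policy(facts):
    """Fire the first matching rule; fall back to exploration."""
    for condition, action in RULES:
        if condition(facts):
            return action
    return "look"

print(rule_policy({("sees", "key")}))                       # take key
print(rule_policy({("carrying", "key"), ("sees", "door")})) # unlock door with key
```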
- SPRING: Studying the Paper and Reasoning to Play Games [102.5587155284795]
We propose a novel approach, SPRING, to read the game's original academic paper and use the knowledge learned to reason and play the game through a large language model (LLM).
In experiments, we study the quality of in-context "reasoning" induced by different forms of prompts under the setting of the Crafter open-world environment.
Our experiments suggest that LLMs, when prompted with consistent chain-of-thought, have great potential in completing sophisticated high-level trajectories.
arXiv Detail & Related papers (2023-05-24T18:14:35Z)
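The SPRING-style loop can be pictured as below; `call_llm` is a placeholder for any chat-completion endpoint, and the prompt template is an assumption for illustration, not the paper's exact format.

```python
# Sketch of reading paper context and prompting an LLM with chain-of-thought
# to pick game actions. All names and the prompt wording are assumptions.
PAPER_CONTEXT = "Excerpt from the game's academic paper describing mechanics..."

def call_llm(prompt: str) -> str:
    # Placeholder; swap in a real LLM client here.
    return "Reasoning: wood is needed for tools.\nAction: collect wood"

def spring_step(observation: str) -> str:
    prompt = (
        f"{PAPER_CONTEXT}\n"
        f"Observation: {observation}\n"
        "Think step by step about the best next action, "
        "then answer with 'Action: <action>'."
    )
    reply = call_llm(prompt)
    # Parse the final action out of the chain-of-thought reply.
    return reply.rsplit("Action:", 1)[-1].strip()

print(spring_step("You stand in a forest. A tree is nearby."))
```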
- Entity Divider with Language Grounding in Multi-Agent Reinforcement Learning [28.619845209653274]
We investigate the use of natural language to drive the generalization of policies in multi-agent settings.
We propose a novel framework for language grounding in multi-agent reinforcement learning, entity divider (EnDi).
EnDi enables agents to independently learn subgoal division at the entity level and act in the environment based on the associated entities.
arXiv Detail & Related papers (2022-10-25T11:53:52Z)
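A toy sketch of entity-level subgoal division follows; the round-robin assignment and string matching are illustrative assumptions, not EnDi's learned mechanism.

```python
# Illustrative entity-level subgoal division across agents; the assignment
# scheme and entity names are assumptions made for this sketch.
def divide_entities(instruction_entities, agent_ids):
    """Split the entities named in a language instruction across agents."""
    assignment = {a: [] for a in agent_ids}
    for i, entity in enumerate(instruction_entities):
        assignment[agent_ids[i % len(agent_ids)]].append(entity)
    return assignment

def act(agent_id, my_entities, observation):
    # Each agent pursues only the entities in its own subgoal division.
    for entity in my_entities:
        if entity in observation:
            return f"interact with {entity}"
    return "search"

entities = ["red ball", "blue key"]
assignment = divide_entities(entities, ["agent_0", "agent_1"])
print(assignment)
print(act("agent_0", assignment["agent_0"], "you see a red ball"))
```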
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
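For a flavor of attending over knowledge-graph triples when scoring actions, here is a toy single attention step with bag-of-words embeddings; the paper's stacked hierarchical attention is a learned, multi-layer mechanism, so treat this only as intuition.

```python
# Toy attention over KG triples to support action scoring. The embeddings,
# vocabulary, and triples are assumptions made for this sketch.
import math

def embed(text, vocab):
    return [1.0 if w in text.split() else 0.0 for w in vocab]

def attend(query, keys):
    # Softmax over dot-product scores between the query and each triple key.
    scores = [sum(q * k for q, k in zip(query, key)) for key in keys]
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

vocab = ["kitchen", "apple", "take", "north", "contains"]
triples = ["kitchen contains apple", "kitchen north"]
keys = [embed(t, vocab) for t in triples]

def score_action(action):
    weights = attend(embed(action, vocab), keys)
    # An action is "supported" if it attends strongly to some triple.
    return max(weights)

for a in ["take apple", "go north"]:
    print(a, round(score_action(a), 3))
```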
- How to Avoid Being Eaten by a Grue: Structured Exploration Strategies for Textual Worlds [16.626095390308304]
We introduce Q*BERT, an agent that learns to build a knowledge graph of the world by answering questions.
We further introduce MC!Q*BERT, an agent that uses a knowledge-graph-based intrinsic motivation to detect bottlenecks.
We present an ablation study and results demonstrating how our method outperforms the current state-of-the-art on nine text games.
arXiv Detail & Related papers (2020-06-12T18:24:06Z)
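Knowledge-graph-based intrinsic motivation can be sketched as rewarding growth of the agent's graph; in the sketch below, a hard-coded triple extractor stands in for Q*BERT's question-answering module.

```python
# Minimal sketch of KG-growth intrinsic reward; the extractor is a fake
# stand-in for a QA model, shown only to make the reward logic concrete.
def extract_triples(observation):
    # Assumption: a QA model would produce triples; here we fake one case.
    if "apple" in observation:
        return {("kitchen", "contains", "apple")}
    return set()

class IntrinsicKG:
    def __init__(self):
        self.graph = set()

    def reward(self, observation):
        """Positive intrinsic reward only when the KG actually grows."""
        new = extract_triples(observation) - self.graph
        self.graph |= new
        return float(len(new))

kg = IntrinsicKG()
print(kg.reward("You see an apple."))  # 1.0: novel knowledge
print(kg.reward("You see an apple."))  # 0.0: nothing new, a possible bottleneck
```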
- Learning from Learners: Adapting Reinforcement Learning Agents to be Competitive in a Card Game [71.24825724518847]
We present a study on how popular reinforcement learning algorithms can be adapted to learn and to play a real-world implementation of a competitive multiplayer card game.
We propose specific training and validation routines for the learning agents in order to evaluate how the agents learn to be competitive, and explain how they adapt to each other's playing style.
arXiv Detail & Related papers (2020-04-08T14:11:05Z)
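A rough sketch of an alternating train-then-validate routine of the kind the study describes; the scalar "skill" update and the win-probability model are purely illustrative assumptions, not the authors' algorithms.

```python
# Illustrative train/validate alternation for a competitive card-game agent.
import random

def play_game(agent, opponent):
    # Placeholder for a full game; returns True if `agent` wins.
    return random.random() < agent["skill"] / (agent["skill"] + opponent["skill"])

def train_and_validate(agent, peers, validators, epochs=3, games=100):
    for epoch in range(epochs):
        for _ in range(games):                 # training phase: adapt to peers
            play_game(agent, random.choice(peers))
            agent["skill"] += 0.01             # stand-in for a learning update
        # Validation phase: fixed opponents measure competitiveness.
        wins = sum(play_game(agent, v) for v in validators for _ in range(20))
        print(f"epoch {epoch}: validation win rate "
              f"{wins / (20 * len(validators)):.2f}")

agent = {"skill": 1.0}
train_and_validate(agent, peers=[{"skill": 1.0}], validators=[{"skill": 1.5}])
```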
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
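A minimal sketch of maintaining a dynamic belief graph from raw text, where naive string matching stands in for the learned end-to-end graph updater the paper investigates.

```python
# Sketch of a belief graph revised as new observations arrive; the location
# pattern and triple schema are assumptions made for this illustration.
class BeliefGraph:
    def __init__(self):
        self.triples = set()

    def update(self, observation):
        """Revise beliefs from text; later facts overwrite stale ones."""
        text = observation.lower()
        if "in the" in text:
            room = text.split("in the", 1)[1].strip().split()[0].strip(".")
            # Player location is a mutable belief: drop the stale triple.
            self.triples = {t for t in self.triples
                            if not (t[0] == "player" and t[1] == "at")}
            self.triples.add(("player", "at", room))

graph = BeliefGraph()
graph.update("You are in the kitchen.")
graph.update("You are in the garden.")
print(graph.triples)  # {('player', 'at', 'garden')}
```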