Deep Reinforcement Learning with Stacked Hierarchical Attention for
Text-based Games
- URL: http://arxiv.org/abs/2010.11655v3
- Date: Fri, 25 Dec 2020 06:38:23 GMT
- Authors: Yunqiu Xu, Meng Fang, Ling Chen, Yali Du, Joey Tianyi Zhou, Chengqi
Zhang
- Score: 64.11746320061965
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We study reinforcement learning (RL) for text-based games, which are
interactive simulations in the context of natural language. While different
methods have been developed to represent the environment information and
language actions, existing RL agents are not empowered with any reasoning
capabilities to deal with textual games. In this work, we aim to conduct
explicit reasoning with knowledge graphs for decision making, so that the
actions of an agent are generated and supported by an interpretable inference
procedure. We propose a stacked hierarchical attention mechanism to construct
an explicit representation of the reasoning process by exploiting the structure
of the knowledge graph. We extensively evaluate our method on a number of
man-made benchmark games, and the experimental results demonstrate that our
method performs better than existing text-based agents.
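To make the mechanism concrete, below is a minimal sketch of a stacked hierarchical attention readout over knowledge-graph node embeddings, written against PyTorch. It illustrates the general idea described in the abstract only; it is not the authors' released implementation, and all module, shape, and variable names are assumptions.

```python
# Minimal sketch: stacked low-level attention within each KG subgraph,
# plus high-level attention over subgraph summaries. Illustrative only.
import torch
import torch.nn as nn

class StackedHierarchicalAttention(nn.Module):
    def __init__(self, dim: int, num_heads: int = 4, num_layers: int = 2):
        super().__init__()
        # Low level: stacked self-attention within each subgraph's nodes.
        self.node_attn = nn.ModuleList(
            [nn.MultiheadAttention(dim, num_heads, batch_first=True)
             for _ in range(num_layers)]
        )
        # High level: attention over per-subgraph summaries.
        self.graph_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)

    def forward(self, subgraphs, query):
        # subgraphs: list of (1, n_nodes, dim) node-embedding tensors
        # query:     (1, 1, dim) encoding of the current observation
        summaries = []
        for nodes in subgraphs:
            h = nodes
            for attn in self.node_attn:
                h, _ = attn(h, h, h)           # low-level self-attention
            summaries.append(h.mean(dim=1))    # (1, dim) subgraph summary
        graph = torch.stack(summaries, dim=1)  # (1, n_subgraphs, dim)
        out, weights = self.graph_attn(query, graph, graph)
        # `weights` records which subgraphs informed the decision, which
        # is the inspectable part of the reasoning trace.
        return out.squeeze(1), weights
```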
Related papers
- On the Effects of Fine-tuning Language Models for Text-Based Reinforcement Learning [19.057241328691077]
We show that rich semantic understanding leads to efficient training of text-based RL agents.
We describe the occurrence of semantic degeneration as a consequence of inappropriate fine-tuning of language models.
arXiv Detail & Related papers (2024-04-15T23:05:57Z)
- Learning Symbolic Rules over Abstract Meaning Representations for Textual Reinforcement Learning [63.148199057487226]
We propose a modular, NEuroSymbolic Textual Agent (NESTA) that combines a general semantic parser with a rule induction system to learn interpretable rules as policies.
Our experiments show that the proposed NESTA method outperforms deep reinforcement learning-based techniques by achieving better generalization to unseen test games and learning from fewer training interactions.
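As a toy illustration of rules-as-policies: an action fires when all of its premise predicates hold in the parsed observation. The predicates and rules below are hand-written stand-ins for what NESTA induces automatically from AMR parses.

```python
# Toy rules-as-policy sketch; predicates and rules are hypothetical.
Rule = tuple[frozenset[str], str]  # (premise predicates, action)

RULES: list[Rule] = [
    (frozenset({"locked(door)", "carrying(key)"}), "unlock door with key"),
    (frozenset({"closed(door)"}), "open door"),
    (frozenset({"at(kitchen)", "raw(potato)"}), "cook potato with stove"),
]

def act(predicates: set[str]) -> str | None:
    """Return the first action whose premises are all satisfied."""
    for premises, action in RULES:
        if premises <= predicates:
            return action
    return None

# act({"locked(door)", "carrying(key)"}) -> "unlock door with key"
```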
arXiv Detail & Related papers (2023-07-05T23:21:05Z)
- Knowledge-enhanced Agents for Interactive Text Games [16.055119735473017]
We propose a knowledge-injection framework for improved functional grounding of agents in text-based games.
We consider two forms of domain knowledge that we inject into learning-based agents: memory of previous correct actions and affordances of relevant objects in the environment.
Our framework supports two representative model classes: reinforcement learning agents and language model agents.
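A rough sketch of the two injected knowledge forms named above, using hypothetical data structures (the paper's actual interfaces may differ): an affordance table pruning candidate commands, and a memory of previously rewarded actions that get tried first.

```python
# Sketch of knowledge injection: affordances (object -> supported verbs)
# filter candidates; a success memory re-ranks them. Names hypothetical.
class KnowledgeInjectedAgent:
    def __init__(self, affordances: dict[str, set[str]]):
        self.affordances = affordances
        self.correct_memory: list[str] = []  # actions that earned reward

    def filter_candidates(self, candidates: list[str],
                          visible_objects: set[str]) -> list[str]:
        keep = []
        for cmd in candidates:
            words = cmd.split()
            verb, obj = words[0], words[-1]
            # Keep commands whose verb is afforded by a visible object.
            if obj in visible_objects and verb in self.affordances.get(obj, set()):
                keep.append(cmd)
        # Try previously successful actions first.
        keep.sort(key=lambda c: c not in self.correct_memory)
        return keep

    def record_success(self, action: str) -> None:
        self.correct_memory.append(action)
```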
arXiv Detail & Related papers (2023-05-08T23:31:39Z)
- Inherently Explainable Reinforcement Learning in Natural Language [14.117921448623342]
We focus on the task of creating a reinforcement learning agent that is inherently explainable.
This Hierarchically Explainable Reinforcement Learning agent operates in Interactive Fictions, text-based game environments.
Our agent is designed to treat explainability as a first-class citizen.
arXiv Detail & Related papers (2021-12-16T14:24:35Z)
- LOA: Logical Optimal Actions for Text-based Interaction Games [63.003353499732434]
We present Logical Optimal Actions (LOA), an action decision architecture for reinforcement learning applications.
LOA combines a neural network with a symbolic knowledge acquisition approach for natural language interaction games.
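A hypothetical neuro-symbolic action choice in the spirit of this entry: a learned scorer ranks commands, while a symbolic knowledge base vetoes those that contradict known facts. This is not LOA's actual API, just a sketch of the combination.

```python
# Neural ranking gated by symbolic consistency; all names hypothetical.
def choose_action(candidates, score_fn, facts, contradicts):
    """score_fn: cmd -> float; contradicts: (cmd, facts) -> bool."""
    admissible = [c for c in candidates if not contradicts(c, facts)]
    if not admissible:               # fall back to the raw neural ranking
        admissible = list(candidates)
    return max(admissible, key=score_fn)
```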
arXiv Detail & Related papers (2021-10-21T08:36:11Z)
- Generalization in Text-based Games via Hierarchical Reinforcement Learning [42.70991837415775]
We introduce a hierarchical framework built upon a knowledge-graph-based RL agent.
At the high level, a meta-policy is executed to decompose the whole game into a set of subtasks specified by textual goals.
At the low level, a sub-policy is executed to conduct goal-conditioned reinforcement learning.
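A skeleton of that two-level loop: a meta-policy emits a textual subgoal, and a goal-conditioned sub-policy acts until the subgoal is judged complete. All callables here are hypothetical stand-ins, not the paper's code.

```python
# Two-level control loop sketch; env/policies are assumed interfaces.
def hierarchical_episode(env, meta_policy, sub_policy, goal_done,
                         max_steps: int = 100) -> float:
    obs, total = env.reset(), 0.0
    goal = meta_policy(obs)               # e.g. "find the key"
    for _ in range(max_steps):
        action = sub_policy(obs, goal)    # goal-conditioned action choice
        obs, reward, done = env.step(action)
        total += reward
        if done:
            break
        if goal_done(obs, goal):          # subtask finished:
            goal = meta_policy(obs)       # ask for the next subgoal
    return total
```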
arXiv Detail & Related papers (2021-09-21T05:27:33Z)
- Interactive Fiction Game Playing as Multi-Paragraph Reading Comprehension with Reinforcement Learning [94.50608198582636]
Interactive Fiction (IF) games with real human-written natural language texts provide a new natural evaluation for language understanding techniques.
We take a novel perspective on IF game solving and re-formulate it as Multi-Passage Reading Comprehension (MPRC) tasks.
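A toy version of that reformulation: past observations are stored as passages, the most relevant ones are retrieved for the current query, and candidate actions are scored against that context. The word-overlap scorer below is a crude stand-in for the paper's trained reading-comprehension model.

```python
# Multi-passage retrieval + candidate scoring sketch; illustrative only.
def retrieve(passages: list[str], query: str, k: int = 3) -> list[str]:
    q = set(query.lower().split())
    return sorted(passages,
                  key=lambda p: -len(q & set(p.lower().split())))[:k]

def pick_action(passages: list[str], query: str,
                candidates: list[str]) -> str:
    ctx = set(" ".join(retrieve(passages, query)).lower().split())
    # Choose the candidate best supported by the retrieved passages.
    return max(candidates, key=lambda a: len(ctx & set(a.lower().split())))
```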
arXiv Detail & Related papers (2020-10-05T23:09:20Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
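A sketch of maintaining a belief graph from raw text: each observation adds (subject, relation, object) triples for the policy to condition on. The regex extractor is a crude stand-in for the end-to-end learned graph updater in the paper.

```python
# Belief-graph update from text; pattern extraction is illustrative.
import re

def update_belief_graph(graph: set[tuple[str, str, str]],
                        observation: str) -> set[tuple[str, str, str]]:
    # e.g. "The key is on the table." -> ("key", "on", "table")
    for subj, rel, obj in re.findall(
            r"\bthe (\w+) is (on|in|under) the (\w+)", observation.lower()):
        graph.add((subj, rel, obj))
    return graph
```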
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
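A bare-bones flavor of the exploration-then-imitation recipe: random exploration collects trajectories, and the best-returning one is kept as a demonstration to imitate. Purely illustrative; the env and action interfaces are assumptions.

```python
# Exploration sketch: keep the best random trajectory for imitation.
import random

def explore(env, actions: list[str], episodes: int = 50,
            horizon: int = 30) -> list[str]:
    best_traj, best_ret = [], float("-inf")
    for _ in range(episodes):
        obs, traj, ret = env.reset(), [], 0.0
        for _ in range(horizon):
            action = random.choice(actions)
            obs, reward, done = env.step(action)
            traj.append(action)
            ret += reward
            if done:
                break
        if ret > best_ret:
            best_traj, best_ret = traj, ret
    return best_traj  # supervision targets for imitation learning
```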
arXiv Detail & Related papers (2020-01-24T03:03:51Z)