Reading and Acting while Blindfolded: The Need for Semantics in Text Game Agents
- URL: http://arxiv.org/abs/2103.13552v1
- Date: Thu, 25 Mar 2021 01:35:27 GMT
- Title: Reading and Acting while Blindfolded: The Need for Semantics in Text Game Agents
- Authors: Shunyu Yao, Karthik Narasimhan, Matthew Hausknecht
- Abstract summary: It remains unclear to what extent artificial agents utilize semantic understanding of the text.
We propose an inverse dynamics decoder to regularize the representation space and encourage exploration.
We discuss the implications of our findings for designing future agents with stronger semantic understanding.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Text-based games simulate worlds and interact with players using natural
language. Recent work has used them as a testbed for autonomous
language-understanding agents, with the motivation being that understanding the
meanings of words or semantics is a key component of how humans understand,
reason, and act in these worlds. However, it remains unclear to what extent
artificial agents utilize semantic understanding of the text. To this end, we
perform experiments to systematically reduce the amount of semantic information
available to a learning agent. Surprisingly, we find that an agent is capable
of achieving high scores even in the complete absence of language semantics,
indicating that the currently popular experimental setup and models may be
poorly designed to understand and leverage game texts. To remedy this
deficiency, we propose an inverse dynamics decoder to regularize the
representation space and encourage exploration, which shows improved
performance on several games including Zork I. We discuss the implications of
our findings for designing future agents with stronger semantic understanding.
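The abstract does not detail the decoder itself; the sketch below shows one common way an inverse dynamics objective is attached to a text-game agent: encodings of two consecutive observations are decoded into the action that caused the transition, and the resulting cross-entropy term is added to the usual RL loss, which pushes the encoder to retain action-relevant semantics in its representation space. All module names, dimensions, and the GRU text encoder are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' code): an inverse dynamics auxiliary loss
# for a text-game agent. A small decoder is trained to identify which of the
# valid actions produced the transition (o_t -> o_{t+1}), regularizing the
# observation representations. Sizes and architecture are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TextEncoder(nn.Module):
    """Embeds a tokenized string and pools it into a fixed-size vector."""
    def __init__(self, vocab_size: int, embed_dim: int = 128, hidden_dim: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.gru = nn.GRU(embed_dim, hidden_dim, batch_first=True)

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        _, h = self.gru(self.embed(token_ids))   # h: (1, batch, hidden)
        return h.squeeze(0)                      # (batch, hidden)

class InverseDynamicsDecoder(nn.Module):
    """Scores each candidate action against the (o_t, o_{t+1}) transition."""
    def __init__(self, hidden_dim: int = 128):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(2 * hidden_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, obs_t, obs_next, action_reps):
        # obs_t, obs_next: (batch, hidden); action_reps: (batch, num_actions, hidden)
        trans = self.proj(torch.cat([obs_t, obs_next], dim=-1))   # (batch, hidden)
        return torch.einsum("bh,bah->ba", trans, action_reps)     # action logits

def inverse_dynamics_loss(decoder, obs_t, obs_next, action_reps, taken_action_idx):
    """Cross-entropy on 'which valid action caused this transition';
    added as an auxiliary term to the agent's usual RL (e.g. TD) loss."""
    logits = decoder(obs_t, obs_next, action_reps)
    return F.cross_entropy(logits, taken_action_idx)
```

Because the decoder is trained through the same observation encoder the policy uses, the encoder cannot collapse distinct game states onto similar vectors, which is the regularization and exploration benefit the abstract refers to.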
Related papers
- AMONGAGENTS: Evaluating Large Language Models in the Interactive Text-Based Social Deduction Game [12.384945632524424]
This paper focuses on creating proxies of human behavior in simulated environments, with Among Us utilized as a tool for studying simulated human behavior.
Our work demonstrates that state-of-the-art large language models (LLMs) can effectively grasp the game rules and make decisions based on the current context.
arXiv Detail & Related papers (2024-07-23T14:34:38Z)
- Symbolic Learning Enables Self-Evolving Agents [55.625275970720374]
We introduce agent symbolic learning, a systematic framework that enables language agents to optimize themselves on their own.
Agent symbolic learning is designed to optimize the symbolic network within language agents by mimicking two fundamental algorithms in connectionist learning.
We conduct proof-of-concept experiments on both standard benchmarks and complex real-world tasks.
arXiv Detail & Related papers (2024-06-26T17:59:18Z)
- On the Effects of Fine-tuning Language Models for Text-Based Reinforcement Learning [19.057241328691077]
We show that rich semantic understanding leads to efficient training of text-based RL agents.
We describe the occurrence of semantic degeneration as a consequence of inappropriate fine-tuning of language models.
arXiv Detail & Related papers (2024-04-15T23:05:57Z)
- Learning to Model the World with Language [100.76069091703505]
To interact with humans and act in the world, agents need to understand the range of language that people use and relate it to the visual world.
Our key idea is that agents should interpret such diverse language as a signal that helps them predict the future.
We instantiate this in Dynalang, an agent that learns a multimodal world model to predict future text and image representations.
arXiv Detail & Related papers (2023-07-31T17:57:49Z)
- Inherently Explainable Reinforcement Learning in Natural Language [14.117921448623342]
We focus on the task of creating a reinforcement learning agent that is inherently explainable.
This Hierarchically Explainable Reinforcement Learning agent operates in Interactive Fictions, text-based game environments.
Our agent is designed to treat explainability as a first-class citizen.
arXiv Detail & Related papers (2021-12-16T14:24:35Z)
- Deep Reinforcement Learning with Stacked Hierarchical Attention for Text-based Games [64.11746320061965]
We study reinforcement learning for text-based games, which are interactive simulations in the context of natural language.
We aim to conduct explicit reasoning with knowledge graphs for decision making, so that the actions of an agent are generated and supported by an interpretable inference procedure.
We extensively evaluate our method on a number of man-made benchmark games, and the experimental results demonstrate that our method performs better than existing text-based agents.
arXiv Detail & Related papers (2020-10-22T12:40:22Z)
- Interactive Fiction Game Playing as Multi-Paragraph Reading Comprehension with Reinforcement Learning [94.50608198582636]
Interactive Fiction (IF) games with real human-written natural language texts provide a new natural evaluation for language understanding techniques.
We take a novel perspective of IF game solving and re-formulate it as Multi-Passage Reading (MPRC) tasks.
arXiv Detail & Related papers (2020-10-05T23:09:20Z)
- Learning Dynamic Belief Graphs to Generalize on Text-Based Games [55.59741414135887]
Playing text-based games requires skills in processing natural language and sequential decision making.
In this work, we investigate how an agent can plan and generalize in text-based games using graph-structured representations learned end-to-end from raw text.
arXiv Detail & Related papers (2020-02-21T04:38:37Z)
- Exploration Based Language Learning for Text-Based Games [72.30525050367216]
This work presents an exploration and imitation-learning-based agent capable of state-of-the-art performance in playing text-based computer games.
Text-based computer games describe their world to the player through natural language and expect the player to interact with the game using text.
These games are of interest as they can be seen as a testbed for language understanding, problem-solving, and language generation by artificial agents.
arXiv Detail & Related papers (2020-01-24T03:03:51Z)