How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and
Act in Fantasy Worlds
- URL: http://arxiv.org/abs/2010.00685v3
- Date: Tue, 25 May 2021 15:26:16 GMT
- Title: How to Motivate Your Dragon: Teaching Goal-Driven Agents to Speak and
Act in Fantasy Worlds
- Authors: Prithviraj Ammanabrolu, Jack Urbanek, Margaret Li, Arthur Szlam, Tim
Rocktäschel, Jason Weston
- Abstract summary: We seek to create agents that both act and communicate with other agents in pursuit of a goal.
We introduce a reinforcement learning system that incorporates large-scale language modeling-based and commonsense reasoning-based pre-training.
We conduct zero-shot evaluations using held-out human expert demonstrations, showing that our agents are able to act consistently and talk naturally with respect to their motivations.
- Score: 47.7511759322784
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We seek to create agents that both act and communicate with other agents in
pursuit of a goal. Towards this end, we extend LIGHT (Urbanek et al. 2019) -- a
large-scale crowd-sourced fantasy text-game -- with a dataset of quests. These
contain natural language motivations paired with in-game goals and human
demonstrations; completing a quest might require dialogue or actions (or both).
We introduce a reinforcement learning system that (1) incorporates large-scale
language modeling-based and commonsense reasoning-based pre-training to imbue
the agent with relevant priors; and (2) leverages a factorized action space of
action commands and dialogue, balancing between the two. We conduct zero-shot
evaluations using held-out human expert demonstrations, showing that our agents
are able to act consistently and talk naturally with respect to their
motivations.
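The factorized action space in the abstract above can be pictured as a shared encoder over the game observation feeding a small switch between acting and talking, plus separate scorers over candidate game commands and candidate utterances. The PyTorch sketch below is a minimal illustration under those assumptions; the class, method, and argument names are hypothetical and are not taken from the paper's released code.

```python
import torch
import torch.nn as nn

class FactorizedPolicy(nn.Module):
    """Illustrative policy over a factorized action space: a shared encoder over
    the game observation, a switch deciding whether to act or to talk, and
    separate scorers for candidate game commands and candidate utterances."""

    def __init__(self, obs_dim: int, hidden_dim: int = 256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(obs_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim), nn.ReLU(),
        )
        self.switch = nn.Linear(hidden_dim, 2)        # 0 = act, 1 = say
        self.act_proj = nn.Linear(hidden_dim, hidden_dim)
        self.say_proj = nn.Linear(hidden_dim, hidden_dim)

    def forward(self, obs, act_candidates, say_candidates):
        # obs: (obs_dim,); *_candidates: (num_candidates, hidden_dim),
        # assumed to be pre-encoded elsewhere (e.g. by a text encoder).
        state = self.encoder(obs)
        switch_logits = self.switch(state)                      # (2,)
        act_logits = act_candidates @ self.act_proj(state)      # (num_act,)
        say_logits = say_candidates @ self.say_proj(state)      # (num_say,)
        return switch_logits, act_logits, say_logits

# Example usage with random tensors (shapes only; not real game data):
# policy = FactorizedPolicy(obs_dim=128)
# sw, act, say = policy(torch.randn(128), torch.randn(20, 256), torch.randn(20, 256))
```

During training, the switch logits let the learning objective balance how often the agent issues game commands versus dialogue, which is the balance between the two sub-spaces that the abstract refers to.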
Related papers
- STARLING: Self-supervised Training of Text-based Reinforcement Learning Agent with Large Language Models [5.786039929801102]
Existing environments for interactive fiction games are domain-specific or time-consuming to generate, and they do not train RL agents to master a specific set of skills.
We introduce STARLING, an interactive environment for self-supervised RL in text-based games that bootstraps agents with automatically generated games, boosting their performance and their ability to generalize to the goals of the target environment.
arXiv Detail & Related papers (2024-06-09T18:07:47Z)
- Zero-Shot Goal-Directed Dialogue via RL on Imagined Conversations [70.7884839812069]
Large language models (LLMs) have emerged as powerful and general solutions to many natural language tasks.
However, many of the most important applications of language generation are interactive, where an agent has to talk to a person to reach a desired outcome.
In this work, we explore a new method for adapting LLMs with RL for such goal-directed dialogue.
arXiv Detail & Related papers (2023-11-09T18:45:16Z)
- I Cast Detect Thoughts: Learning to Converse and Guide with Intents and Theory-of-Mind in Dungeons and Dragons [82.28503603235364]
We study teacher-student natural language interactions in a goal-driven environment in Dungeons and Dragons.
Our approach is to decompose and model these interactions into (1) the Dungeon Master's intent to guide players toward a given goal; (2) the DM's guidance utterance to the players expressing this intent; and (3) a theory-of-mind (ToM) model that anticipates the players' reaction to the guidance one turn into the future.
arXiv Detail & Related papers (2022-12-20T08:06:55Z)
- Werewolf Among Us: A Multimodal Dataset for Modeling Persuasion Behaviors in Social Deduction Games [45.55448048482881]
We introduce the first multimodal dataset for modeling persuasion behaviors.
Our dataset includes 199 dialogue transcriptions and videos, 26,647 utterance level annotations of persuasion strategy, and game level annotations of deduction game outcomes.
arXiv Detail & Related papers (2022-12-16T04:52:53Z)
- Few-shot Language Coordination by Modeling Theory of Mind [95.54446989205117]
We study the task of few-shot language coordination.
We require the lead agent to coordinate with a population of agents with different linguistic abilities.
This requires the ability to model the partner's beliefs, a vital component of human communication.
arXiv Detail & Related papers (2021-07-12T19:26:11Z)
- Keep CALM and Explore: Language Models for Action Generation in Text-based Games [27.00685301984832]
We propose the Contextual Action Language Model (CALM) to generate a compact set of action candidates at each game state.
We combine CALM with a reinforcement learning agent that re-ranks the generated action candidates to maximize in-game rewards (an illustrative sketch of this generate-then-rerank loop appears after this list).
arXiv Detail & Related papers (2020-10-06T17:36:29Z)
- Will I Sound Like Me? Improving Persona Consistency in Dialogues through Pragmatic Self-Consciousness [62.55060760615656]
Recent models tackling consistency often train with additional Natural Language Inference (NLI) labels or attach trained extra modules to the generative agent for maintaining consistency.
Inspired by social cognition and pragmatics, we endow existing dialogue agents with public self-consciousness on the fly through an imaginary listener.
Our approach, based on the Rational Speech Acts framework, can make dialogue agents refrain from uttering contradictions.
arXiv Detail & Related papers (2020-04-13T08:16:16Z)
- I love your chain mail! Making knights smile in a fantasy game world: Open-domain goal-oriented dialogue agents [69.68400056148336]
We train a goal-oriented model with reinforcement learning against an imitation-learned "chit-chat" model.
We show that both models outperform an inverse model baseline and can converse naturally with their dialogue partner in order to achieve goals.
arXiv Detail & Related papers (2020-02-07T16:22:36Z)
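As referenced in the Keep CALM and Explore entry above, the generate-then-rerank pattern can be sketched as follows: a causal language model proposes a small set of candidate action strings for the current observation, and a learned value head re-ranks them by estimated in-game return. This is a minimal, hypothetical sketch, assuming a Hugging Face causal LM and some unspecified text encoder for the Q-network inputs; the prompt format, model choice, and function names are illustrative assumptions, not the paper's released code.

```python
import torch
import torch.nn as nn
from transformers import AutoModelForCausalLM, AutoTokenizer

class CandidateQNetwork(nn.Module):
    """Illustrative Q-network scoring embeddings of (observation, action) pairs."""
    def __init__(self, encoder_dim: int = 768):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(encoder_dim, 128), nn.ReLU(), nn.Linear(128, 1))

    def forward(self, pair_embeddings: torch.Tensor) -> torch.Tensor:
        # pair_embeddings: (num_candidates, encoder_dim) -> (num_candidates,)
        return self.head(pair_embeddings).squeeze(-1)

def propose_actions(lm, tokenizer, observation: str, k: int = 10) -> list[str]:
    """Sample k candidate action strings from the language model."""
    prompt = f"{observation}\n> "  # hypothetical prompt format
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = lm.generate(
        **inputs,
        do_sample=True,
        num_return_sequences=k,
        max_new_tokens=8,
        pad_token_id=tokenizer.eos_token_id,
    )
    new_tokens = outputs[:, inputs["input_ids"].shape[1]:]
    return [tokenizer.decode(t, skip_special_tokens=True).strip() for t in new_tokens]

def select_action(lm, tokenizer, encode_pair, q_net, observation: str, k: int = 10) -> str:
    """Generate candidates with the LM, then pick the one the Q-network ranks highest.
    `encode_pair` is any function mapping an (observation, action) string to a vector."""
    candidates = propose_actions(lm, tokenizer, observation, k)
    embeddings = torch.stack([encode_pair(f"{observation} [ACT] {a}") for a in candidates])
    q_values = q_net(embeddings)
    return candidates[int(q_values.argmax())]

# Example wiring (model choice is an assumption, not from the paper):
# tokenizer = AutoTokenizer.from_pretrained("gpt2")
# lm = AutoModelForCausalLM.from_pretrained("gpt2")
# q_net = CandidateQNetwork()
```

In this pattern the language model supplies priors over plausible actions, while the Q-network supplies the in-game reward signal used for re-ranking.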
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.