Asking for Knowledge: Training RL Agents to Query External Knowledge Using Language
- URL: http://arxiv.org/abs/2205.06111v1
- Date: Thu, 12 May 2022 14:20:31 GMT
- Title: Asking for Knowledge: Training RL Agents to Query External Knowledge Using Language
- Authors: Iou-Jen Liu, Xingdi Yuan, Marc-Alexandre Côté, Pierre-Yves Oudeyer, Alexander G. Schwing
- Abstract summary: We introduce two new environments: the grid-world-based Q-BabyAI and the text-based Q-TextWorld.
We propose the "Asking for Knowledge" (AFK) agent, which learns to generate language commands to query for meaningful knowledge.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: To solve difficult tasks, humans ask questions to acquire knowledge from
external sources. In contrast, classical reinforcement learning agents lack
such an ability and often resort to exploratory behavior. This is exacerbated
as few present-day environments support querying for knowledge. In order to
study how agents can be taught to query external knowledge via language, we
first introduce two new environments: the grid-world-based Q-BabyAI and the
text-based Q-TextWorld. In addition to physical interactions, an agent can
query an external knowledge source specialized for these environments to gather
information. Second, we propose the "Asking for Knowledge" (AFK) agent, which
learns to generate language commands to query for meaningful knowledge that
helps solve the tasks. AFK leverages a non-parametric memory, a pointer
mechanism, and an episodic exploration bonus to tackle (1) a large query
language space, (2) irrelevant information, and (3) delayed reward for making
meaningful queries. Extensive experiments demonstrate that the AFK agent
outperforms recent baselines on the challenging Q-BabyAI and Q-TextWorld
environments.
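The episodic exploration bonus mentioned in the abstract can be sketched as follows. This is a minimal count-based illustration, not the authors' implementation: the choice of keying the bonus on the raw query string and the 1/sqrt(n) decay are assumptions for the sake of the example.

```python
from collections import defaultdict


class EpisodicQueryBonus:
    """Count-based episodic bonus: reward the agent most the first time it
    issues a given query within an episode, decaying with each repeat."""

    def __init__(self, scale=1.0):
        self.scale = scale
        self.counts = defaultdict(int)  # query string -> uses this episode

    def reset(self):
        """Call at the start of each episode to forget past queries."""
        self.counts.clear()

    def bonus(self, query: str) -> float:
        """Return an intrinsic reward of scale / sqrt(n) for the n-th
        occurrence of `query` in the current episode."""
        self.counts[query] += 1
        return self.scale / self.counts[query] ** 0.5


b = EpisodicQueryBonus(scale=0.1)
r1 = b.bonus("where is the red key")  # first use: full bonus 0.1
r2 = b.bonus("where is the red key")  # repeat: 0.1 / sqrt(2)
```

Such a bonus addresses the delayed-reward problem (3) by immediately rewarding novel queries, while the decay discourages the agent from spamming the same question.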
Related papers
- DIVKNOWQA: Assessing the Reasoning Ability of LLMs via Open-Domain Question Answering over Knowledge Base and Text
Large Language Models (LLMs) have exhibited impressive generation capabilities, but they suffer from hallucinations when relying on their internal knowledge.
Retrieval-augmented LLMs have emerged as a potential solution to ground LLMs in external knowledge.
arXiv Detail & Related papers (2023-10-31T04:37:57Z)
- Ask Before You Act: Generalising to Novel Environments by Asking Questions
We investigate the ability of an RL agent to learn to ask natural language questions as a tool to understand its environment.
We do this by endowing this agent with the ability of asking "yes-no" questions to an all-knowing Oracle.
We observe a significant increase in generalisation performance compared to a baseline agent unable to ask questions.
arXiv Detail & Related papers (2022-09-10T13:17:21Z)
- LaKo: Knowledge-driven Visual Question Answering via Late Knowledge-to-Text Injection
We propose LaKo, a knowledge-driven VQA method via Late Knowledge-to-text Injection.
To effectively incorporate an external KG, we transfer triples into text and propose a late injection mechanism.
In evaluation on the OKVQA dataset, our method achieves state-of-the-art results.
arXiv Detail & Related papers (2022-07-26T13:29:51Z)
- OPERA: Harmonizing Task-Oriented Dialogs and Information Seeking Experience
Existing studies in conversational AI mostly treat task-oriented dialog (TOD) and question answering (QA) as separate tasks.
We propose a new task, Open-Book TOD (OB-TOD), which combines TOD with QA and expands the external knowledge sources.
We propose a unified model OPERA which can appropriately access explicit and implicit external knowledge to tackle the defined task.
arXiv Detail & Related papers (2022-06-24T18:21:26Z)
- Learning to Query Internet Text for Informing Reinforcement Learning Agents
We tackle the problem of extracting useful information from natural language found in the wild.
We train reinforcement learning agents to learn to query these sources as a human would.
We show that our method correctly learns to execute queries to maximize reward in a reinforcement learning setting.
arXiv Detail & Related papers (2022-05-25T23:07:10Z)
- TegTok: Augmenting Text Generation via Task-specific and Open-world Knowledge
We propose augmenting TExt Generation via Task-specific and Open-world Knowledge (TegTok) in a unified framework.
Our model selects knowledge entries from two types of knowledge sources through dense retrieval and then injects them into the input encoding and output decoding stages respectively.
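The selection step described in this summary can be illustrated with a generic top-k inner-product retrieval over pre-computed embeddings. This is a hedged sketch of the dense-retrieval idea in general, not the TegTok implementation; the function name and the toy 2-dimensional embeddings are invented for illustration.

```python
# Generic dense retrieval: return the indices of the top-k knowledge
# entries whose embeddings score highest (by inner product) against
# the query embedding.
def top_k_entries(query_vec, entry_vecs, k=2):
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    scored = sorted(enumerate(entry_vecs),
                    key=lambda iv: dot(query_vec, iv[1]),
                    reverse=True)
    return [i for i, _ in scored[:k]]


entries = [[1.0, 0.0], [0.0, 1.0], [0.7, 0.7]]
print(top_k_entries([1.0, 0.1], entries, k=2))  # → [0, 2]
```

In a real system the embeddings would come from a trained dual encoder and the selected entries would then be injected into the encoder input and the decoding stage, as the summary describes.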
arXiv Detail & Related papers (2022-03-16T10:37:59Z)
- Enhancing Question Generation with Commonsense Knowledge
We propose a multi-task learning framework to introduce commonsense knowledge into the question generation process.
Experimental results on SQuAD show that our proposed methods are able to noticeably improve the QG performance on both automatic and human evaluation metrics.
arXiv Detail & Related papers (2021-06-19T08:58:13Z)
- Contextualized Knowledge-aware Attentive Neural Network: Enhancing Answer Selection with Knowledge
We extensively investigate approaches to enhancing the answer selection model with external knowledge from a knowledge graph (KG).
First, we present a context-knowledge interaction learning framework, Knowledge-aware Neural Network (KNN), which learns the QA sentence representations by considering a tight interaction between the external knowledge from the KG and the textual information.
To handle the diversity and complexity of KG information, we propose a Contextualized Knowledge-aware Attentive Neural Network (CKANN), which improves knowledge representation learning with structure information via a customized Graph Convolutional Network (GCN) and comprehensively learns context-based and knowledge-based sentence representations via …
arXiv Detail & Related papers (2021-04-12T05:52:20Z)
- KRISP: Integrating Implicit and Symbolic Knowledge for Open-Domain Knowledge-Based VQA
One of the most challenging question types in VQA is when answering the question requires outside knowledge not present in the image.
In this work we study open-domain knowledge: the setting in which the knowledge required to answer a question is not given or annotated at either training or test time.
We tap into two types of knowledge representation and reasoning. First, implicit knowledge, which can be learned effectively from unsupervised language pre-training and supervised training data with transformer-based models.
arXiv Detail & Related papers (2020-12-20T20:13:02Z)
- Question Answering over Knowledge Base using Language Model Embeddings
This paper focuses on using a pre-trained language model for the Knowledge Base Question Answering task.
We further fine-tune these embeddings with a two-way attention mechanism from the knowledge base to the asked question.
Our method is based on a simple Convolutional Neural Network architecture with a Multi-Head Attention mechanism to represent the asked question.
arXiv Detail & Related papers (2020-10-17T22:59:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.