Integrating Diverse Knowledge Sources for Online One-shot Learning of
Novel Tasks
- URL: http://arxiv.org/abs/2208.09554v3
- Date: Mon, 15 May 2023 16:34:58 GMT
- Authors: James R. Kirk, Robert E. Wray, Peter Lindes, John E. Laird
- Abstract summary: We investigate the challenges and impact of exploiting diverse knowledge sources to learn new tasks online, in one shot, for a simulated office mobile robot.
The resulting agent, developed in the Soar cognitive architecture, uses the following sources of domain and task knowledge.
Results show that an agent's online integration of diverse knowledge sources improves one-shot task learning overall.
- Score: 6.021787236982658
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Autonomous agents are able to draw on a wide variety of potential sources of
task knowledge; however, current approaches invariably focus on only one or two.
Here we investigate the challenges and impact of exploiting diverse knowledge
sources to learn new tasks online, in one shot, for a simulated office mobile
robot. The resulting agent, developed in the Soar cognitive architecture, uses
the following sources of domain and task knowledge: interaction with the
environment, task execution and search knowledge, human natural language
instruction, and responses retrieved from a large language model (GPT-3). We
explore the distinct contributions of these knowledge sources and evaluate the
performance of different combinations in terms of learning correct task
knowledge and human workload. Results show that an agent's online integration
of diverse knowledge sources improves one-shot task learning overall, reducing
human feedback needed for rapid and reliable task learning.
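The integration strategy the abstract describes can be caricatured as a fallback chain over knowledge sources: cheap, autonomous sources (environment interaction, search knowledge, an LLM) are consulted before costly human instruction, and a successful answer is cached so each task is learned in one shot. A minimal sketch, assuming hypothetical `KnowledgeSource` and `OneShotTaskLearner` names (the actual agent is built in the Soar cognitive architecture, not Python):

```python
# Illustrative sketch only -- not the authors' Soar implementation.
# An agent consults knowledge sources in order of increasing human cost
# and accepts the first candidate answer, caching it for one-shot reuse.

from typing import Callable, Optional

class KnowledgeSource:
    def __init__(self, name: str, query_fn: Callable[[str], Optional[str]], cost: int):
        self.name = name          # e.g. "environment", "LLM", "human"
        self.query_fn = query_fn  # returns a candidate answer, or None
        self.cost = cost          # proxy for human workload

class OneShotTaskLearner:
    def __init__(self, sources):
        # Try cheap, autonomous sources before costly human instruction.
        self.sources = sorted(sources, key=lambda s: s.cost)
        self.task_knowledge = {}  # learned once, reused thereafter

    def learn(self, task: str) -> str:
        if task in self.task_knowledge:  # one-shot: no re-learning
            return self.task_knowledge[task]
        for source in self.sources:
            answer = source.query_fn(task)
            if answer is not None:       # this source resolved the task
                self.task_knowledge[task] = answer
                return answer
        raise ValueError(f"no knowledge source could resolve task: {task!r}")
```

Under this caricature, "reducing human feedback" corresponds to how often the fallback chain terminates before reaching the human source.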
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Transferring Knowledge for Reinforcement Learning in Contact-Rich
Manipulation [10.219833196479142]
We address the challenge of transferring knowledge within a family of similar tasks by leveraging multiple skill priors.
Our method learns a latent action space representing the skill embedding from demonstrated trajectories for each prior task.
We have evaluated our method on a set of peg-in-hole insertion tasks and demonstrate better generalization to new tasks that have never been encountered during training.
arXiv Detail & Related papers (2022-09-19T10:31:13Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on
Continual Learning and Functional Composition [26.524289609910653]
A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters.
Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately.
arXiv Detail & Related papers (2022-07-15T19:53:20Z)
- Knowledge-Grounded Dialogue Generation with a Unified Knowledge
Representation [78.85622982191522]
Existing systems perform poorly on unseen topics due to limited topics covered in the training data.
We present PLUG, a language model that homogenizes different knowledge sources to a unified knowledge representation.
It achieves performance comparable to state-of-the-art methods under a fully-supervised setting.
arXiv Detail & Related papers (2021-12-15T07:11:02Z)
- Discovering Generalizable Skills via Automated Generation of Diverse
Tasks [82.16392072211337]
We propose a method to discover generalizable skills via automated generation of a diverse set of tasks.
As opposed to prior work on unsupervised discovery of skills, our method pairs each skill with a unique task produced by a trainable task generator.
A task discriminator defined on the robot behaviors in the generated tasks is jointly trained to estimate the evidence lower bound of the diversity objective.
The learned skills can then be composed in a hierarchical reinforcement learning algorithm to solve unseen target tasks.
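The skill-task pairing above rests on a discriminator that infers which skill produced a given behavior; its log-likelihood, minus the log skill prior, is the standard variational lower bound on skill-behavior mutual information. A toy sketch, with a counting-based stand-in for the learned discriminator and a uniform skill prior assumed (names and structure are illustrative, not the paper's implementation):

```python
# Illustrative sketch: estimating the diversity lower bound
#   I(skill; behavior) >= E[log q(skill | behavior)] - E[log p(skill)]
# with an empirical counting discriminator and a uniform prior p(skill).

import math
from collections import Counter, defaultdict

class CountingDiscriminator:
    """Empirical estimate of q(skill | behavior) from observed pairs."""
    def __init__(self):
        self.counts = defaultdict(Counter)

    def update(self, behavior, skill):
        self.counts[behavior][skill] += 1

    def prob(self, skill, behavior):
        total = sum(self.counts[behavior].values())
        return self.counts[behavior][skill] / total if total else 0.0

def diversity_lower_bound(pairs, n_skills):
    """Monte-Carlo estimate of E[log q(skill|behavior)] + log n_skills
    over (behavior, skill) pairs, assuming p(skill) = 1/n_skills."""
    disc = CountingDiscriminator()
    for behavior, skill in pairs:
        disc.update(behavior, skill)
    log_prior = -math.log(n_skills)  # log(1/n_skills)
    avg_log_q = sum(
        math.log(max(disc.prob(s, b), 1e-12)) for b, s in pairs
    ) / len(pairs)
    return avg_log_q - log_prior
```

When every skill yields a distinct behavior the bound reaches log(n_skills); when skills are indistinguishable it collapses to zero, which is what drives the generator toward diverse tasks.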
arXiv Detail & Related papers (2021-06-26T03:41:51Z)
- Efficient and robust multi-task learning in the brain with modular task
primitives [2.6166087473624318]
We show that a modular network endowed with task primitives allows for learning multiple tasks well while keeping parameter counts, and updates, low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
arXiv Detail & Related papers (2021-05-28T21:07:54Z)
- Latent Skill Planning for Exploration and Transfer [49.25525932162891]
In this paper, we investigate how these two approaches can be integrated into a single reinforcement learning agent.
We leverage the idea of partial amortization for fast adaptation at test time.
We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks.
arXiv Detail & Related papers (2020-11-27T18:40:03Z)
- Knowledge-driven Data Construction for Zero-shot Evaluation in
Commonsense Question Answering [80.60605604261416]
We propose a novel neuro-symbolic framework for zero-shot question answering across commonsense tasks.
We vary the set of language models, training regimes, knowledge sources, and data generation strategies, and measure their impact across tasks.
We show that, while an individual knowledge graph is better suited for specific tasks, a global knowledge graph brings consistent gains across different tasks.
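Knowledge-driven data construction of this kind typically turns knowledge-graph triples into synthetic QA training examples. A small sketch under stated assumptions: the relation names, question templates, and the in-domain distractor-sampling scheme below are all illustrative, not taken from the paper:

```python
# Illustrative sketch: converting (head, relation, tail) triples into
# multiple-choice QA examples for zero-shot commonsense training.
# Relations, templates, and sampling scheme are hypothetical.

import random

TEMPLATES = {
    "UsedFor": "What is a {head} used for?",
    "AtLocation": "Where would you find a {head}?",
}

def triples_to_qa(triples, n_distractors=2, seed=0):
    """Build QA examples; distractors are sampled from the tails of
    other triples so the answer choices stay in-domain."""
    rng = random.Random(seed)
    all_tails = [t for _, _, t in triples]
    examples = []
    for head, rel, tail in triples:
        if rel not in TEMPLATES:  # skip relations without a template
            continue
        pool = [t for t in all_tails if t != tail]
        distractors = rng.sample(pool, min(n_distractors, len(pool)))
        examples.append({
            "question": TEMPLATES[rel].format(head=head),
            "answer": tail,
            "choices": sorted([tail] + distractors),
        })
    return examples
```

Varying which knowledge graph supplies the triples (task-specific versus global) is, roughly, the axis the abstract's comparison is about.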
arXiv Detail & Related papers (2020-11-07T22:52:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.