How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on
Continual Learning and Functional Composition
- URL: http://arxiv.org/abs/2207.07730v2
- Date: Tue, 13 Jun 2023 17:56:37 GMT
- Title: How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on
Continual Learning and Functional Composition
- Authors: Jorge A. Mendez and Eric Eaton
- Abstract summary: A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters.
Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: A major goal of artificial intelligence (AI) is to create an agent capable of
acquiring a general understanding of the world. Such an agent would require the
ability to continually accumulate and build upon its knowledge as it encounters
new experiences. Lifelong or continual learning addresses this setting, whereby
an agent faces a continual stream of problems and must strive to capture the
knowledge necessary for solving each new task it encounters. If the agent is
capable of accumulating knowledge in some form of compositional representation,
it could then selectively reuse and combine relevant pieces of knowledge to
construct novel solutions. Despite the intuitive appeal of this simple idea,
the literatures on lifelong learning and compositional learning have proceeded
largely separately. In an effort to promote developments that bridge between
the two fields, this article surveys their respective research landscapes and
discusses existing and future connections between them.
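To make the compositional-reuse idea concrete, the following is a minimal sketch, not taken from the survey itself, of an agent that stores knowledge as named function components and chains a selected subset of them to solve a novel task. The library class, component names, and selection strategy are all illustrative assumptions.

```python
from typing import Callable, Dict, List

# A knowledge component is modeled as a function over an intermediate
# representation (a plain float here, for simplicity).
ComponentFn = Callable[[float], float]

class ComponentLibrary:
    """Stores named components accumulated over a lifetime of tasks."""

    def __init__(self) -> None:
        self.components: Dict[str, ComponentFn] = {}

    def add(self, name: str, fn: ComponentFn) -> None:
        self.components[name] = fn

    def compose(self, names: List[str]) -> ComponentFn:
        """Chain existing components into a solution for a new task."""
        def solution(x: float) -> float:
            for name in names:
                x = self.components[name](x)
            return x
        return solution

# Knowledge accumulated from earlier tasks (illustrative placeholders).
library = ComponentLibrary()
library.add("normalize", lambda x: x / 10.0)
library.add("square", lambda x: x * x)
library.add("shift", lambda x: x + 1.0)

# A novel task is solved by selectively reusing and recombining pieces
# of existing knowledge rather than learning from scratch.
new_task_solution = library.compose(["normalize", "square", "shift"])
print(new_task_solution(5.0))  # (5 / 10) ** 2 + 1 = 1.25
```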
Related papers
- Beyond Factuality: A Comprehensive Evaluation of Large Language Models
as Knowledge Generators [78.63553017938911]
Large language models (LLMs) outperform information retrieval techniques for downstream knowledge-intensive tasks.
However, community concerns abound regarding the factuality and potential implications of using this uncensored knowledge.
We introduce CONNER, designed to evaluate generated knowledge from six important perspectives.
arXiv Detail & Related papers (2023-10-11T08:22:37Z)
- Continual Learning as Computationally Constrained Reinforcement Learning [23.88768480916785]
An agent that efficiently accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities.
Designing such agents remains a long-standing challenge of artificial intelligence and is the subject of continual learning.
arXiv Detail & Related papers (2023-07-10T05:06:41Z)
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and adequate intra- and inter-task generalizability in the context of resource efficiency (a minimal illustration of this trade-off follows this entry).
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
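As one concrete reading of the stability-plasticity trade-off summarized above, the sketch below uses a generic L2 penalty that anchors new weights to those learned on earlier tasks; the loss form, names, and coefficient are assumptions for illustration, not a prescription from the survey.

```python
import numpy as np

def continual_loss(weights: np.ndarray,
                   task_loss: float,
                   old_weights: np.ndarray,
                   stability: float = 0.5) -> float:
    """Trade off plasticity (fitting the new task) against stability
    (staying close to weights learned on earlier tasks).

    stability = 0 recovers plain fine-tuning, which is prone to
    forgetting; large values preserve old knowledge at the cost of
    adapting to the new task.
    """
    retention_penalty = float(np.sum((weights - old_weights) ** 2))
    return task_loss + stability * retention_penalty

# Illustrative values only.
old_w = np.array([1.0, -0.5])
new_w = np.array([1.2, -0.1])
print(continual_loss(new_w, task_loss=0.3, old_weights=old_w))  # 0.4
```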
- Unveiling the Tapestry: the Interplay of Generalization and Forgetting in Continual Learning [18.61040106667249]
In AI, generalization refers to a model's ability to perform well on out-of-distribution data related to a given task, beyond the data it was trained on.
Continual learning methods often include mechanisms to mitigate catastrophic forgetting, ensuring that knowledge from earlier tasks is retained.
We introduce a simple and effective technique, Shape-Texture Consistency Regularization (STCR), designed for continual learning.
arXiv Detail & Related papers (2022-11-21T04:36:24Z)
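The forgetting-mitigation mechanisms mentioned in the entry above come in many forms; one widely used mechanism, not necessarily the one in that paper, is a replay buffer that retains examples from earlier tasks and mixes them into new-task batches. A minimal sketch using reservoir sampling, with all names assumed:

```python
import random
from typing import Any, List, Tuple

class ReplayBuffer:
    """Fixed-size memory of (input, label) examples from earlier tasks,
    sampled alongside new-task data to mitigate catastrophic forgetting."""

    def __init__(self, capacity: int = 1000) -> None:
        self.capacity = capacity
        self.memory: List[Tuple[Any, Any]] = []
        self.seen = 0

    def add(self, example: Tuple[Any, Any]) -> None:
        # Reservoir sampling keeps a uniform sample over all examples seen.
        self.seen += 1
        if len(self.memory) < self.capacity:
            self.memory.append(example)
        else:
            idx = random.randrange(self.seen)
            if idx < self.capacity:
                self.memory[idx] = example

    def sample(self, k: int) -> List[Tuple[Any, Any]]:
        return random.sample(self.memory, min(k, len(self.memory)))

# During training on a new task, each batch would mix fresh and replayed
# examples: batch = fresh_examples + buffer.sample(len(fresh_examples))
```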
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Integrating Diverse Knowledge Sources for Online One-shot Learning of Novel Tasks [6.021787236982658]
We investigate the challenges and impact of exploiting diverse knowledge sources to learn online, in one-shot, new tasks for a simulated office mobile robot.
The resulting agent, developed in the Soar cognitive architecture, uses several sources of domain and task knowledge.
Results show that an agent's online integration of diverse knowledge sources improves one-shot task learning overall.
arXiv Detail & Related papers (2022-08-19T21:53:15Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect isolated areas of deep learning through their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstraction, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue that an agent should represent its knowledge at three levels of generalization: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Latent Skill Planning for Exploration and Transfer [49.25525932162891]
In this paper, we investigate how planning and skill learning can be integrated into a single reinforcement learning agent.
We leverage the idea of partial amortization for fast adaptation at test time.
We demonstrate the benefits of our design decisions across a suite of challenging locomotion tasks.
arXiv Detail & Related papers (2020-11-27T18:40:03Z)
- Lifelong Learning of Compositional Structures [26.524289609910653]
We present a general-purpose framework for lifelong learning of compositional structures.
Our framework separates the learning process into two broad stages: learning how to best combine existing components in order to assimilate a novel problem, and learning how to adapt the set of existing components to accommodate the new problem.
arXiv Detail & Related papers (2020-07-15T14:58:48Z)
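The two-stage process described in the final entry above can be read as the schematic loop below. The toy greedy search and all function names are assumptions made for illustration; the authors' framework is far more general.

```python
from typing import Callable, Dict, List

Component = Callable[[float], float]

def learn_combination(task_input: float, target: float,
                      components: Dict[str, Component]) -> List[str]:
    """Stage 1 (assimilation): hold the components fixed and search for
    the combination that best fits the new task. Here: brute-force search
    over chains of length two, purely for illustration."""
    best, best_err = [], float("inf")
    for a in components:
        for b in components:
            out = components[b](components[a](task_input))
            err = abs(out - target)
            if err < best_err:
                best, best_err = [a, b], err
    return best

def adapt_components(components: Dict[str, Component],
                     chosen: List[str]) -> Dict[str, Component]:
    """Stage 2 (adaptation): update the component set to accommodate the
    new task. Here we only report which components would be refined."""
    print(f"would refine components: {chosen}")
    return components

components: Dict[str, Component] = {
    "double": lambda x: 2 * x,
    "negate": lambda x: -x,
}
chosen = learn_combination(task_input=3.0, target=-6.0, components=components)
components = adapt_components(components, chosen)
```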
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.