Continual Learning as Computationally Constrained Reinforcement Learning
- URL: http://arxiv.org/abs/2307.04345v2
- Date: Sun, 20 Aug 2023 19:58:49 GMT
- Title: Continual Learning as Computationally Constrained Reinforcement Learning
- Authors: Saurabh Kumar, Henrik Marklund, Ashish Rao, Yifan Zhu, Hong Jun Jeon,
Yueyang Liu, and Benjamin Van Roy
- Abstract summary: An agent that efficiently accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities.
The design of such agents, which remains a long-standing challenge of artificial intelligence, is addressed by the subject of continual learning.
- Score: 23.88768480916785
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An agent that efficiently accumulates knowledge to develop increasingly
sophisticated skills over a long lifetime could advance the frontier of
artificial intelligence capabilities. The design of such agents, which remains
a long-standing challenge of artificial intelligence, is addressed by the
subject of continual learning. This monograph clarifies and formalizes concepts
of continual learning, introducing a framework and set of tools to stimulate
further research.
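As a rough illustration of what a computational constraint can mean for a continually learning agent, the sketch below pairs a never-ending interaction loop with a fixed per-step compute budget. This is a minimal sketch under stated assumptions: the names ConstrainedAgent, run, and env_step, and the use of a wall-clock budget, are illustrative choices and not the monograph's formalism.

```python
import time


class ConstrainedAgent:
    """Toy agent that must act and learn within a fixed per-step compute budget.

    Illustrative only: the monograph's notion of a computational constraint is
    more general than a wall-clock budget.
    """

    def __init__(self, compute_budget_s: float = 0.01):
        self.compute_budget_s = compute_budget_s  # seconds of compute per step
        self.knowledge = {}  # stand-in for whatever the agent retains

    def act(self, observation):
        # Act using only retained knowledge (a cheap lookup here).
        return self.knowledge.get(observation, 0)

    def update(self, observation, reward, deadline):
        # Learn only while compute remains; unfinished work is simply dropped,
        # one crude way a bounded agent trades learning quality for cost.
        if time.monotonic() < deadline:
            self.knowledge[observation] = reward  # placeholder incremental update


def run(agent, env_step, num_steps=1000):
    """Interaction loop: the environment never resets, so learning never stops."""
    observation = 0
    for _ in range(num_steps):
        start = time.monotonic()
        action = agent.act(observation)
        observation, reward = env_step(action)  # env_step is supplied by the caller
        agent.update(observation, reward, deadline=start + agent.compute_budget_s)
    return agent
```

The only departures from a standard episodic RL loop are that nothing ever resets and that the update is cut off once the budget is spent; everything else is a placeholder.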
Related papers
- A Comprehensive Survey of Continual Learning: Theory, Method and Application [64.23253420555989]
We present a comprehensive survey of continual learning, seeking to bridge the basic settings, theoretical foundations, representative methods, and practical applications.
We summarize the general objectives of continual learning as ensuring a proper stability-plasticity trade-off and an adequate intra/inter-task generalizability in the context of resource efficiency.
arXiv Detail & Related papers (2023-01-31T11:34:56Z)
- Learning and Retrieval from Prior Data for Skill-based Imitation Learning [47.59794569496233]
We develop a skill-based imitation learning framework that extracts temporally extended sensorimotor skills from prior data.
We identify several key design choices that significantly improve performance on novel tasks.
arXiv Detail & Related papers (2022-10-20T17:34:59Z)
- How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on Continual Learning and Functional Composition [26.524289609910653]
A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters.
Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately.
arXiv Detail & Related papers (2022-07-15T19:53:20Z)
- Learning Temporally Extended Skills in Continuous Domains as Symbolic Actions for Planning [2.642698101441705]
Problems which require both long-horizon planning and continuous control capabilities pose significant challenges to existing reinforcement learning agents.
We introduce a novel hierarchical reinforcement learning agent which links temporally extended skills for continuous control with a forward model in a symbolic abstraction of the environment's state for planning.
arXiv Detail & Related papers (2022-07-11T17:13:10Z)
- From Psychological Curiosity to Artificial Curiosity: Curiosity-Driven Learning in Artificial Intelligence Tasks [56.20123080771364]
Psychological curiosity plays a significant role in human intelligence to enhance learning through exploration and information acquisition.
In the Artificial Intelligence (AI) community, artificial curiosity provides a natural intrinsic motivation for efficient learning.
Curiosity-driven learning (CDL) has become increasingly popular, with agents self-motivated to learn novel knowledge.
arXiv Detail & Related papers (2022-01-20T17:07:03Z)
- Transferability in Deep Learning: A Survey [80.67296873915176]
The ability to acquire and reuse knowledge is known as transferability in deep learning.
We present this survey to connect different isolated areas in deep learning with their relation to transferability.
We implement a benchmark and an open-source library, enabling a fair evaluation of deep learning methods in terms of transferability.
arXiv Detail & Related papers (2022-01-15T15:03:17Z)
- Recent Advances of Continual Learning in Computer Vision: An Overview [16.451358332033532]
Continual learning resembles the human learning process in its ability to learn, fuse, and accumulate new knowledge arriving at different time steps.
We present a comprehensive review of the recent progress of continual learning in computer vision.
arXiv Detail & Related papers (2021-09-23T13:30:18Z)
- Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors [63.21036904487014]
Continually learning new knowledge over time is a desirable capability for intelligent systems that must recognize an ever-growing number of object classes.
We propose a simple yet effective fusion mechanism that includes all previously learned feature extractors in the intelligent model (see the sketch after this list).
Experiments on multiple classification tasks show that the proposed approach can effectively reduce the forgetting of old knowledge, achieving state-of-the-art continual learning performance.
arXiv Detail & Related papers (2021-04-28T07:49:24Z)
- KnowRU: Knowledge Reusing via Knowledge Distillation in Multi-agent Reinforcement Learning [16.167201058368303]
Deep reinforcement learning (RL) algorithms have achieved dramatic progress in the multi-agent setting.
Training increasingly complex multi-agent tasks, however, remains costly; to alleviate this, efficiently leveraging historical experience is essential.
We propose a method, named "KnowRU" for knowledge reusing.
arXiv Detail & Related papers (2021-03-27T12:38:01Z)
- Transfer Learning in Deep Reinforcement Learning: A Survey [64.36174156782333]
Reinforcement learning is a learning paradigm for solving sequential decision-making problems.
Recent years have witnessed remarkable progress in reinforcement learning, driven by the fast development of deep neural networks.
Transfer learning has arisen to tackle various challenges faced by reinforcement learning.
arXiv Detail & Related papers (2020-09-16T18:38:54Z)
- The Next Decade in AI: Four Steps Towards Robust Artificial Intelligence [0.0]
I propose a hybrid, knowledge-driven, reasoning-based approach to AI.
It could provide the substrate for a richer, more robust AI than is currently possible.
arXiv Detail & Related papers (2020-02-14T18:55:56Z)
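The "Preserving Earlier Knowledge in Continual Learning" entry above describes fusing all previously learned feature extractors into the current model. Below is a minimal sketch of one such fusion, assuming frozen earlier extractors whose features are concatenated before a shared head; the class name FusedContinualClassifier and the concatenation-based fusion are assumptions, not that paper's exact mechanism.

```python
import torch
import torch.nn as nn


class FusedContinualClassifier(nn.Module):
    """Fuses all previously trained (frozen) feature extractors with the current one.

    Illustrative only: the surveyed paper's exact fusion mechanism may differ.
    """

    def __init__(self, old_extractors, new_extractor, feat_dim, num_classes):
        super().__init__()
        # Freeze earlier extractors so old representations cannot be overwritten.
        self.old_extractors = nn.ModuleList(old_extractors)
        for p in self.old_extractors.parameters():
            p.requires_grad_(False)
        self.new_extractor = new_extractor
        fused_dim = feat_dim * (len(old_extractors) + 1)
        self.head = nn.Linear(fused_dim, num_classes)

    def forward(self, x):
        with torch.no_grad():
            old_feats = [f(x) for f in self.old_extractors]
        feats = old_feats + [self.new_extractor(x)]
        return self.head(torch.cat(feats, dim=-1))
```

Only the current extractor and the head receive gradients, so earlier representations are preserved by construction; the trade-off is that inference cost grows with the number of retained extractors.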
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.