Lifelong Learning of Compositional Structures
- URL: http://arxiv.org/abs/2007.07732v2
- Date: Wed, 17 Mar 2021 12:12:16 GMT
- Title: Lifelong Learning of Compositional Structures
- Authors: Jorge A. Mendez and Eric Eaton
- Abstract summary: We present a general-purpose framework for lifelong learning of compositional structures.
Our framework separates the learning process into two broad stages: learning how to best combine existing components in order to assimilate a novel problem, and learning how to adapt the set of existing components to accommodate the new problem.
- Score: 26.524289609910653
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: A hallmark of human intelligence is the ability to construct self-contained
chunks of knowledge and adequately reuse them in novel combinations for solving
different yet structurally related problems. Learning such compositional
structures has been a significant challenge for artificial systems, due to the
combinatorial nature of the underlying search problem. To date, research into
compositional learning has largely proceeded separately from work on lifelong
or continual learning. We integrate these two lines of work to present a
general-purpose framework for lifelong learning of compositional structures
that can be used for solving a stream of related tasks. Our framework separates
the learning process into two broad stages: learning how to best combine
existing components in order to assimilate a novel problem, and learning how to
adapt the set of existing components to accommodate the new problem. This
separation explicitly handles the trade-off between the stability required to
remember how to solve earlier tasks and the flexibility required to solve new
tasks, as we show empirically in an extensive evaluation.
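The two-stage separation described in the abstract can be illustrated with a minimal sketch. This is an illustrative assumption, not the paper's actual architecture: components are taken to be shared linear maps, each task's model is a weighted combination of them, and plain gradient descent is used. The class and method names (`CompositionalLearner`, `assimilate`, `accommodate`) are hypothetical.

```python
import numpy as np

# Minimal sketch of a two-stage lifelong compositional learner.
# Components are shared linear maps; each task's model is a weighted
# combination of them. The linear setting and all names here are
# illustrative assumptions, not the paper's actual method.

rng = np.random.default_rng(0)

class CompositionalLearner:
    def __init__(self, n_components, dim, lr=0.05):
        # Small random initial components, shared across all tasks.
        self.components = [0.1 * rng.standard_normal((dim, dim))
                           for _ in range(n_components)]
        self.lr = lr

    def predict(self, X, weights):
        # A task's model is a weighted sum of the shared components.
        W = sum(w * C for w, C in zip(weights, self.components))
        return X @ W

    def assimilate(self, X, Y, steps=300):
        # Stage 1: components stay frozen; only the task-specific
        # combination weights are fit (stability for earlier tasks).
        k = len(self.components)
        weights = np.full(k, 1.0 / k)
        for _ in range(steps):
            err = self.predict(X, weights) - Y
            grad = np.array([np.sum(err * (X @ C)) for C in self.components])
            weights -= self.lr * grad / len(X)
        return weights

    def accommodate(self, X, Y, weights, steps=100, lr=0.01):
        # Stage 2: the shared components themselves are adapted to the
        # new task (flexibility for new tasks).
        for _ in range(steps):
            err = self.predict(X, weights) - Y
            grads = [weights[i] * (X.T @ err) / len(X)
                     for i in range(len(self.components))]
            for i, g in enumerate(grads):
                self.components[i] -= lr * g
```

In this sketch, assimilation is a convex fit over a frozen basis, so it cannot disturb earlier tasks; accommodation then makes the (smaller) change to the shared basis itself, which mirrors the stability/flexibility trade-off the abstract describes.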
Related papers
- Reduce, Reuse, Recycle: Categories for Compositional Reinforcement Learning [19.821117942806474]
We view task composition through the prism of category theory.
The categorical properties of Markov decision processes untangle complex tasks into manageable sub-tasks.
Experimental results support the categorical theory of reinforcement learning.
arXiv Detail & Related papers (2024-08-23T21:23:22Z)
- Recall-Oriented Continual Learning with Generative Adversarial Meta-Model [5.710971447109951]
We propose a recall-oriented continual learning framework to address the stability-plasticity dilemma.
Inspired by the human brain's ability to separate the mechanisms responsible for stability and plasticity, our framework consists of a two-level architecture.
We show that our framework not only effectively learns new knowledge without any disruption but also achieves high stability of previous knowledge.
arXiv Detail & Related papers (2024-03-05T16:08:59Z)
- Skills-in-Context Prompting: Unlocking Compositionality in Large Language Models [68.18370230899102]
We investigate how to elicit compositional generalization capabilities in large language models (LLMs).
We find that demonstrating both foundational skills and compositional examples grounded in these skills within the same prompt context is crucial.
We show that fine-tuning LLMs with Skills-in-Context (SKiC)-style data can elicit zero-shot weak-to-strong generalization.
arXiv Detail & Related papers (2023-08-01T05:54:12Z)
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm makes the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Lifelong Machine Learning of Functionally Compositional Structures [7.99536002595393]
This dissertation presents a general-purpose framework for lifelong learning of functionally compositional structures.
The framework separates the learning into two stages: learning how to combine existing components to assimilate a novel problem, and learning how to adapt the existing components to accommodate the new problem.
Supervised learning evaluations found that 1) compositional models improve lifelong learning of diverse tasks, 2) the multi-stage process permits lifelong learning of compositional knowledge, and 3) the components learned by the framework represent self-contained and reusable functions.
arXiv Detail & Related papers (2022-07-25T15:24:25Z) - How to Reuse and Compose Knowledge for a Lifetime of Tasks: A Survey on
Continual Learning and Functional Composition [26.524289609910653]
A major goal of artificial intelligence (AI) is to create an agent capable of acquiring a general understanding of the world.
Lifelong or continual learning addresses this setting, whereby an agent faces a continual stream of problems and must strive to capture the knowledge necessary for solving each new task it encounters.
Despite the intuitive appeal of this simple idea, the literatures on lifelong learning and compositional learning have proceeded largely separately.
arXiv Detail & Related papers (2022-07-15T19:53:20Z) - Modular Lifelong Reinforcement Learning via Neural Composition [31.561979764372886]
Humans commonly solve complex problems by decomposing them into easier subproblems and then combining the subproblem solutions.
This type of compositional reasoning permits reuse of the subproblem solutions when tackling future tasks that share part of the underlying compositional structure.
In a continual or lifelong reinforcement learning (RL) setting, this ability to decompose knowledge into reusable components would enable agents to quickly learn new RL tasks.
arXiv Detail & Related papers (2022-07-01T13:48:29Z)
- HALMA: Humanlike Abstraction Learning Meets Affordance in Rapid Problem Solving [104.79156980475686]
Humans learn compositional and causal abstractions, i.e., knowledge, in response to the structure of naturalistic tasks.
We argue there shall be three levels of generalization in how an agent represents its knowledge: perceptual, conceptual, and algorithmic.
This benchmark is centered around a novel task domain, HALMA, for visual concept development and rapid problem-solving.
arXiv Detail & Related papers (2021-02-22T20:37:01Z)
- Importance Weighted Policy Learning and Adaptation [89.46467771037054]
We study a complementary approach that is conceptually simple, general, and modular, and builds on top of recent improvements in off-policy learning.
The framework is inspired by ideas from the probabilistic inference literature and combines robust off-policy learning with a behavior prior.
Our approach achieves competitive adaptation performance on hold-out tasks compared to meta reinforcement learning baselines and can scale to complex sparse-reward scenarios.
arXiv Detail & Related papers (2020-09-10T14:16:58Z)
- Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification; the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.