Lifelong Machine Learning of Functionally Compositional Structures
- URL: http://arxiv.org/abs/2207.12256v1
- Date: Mon, 25 Jul 2022 15:24:25 GMT
- Title: Lifelong Machine Learning of Functionally Compositional Structures
- Authors: Jorge A. Mendez
- Abstract summary: This dissertation presents a general-purpose framework for lifelong learning of functionally compositional structures.
The framework separates the learning into two stages: learning how to combine existing components to assimilate a novel problem, and learning how to adapt the existing components to accommodate the new problem.
Supervised learning evaluations found that 1) compositional models improve lifelong learning of diverse tasks, 2) the multi-stage process permits lifelong learning of compositional knowledge, and 3) the components learned by the framework represent self-contained and reusable functions.
- Score: 7.99536002595393
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: A hallmark of human intelligence is the ability to construct self-contained
chunks of knowledge and reuse them in novel combinations for solving different
problems. Learning such compositional structures has been a challenge for
artificial systems, due to the underlying combinatorial search. To date,
research into compositional learning has largely proceeded separately from work
on lifelong or continual learning. This dissertation integrated these two lines
of work to present a general-purpose framework for lifelong learning of
functionally compositional structures. The framework separates the learning
into two stages: learning how to combine existing components to assimilate a
novel problem, and learning how to adapt the existing components to accommodate
the new problem. This separation explicitly handles the trade-off between
stability and flexibility. This dissertation instantiated the framework into
various supervised and reinforcement learning (RL) algorithms. Supervised
learning evaluations found that 1) compositional models improve lifelong
learning of diverse tasks, 2) the multi-stage process permits lifelong learning
of compositional knowledge, and 3) the components learned by the framework
represent self-contained and reusable functions. Similar RL evaluations
demonstrated that 1) algorithms under the framework accelerate the discovery of
high-performing policies, and 2) these algorithms retain or improve performance
on previously learned tasks. The dissertation extended one lifelong
compositional RL algorithm to the nonstationary setting, where the task
distribution varies over time, and found that modularity permits individually
tracking changes to different elements in the environment. The final
contribution of this dissertation was a new benchmark for compositional RL,
which exposed that existing methods struggle to discover the compositional
properties of the environment.
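The two-stage separation the abstract describes (first combine frozen components to assimilate a new task, then adapt the components to accommodate it) can be sketched in miniature. The sketch below uses linear components and a soft weighted-sum combination; the function names `assimilate` and `accommodate`, the gradient-descent details, and all numerical choices are illustrative assumptions, not the dissertation's actual algorithms.

```python
import numpy as np

rng = np.random.default_rng(0)

def compose(components, structure, x):
    # Soft functional composition: weighted sum of component outputs.
    return sum(s * (W @ x) for s, W in zip(structure, components))

def loss(components, structure, X, Y):
    return float(np.mean([np.sum((compose(components, structure, x) - y) ** 2)
                          for x, y in zip(X, Y)]))

def assimilate(components, X, Y, steps=500, lr=0.01):
    # Stage 1: learn how to combine the FROZEN components for a new task.
    k = len(components)
    s = np.full(k, 1.0 / k)  # uniform initial combination weights
    for _ in range(steps):
        grad = np.zeros(k)
        for x, y in zip(X, Y):
            err = compose(components, s, x) - y
            for i, W in enumerate(components):
                grad[i] += err @ (W @ x)
        s -= lr * grad / len(X)
    return s

def accommodate(components, s, X, Y, steps=100, lr=0.01):
    # Stage 2: gently adapt the shared components, structure held fixed.
    for _ in range(steps):
        grads = [np.zeros_like(W) for W in components]
        for x, y in zip(X, Y):
            err = compose(components, s, x) - y
            for i in range(len(components)):
                grads[i] += s[i] * np.outer(err, x)
        for W, g in zip(components, grads):
            W -= lr * g / len(X)
    return components

# Toy task: the target function is a hidden combination of two components.
components = [rng.normal(size=(2, 3)) for _ in range(3)]
W_true = 0.7 * components[0] + 0.3 * components[1]
X = rng.normal(size=(20, 3))
Y = X @ W_true.T

before = loss(components, np.full(3, 1 / 3), X, Y)
s = assimilate(components, X, Y)               # stage 1: components frozen
mid = loss(components, s, X, Y)
components = accommodate(components, s, X, Y)  # stage 2: components adapt
after = loss(components, s, X, Y)
print(f"loss: {before:.3f} -> {mid:.3f} -> {after:.3f}")
```

Keeping the components frozen during stage 1 is what gives stability (old tasks still see the same components), while stage 2 provides the flexibility to fit what the existing components cannot express.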
Related papers
- Retrieval-Enhanced Machine Learning: Synthesis and Opportunities [60.34182805429511]
Retrieval enhancement can be extended to a broader spectrum of machine learning (ML) problems.
This work introduces a formal framework for this paradigm, Retrieval-Enhanced Machine Learning (REML), by synthesizing the literature across various ML domains under consistent notation, which is missing from the current literature.
The goal of this work is to equip researchers across various disciplines with a comprehensive, formally structured framework of retrieval-enhanced models, thereby fostering interdisciplinary future research.
arXiv Detail & Related papers (2024-07-17T20:01:21Z) - System Design for an Integrated Lifelong Reinforcement Learning Agent for Real-Time Strategy Games [34.3277278308442]
Continual/lifelong learning (LL) involves minimizing forgetting of old tasks while maximizing a model's capability to learn new tasks.
We introduce the Lifelong Reinforcement Learning Components Framework (L2RLCF), which standardizes lifelong RL (L2RL) systems and integrates different continual learning components.
We describe a case study that demonstrates how multiple independently-developed LL components can be integrated into a single realized system.
arXiv Detail & Related papers (2022-12-08T23:32:57Z) - Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths.
Our approach can operate in a task-agnostic way, i.e., it does not require the task-specific knowledge that many existing continual learning algorithms rely on.
arXiv Detail & Related papers (2022-11-14T19:53:15Z) - CompoSuite: A Compositional Reinforcement Learning Benchmark [20.89464587308586]
We present CompoSuite, an open-source benchmark for compositional multi-task reinforcement learning (RL).
Each CompoSuite task requires a particular robot arm to manipulate one individual object to achieve a task objective while avoiding an obstacle.
We benchmark existing single-task, multi-task, and compositional learning algorithms on various training settings, and assess their capability to compositionally generalize to unseen tasks.
arXiv Detail & Related papers (2022-07-08T22:01:52Z) - Modular Lifelong Reinforcement Learning via Neural Composition [31.561979764372886]
Humans commonly solve complex problems by decomposing them into easier subproblems and then combining the subproblem solutions.
This type of compositional reasoning permits reuse of the subproblem solutions when tackling future tasks that share part of the underlying compositional structure.
In a continual or lifelong reinforcement learning (RL) setting, this ability to decompose knowledge into reusable components would enable agents to quickly learn new RL tasks.
arXiv Detail & Related papers (2022-07-01T13:48:29Z) - Sample-Efficient Reinforcement Learning in the Presence of Exogenous
Information [77.19830787312743]
In real-world reinforcement learning applications, the learner's observation space is typically high-dimensional, containing both relevant and irrelevant information about the task at hand.
We introduce a new problem setting for reinforcement learning, the Exogenous Decision Process (ExoMDP), in which the state space admits an (unknown) factorization into a small controllable component and a large irrelevant component.
We provide a new algorithm, ExoRL, which learns a near-optimal policy with sample complexity that scales with the size of the endogenous component.
arXiv Detail & Related papers (2022-06-09T05:19:32Z) - Combining Modular Skills in Multitask Learning [149.8001096811708]
A modular design encourages neural models to disentangle and recombine different facets of knowledge to generalise more systematically to new tasks.
In this work, we assume each task is associated with a subset of latent discrete skills from a (potentially small) inventory.
We find that the modular design of a network significantly increases sample efficiency in reinforcement learning and few-shot generalisation in supervised learning.
arXiv Detail & Related papers (2022-02-28T16:07:19Z) - Spatio-Temporal Representation Factorization for Video-based Person Re-Identification [55.01276167336187]
We propose a Spatio-Temporal Representation Factorization (STRF) module for video-based person re-ID.
STRF is a flexible new computational unit that can be used in conjunction with most existing 3D convolutional neural network architectures for re-ID.
We empirically show that STRF improves performance of various existing baseline architectures while demonstrating new state-of-the-art results.
arXiv Detail & Related papers (2021-07-25T19:29:37Z) - Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider multi-task learning, the problem of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains help improve learning performance on each task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z) - Lifelong Learning of Compositional Structures [26.524289609910653]
We present a general-purpose framework for lifelong learning of compositional structures.
Our framework separates the learning process into two broad stages: learning how to best combine existing components in order to assimilate a novel problem, and learning how to adapt the set of existing components to accommodate the new problem.
arXiv Detail & Related papers (2020-07-15T14:58:48Z) - Adversarial Continual Learning [99.56738010842301]
We propose a hybrid continual learning framework that learns a disjoint representation for task-invariant and task-specific features.
Our model combines architecture growth to prevent forgetting of task-specific skills and an experience replay approach to preserve shared skills.
arXiv Detail & Related papers (2020-03-21T02:08:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.