Representation Ensembling for Synergistic Lifelong Learning with
Quasilinear Complexity
- URL: http://arxiv.org/abs/2004.12908v15
- Date: Sat, 18 Sep 2021 15:04:04 GMT
- Title: Representation Ensembling for Synergistic Lifelong Learning with
Quasilinear Complexity
- Authors: Joshua T. Vogelstein, Jayanta Dey, Hayden S. Helm, Will LeVine, Ronak
D. Mehta, Tyler M. Tomita, Haoyin Xu, Ali Geisa, Qingyang Wang, Gido M. van
de Ven, Chenyu Gao, Weiwei Yang, Bryan Tower, Jonathan Larson, Christopher M.
White, and Carey E. Priebe
- Abstract summary: In lifelong learning, data are used to improve performance not only on the current task, but also on previously encountered, and as yet unencountered tasks.
Our key insight is that we can synergistically ensemble representations -- that were learned independently on disparate tasks -- to enable both forward and backward transfer.
- Score: 17.858926093389737
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In lifelong learning, data are used to improve performance not only on the
current task, but also on previously encountered, and as yet unencountered
tasks. In contrast, classical machine learning, which we define as starting
from a blank slate, or tabula rasa, uses data only for the single task at hand.
While typical transfer learning algorithms can improve performance on future
tasks, their performance on prior tasks degrades upon learning new tasks
(called forgetting). Many recent approaches for continual or lifelong learning
have attempted to maintain performance on old tasks given new tasks. But
striving to avoid forgetting sets the goal unnecessarily low. The goal of
lifelong learning should be not only to improve performance on future tasks
(forward transfer) but also on past tasks (backward transfer) with any new
data. Our key insight is that we can synergistically ensemble representations
-- that were learned independently on disparate tasks -- to enable both forward
and backward transfer. This generalizes ensembling decisions (like in decision
forests) and complements ensembling dependently learned representations (like
in multitask learning). Moreover, we can ensemble representations in
quasilinear space and time. We demonstrate this insight with two algorithms:
representation ensembles of (1) trees and (2) networks. Both algorithms
demonstrate forward and backward transfer in a variety of simulated and
benchmark data scenarios, including tabular, image, spoken, and adversarial
tasks. This is in stark contrast to the reference algorithms we compared
against, most of which failed to transfer forward, backward, or both, even
though many of them require quadratic space or time complexity.
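To make the key insight concrete, below is a minimal Python sketch of the tree-based variant, assuming scikit-learn random forests as the per-task transformers. The class and method names are hypothetical (this is not the authors' reference implementation), and simple in-sample leaf posteriors stand in for whatever posterior estimator the paper actually uses: each task trains a forest whose leaves define a representation, each (forest, task) pair gets a voter holding per-leaf class posteriors estimated from that task's data, and predictions for a task average the posteriors of all its voters.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

class ForestRepresentationEnsemble:
    """Minimal sketch of cross-task representation ensembling with forests.
    Names are illustrative, not the authors' reference code. Each task trains
    one forest ("transformer"); each (forest, task) pair gets a "voter" holding
    per-leaf class posteriors estimated from that task's data; predictions for
    a task average the posteriors of all its voters."""

    def __init__(self, n_trees=10):
        self.n_trees = n_trees
        self.transformers = []   # one forest per task, in arrival order
        self.task_data = {}      # task_id -> (X, y), kept here to build voters
        self.voters = {}         # (forest_index, task_id) -> per-tree leaf posteriors

    def _fit_voter(self, forest, X, y, classes):
        # Push task data through the forest and tally class counts per leaf.
        leaves = forest.apply(X)                      # shape (n_samples, n_trees)
        per_tree = []
        for t in range(leaves.shape[1]):
            counts = {}
            for leaf, label in zip(leaves[:, t], y):
                c = counts.setdefault(leaf, np.zeros(len(classes)))
                c[np.searchsorted(classes, label)] += 1
            per_tree.append({leaf: c / c.sum() for leaf, c in counts.items()})
        return per_tree

    def add_task(self, task_id, X, y):
        X, y = np.asarray(X), np.asarray(y)
        self.task_data[task_id] = (X, y)
        forest = RandomForestClassifier(n_estimators=self.n_trees).fit(X, y)
        self.transformers.append(forest)
        # Forward transfer: the new task gets a voter under every forest so far.
        for idx, f in enumerate(self.transformers):
            self.voters[(idx, task_id)] = self._fit_voter(f, X, y, np.unique(y))
        # Backward transfer: every earlier task gets a voter under the new forest.
        new_idx = len(self.transformers) - 1
        for old_id, (Xo, yo) in self.task_data.items():
            if old_id != task_id:
                self.voters[(new_idx, old_id)] = self._fit_voter(forest, Xo, yo, np.unique(yo))

    def predict(self, task_id, X):
        X = np.asarray(X)
        classes = np.unique(self.task_data[task_id][1])
        posterior = np.zeros((len(X), len(classes)))
        for idx, forest in enumerate(self.transformers):
            leaves = forest.apply(X)
            for t, table in enumerate(self.voters[(idx, task_id)]):
                for i, leaf in enumerate(leaves[:, t]):
                    # Unseen leaves fall back to a uniform posterior.
                    posterior[i] += table.get(leaf, np.full(len(classes), 1.0 / len(classes)))
        return classes[posterior.argmax(axis=1)]
```

Each new task adds one forest plus voters for every stored task, so earlier tasks keep gaining voters from later representations (backward transfer) without retraining their original forests, and new tasks immediately reuse every prior forest (forward transfer). The paper's second algorithm replaces the forests with networks whose learned representations play the same role.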
Related papers
- Continual Learning of Numerous Tasks from Long-tail Distributions [17.706669222987273]
Continual learning focuses on developing models that learn and adapt to new tasks while retaining previously acquired knowledge.
Existing continual learning algorithms usually involve a small number of tasks with uniform sizes and may not accurately represent real-world learning scenarios.
We propose a method that reuses the states in Adam by maintaining a weighted average of the second moments from previous tasks.
We demonstrate that our method, compatible with most existing continual learning algorithms, effectively reduces forgetting with only a small amount of additional computational or memory costs.
arXiv Detail & Related papers (2024-04-03T13:56:33Z)
- Look-Ahead Selective Plasticity for Continual Learning of Visual Tasks [9.82510084910641]
We propose a new mechanism that takes place at task boundaries, i.e., when one task finishes and another starts.
We evaluate the proposed methods on benchmark computer vision datasets including CIFAR10 and TinyImagenet.
arXiv Detail & Related papers (2023-11-02T22:00:23Z)
- Multitask Learning with No Regret: from Improved Confidence Bounds to Active Learning [79.07658065326592]
Quantifying uncertainty in the estimated tasks is of pivotal importance for many downstream applications, such as online or active learning.
We provide novel multitask confidence intervals in the challenging setting when neither the similarity between tasks nor the tasks' features are available to the learner.
We propose a novel online learning algorithm that achieves such improved regret without knowing this parameter in advance.
arXiv Detail & Related papers (2023-08-03T13:08:09Z)
- Self-paced Weight Consolidation for Continual Learning [39.27729549041708]
Continual learning algorithms are widely used to prevent catastrophic forgetting in sequential task learning settings.
We propose a self-paced Weight Consolidation (spWC) framework to attain continual learning.
arXiv Detail & Related papers (2023-07-20T13:07:41Z)
- Multi-Task Self-Supervised Time-Series Representation Learning [3.31490164885582]
Time-series representation learning can extract representations from data with temporal dynamics and sparse labels.
We propose a new time-series representation learning method by combining the advantages of self-supervised tasks.
We evaluate the proposed framework on three downstream tasks: time-series classification, forecasting, and anomaly detection.
arXiv Detail & Related papers (2023-03-02T07:44:06Z)
- Task Compass: Scaling Multi-task Pre-training with Task Prefix [122.49242976184617]
Existing studies show that multi-task learning with large-scale supervised tasks suffers from negative effects across tasks.
We propose a task prefix guided multi-task pre-training framework to explore the relationships among tasks.
Our model can not only serve as the strong foundation backbone for a wide range of tasks but also be feasible as a probing tool for analyzing task relationships.
arXiv Detail & Related papers (2022-10-12T15:02:04Z)
- Learning Invariant Representation for Continual Learning [5.979373021392084]
A key challenge in continual learning is catastrophically forgetting previously learned tasks when the agent faces a new one.
We propose a new pseudo-rehearsal-based method, named learning Invariant Representation for Continual Learning (IRCL).
Disentangling the shared invariant representation helps to learn continually a sequence of tasks, while being more robust to forgetting and having better knowledge transfer.
arXiv Detail & Related papers (2021-01-15T15:12:51Z)
- Parrot: Data-Driven Behavioral Priors for Reinforcement Learning [79.32403825036792]
We propose a method for pre-training behavioral priors that can capture complex input-output relationships observed in successful trials.
We show how this learned prior can be used for rapidly learning new tasks without impeding the RL agent's ability to try out novel behaviors.
arXiv Detail & Related papers (2020-11-19T18:47:40Z)
- COG: Connecting New Skills to Past Experience with Offline Reinforcement Learning [78.13740204156858]
We show that we can reuse prior data to extend new skills simply through dynamic programming.
We demonstrate the effectiveness of our approach by chaining together several behaviors seen in prior datasets for solving a new task.
We train our policies in an end-to-end fashion, mapping high-dimensional image observations to low-level robot control commands.
arXiv Detail & Related papers (2020-10-27T17:57:29Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization in which data collected across different domains help improving the learning performance at each other task.
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Continual Learning in Low-rank Orthogonal Subspaces [86.36417214618575]
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the learning experience is finished.
The prior art in CL uses episodic memory, parameter regularization or network structures to reduce interference among tasks, but in the end, all the approaches learn different tasks in a joint vector space.
We propose to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference.
arXiv Detail & Related papers (2020-10-22T12:07:43Z)
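For contrast with the representation-ensembling approach of the main paper, the last entry above keeps all tasks in a single joint feature space but confines each task to its own low-rank subspace, with the subspaces kept mutually orthogonal to limit interference. The PyTorch module below only illustrates that idea under assumed sizes (784-dimensional inputs, a 128-dimensional shared feature space, rank-16 task subspaces); it is not the paper's architecture or training procedure.

```python
import torch
import torch.nn as nn

class OrthogonalSubspaceNet(nn.Module):
    """Illustrative sketch only: shared features are projected into disjoint,
    mutually orthogonal low-rank subspaces, one per task, so each task's
    classifier reads feature directions the other classifiers never see.
    Module name, sizes, and the flattened-image input are assumptions."""

    def __init__(self, in_dim=784, feat_dim=128, rank=16, n_tasks=5, n_classes=10):
        super().__init__()
        assert rank * n_tasks <= feat_dim, "task subspaces must fit inside the feature space"
        # Columns of one random orthogonal matrix, split into disjoint per-task blocks;
        # disjoint blocks of orthonormal columns are automatically orthogonal to each other.
        q, _ = torch.linalg.qr(torch.randn(feat_dim, feat_dim))
        self.register_buffer("bases", q[:, : rank * n_tasks].reshape(feat_dim, n_tasks, rank))
        self.backbone = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.heads = nn.ModuleList(nn.Linear(rank, n_classes) for _ in range(n_tasks))

    def forward(self, x, task_id):
        h = self.backbone(x)                  # shared representation
        z = h @ self.bases[:, task_id, :]     # project into this task's subspace
        return self.heads[task_id](z)         # task-specific classifier
```

Because each task's basis is a disjoint block of one orthonormal matrix, the task classifiers read non-overlapping directions of the shared representation, which is the interference-reduction idea the summary describes; the paper itself goes beyond this static projection in how it trains the shared parameters.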