SPACE: Structured Compression and Sharing of Representational Space for
Continual Learning
- URL: http://arxiv.org/abs/2001.08650v4
- Date: Wed, 3 Feb 2021 06:23:33 GMT
- Title: SPACE: Structured Compression and Sharing of Representational Space for
Continual Learning
- Authors: Gobinda Saha, Isha Garg, Aayush Ankit and Kaushik Roy
- Abstract summary: Incrementally learning tasks causes artificial neural networks to overwrite relevant information learned about older tasks, resulting in 'Catastrophic Forgetting'.
We propose SPACE, an algorithm that enables a network to learn continually and efficiently by partitioning the learnt space into a Core space and a Residual space.
We evaluate our algorithm on P-MNIST, CIFAR and a sequence of 8 different datasets, and achieve accuracy comparable to state-of-the-art methods.
- Score: 10.06017287116299
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans learn adaptively and efficiently throughout their lives. However,
incrementally learning tasks causes artificial neural networks to overwrite
relevant information learned about older tasks, resulting in 'Catastrophic
Forgetting'. Efforts to overcome this phenomenon often use resources poorly,
for instance by growing the network architecture or saving parametric
importance scores, or they violate data privacy between tasks. To tackle
this, we propose SPACE, an algorithm that enables a network to learn
continually and efficiently by partitioning the learnt space into a Core space,
that serves as the condensed knowledge base over previously learned tasks, and
a Residual space, which is akin to a scratch space for learning the current
task. After learning each task, the Residual is analyzed for redundancy, both
within itself and with the learnt Core space. A minimal number of extra
dimensions required to explain the current task are added to the Core space and
the remaining Residual is freed up for learning the next task. We evaluate our
algorithm on P-MNIST, CIFAR and a sequence of 8 different datasets, and achieve
comparable accuracy to the state-of-the-art methods while overcoming
catastrophic forgetting. Additionally, our algorithm is well suited for
practical use. The partitioning algorithm analyzes all layers in one shot,
ensuring scalability to deeper networks. Moreover, the analysis of dimensions
translates to filter-level sparsity, and the structured nature of the resulting
architecture gives us up to 5x improvement in energy efficiency during task
inference over the current state-of-the-art.
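As a rough illustration of the partitioning step, the sketch below grows a Core basis with the fewest principal directions needed to explain the current task's layer activations. The PCA-style variance criterion, the threshold, and all names are our assumptions, not the paper's exact procedure:

    import torch

    def update_core_space(core_basis, task_activations, var_threshold=0.99):
        """Grow the Core basis by the fewest directions needed to explain
        the current task (illustrative PCA-style criterion; names are ours).

        core_basis:       (d, k) orthonormal columns from past tasks, or None
        task_activations: (n, d) layer activations from the current task
        """
        A = task_activations - task_activations.mean(dim=0)
        if core_basis is not None:
            # Remove the part of the activations the Core already explains.
            A = A - (A @ core_basis) @ core_basis.T
        _, s, vh = torch.linalg.svd(A, full_matrices=False)
        total = (s ** 2).sum()
        if total < 1e-12:          # task already explained by the Core
            return core_basis
        energy = torch.cumsum(s ** 2, dim=0) / total
        k_new = min(int((energy < var_threshold).sum()) + 1, s.numel())
        new_dirs = vh[:k_new].T                          # minimal extra dims
        if core_basis is None:
            return new_dirs
        return torch.cat([core_basis, new_dirs], dim=1)  # enlarged Core

Because each task appends only the directions its Residual could not already explain, the Core stays compact and the freed Residual dimensions remain available for the next task.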
Related papers
- Continual Learning of Numerous Tasks from Long-tail Distributions [17.706669222987273]
Continual learning focuses on developing models that learn and adapt to new tasks while retaining previously acquired knowledge.
Existing continual learning algorithms usually involve a small number of tasks with uniform sizes and may not accurately represent real-world learning scenarios.
We propose a method that reuses the states in Adam by maintaining a weighted average of the second moments from previous tasks.
We demonstrate that our method, compatible with most existing continual learning algorithms, effectively reduces forgetting with only a small amount of additional computational or memory costs.
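A minimal sketch of that state reuse, assuming PyTorch's Adam and a simple decay-weighted blend; the function, the weighting scheme, and alpha are illustrative assumptions:

    import torch

    def carry_over_second_moments(optimizer, saved_moments, alpha=0.5):
        """Blend Adam's per-parameter second moments ('exp_avg_sq') with an
        average carried over from previous tasks (illustrative sketch).

        optimizer:     torch.optim.Adam that has taken at least one step
        saved_moments: dict mapping parameter -> averaged second moment
        alpha:         weight given to the history from previous tasks
        """
        for group in optimizer.param_groups:
            for p in group["params"]:
                state = optimizer.state.get(p, {})
                if "exp_avg_sq" not in state:
                    continue
                v = state["exp_avg_sq"]
                if p in saved_moments:
                    # Weighted average of old-task and current moments,
                    # written back into the live optimizer state.
                    v.mul_(1 - alpha).add_(saved_moments[p], alpha=alpha)
                saved_moments[p] = v.clone()
        return saved_moments

Calling this at each task boundary maintains a running, weighted second-moment estimate without storing separate per-task copies.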
arXiv Detail & Related papers (2024-04-03T13:56:33Z)
- Learning Good Features to Transfer Across Tasks and Domains [16.05821129333396]
We first show that knowledge encoded in deep features can be shared across tasks by learning a mapping between task-specific deep features in a given domain.
Then, we show that this mapping function, implemented by a neural network, is able to generalize to novel unseen domains.
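A hedged sketch of such a mapping function, assuming frozen task-specific encoders and a small regression head trained with an MSE objective (all names and the objective are our assumptions):

    import torch
    import torch.nn as nn

    class FeatureMapper(nn.Module):
        # Translates features of a source task's encoder into features of a
        # target task's encoder on a known domain; once trained, it can be
        # applied unchanged on a new domain (illustrative sketch).
        def __init__(self, dim_in, dim_out, hidden=512):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(dim_in, hidden), nn.ReLU(),
                nn.Linear(hidden, dim_out),
            )

        def forward(self, f_src):
            return self.net(f_src)

    def mapping_loss(mapper, f_task_a, f_task_b):
        # Regress task-B features from task-A features on the known domain.
        return nn.functional.mse_loss(mapper(f_task_a), f_task_b)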
arXiv Detail & Related papers (2023-01-26T18:49:39Z)
- Fast Inference and Transfer of Compositional Task Structures for Few-shot Task Generalization [101.72755769194677]
We formulate few-shot task generalization as a few-shot reinforcement learning problem where a task is characterized by a subtask graph.
Our multi-task subtask graph inferencer (MTSGI) first infers the common high-level task structure in terms of the subtask graph from the training tasks.
Our experiment results on 2D grid-world and complex web navigation domains show that the proposed method can learn and leverage the common underlying structure of the tasks for faster adaptation to the unseen tasks.
arXiv Detail & Related papers (2022-05-25T10:44:25Z)
- Counting with Adaptive Auxiliary Learning [23.715818463425503]
This paper proposes an adaptive auxiliary-task-learning-based approach to object counting.
We develop an attention-enhanced, adaptively shared backbone network that enables learning of both task-shared and task-tailored features.
Our method outperforms state-of-the-art auxiliary-task-learning-based counting methods.
arXiv Detail & Related papers (2022-03-08T13:10:17Z)
- Center Loss Regularization for Continual Learning [0.0]
In general, neural networks lack the ability to learn different tasks sequentially.
Our approach remembers old tasks by projecting the representations of new tasks close to those of old tasks.
We demonstrate that our approach is scalable, effective, and gives competitive performance compared to state-of-the-art continual learning methods.
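A minimal sketch of a center-loss-style regularizer in this spirit, assuming stored per-class representation centers from earlier tasks; the names and the squared-error form are our assumptions:

    import torch
    import torch.nn.functional as F

    def center_regularizer(features, labels, old_centers):
        # Pull current-task representations toward centers remembered from
        # old tasks (illustrative sketch, not the paper's exact loss).
        #   features:    (n, d) embeddings for the current batch
        #   labels:      (n,) integer ids indexing rows of old_centers
        #   old_centers: (c, d) representation centers from earlier tasks
        return F.mse_loss(features, old_centers[labels])

    # Typical use: loss = task_loss + lam * center_regularizer(z, y, centers)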
arXiv Detail & Related papers (2021-10-21T17:46:44Z)
- Learning to Relate Depth and Semantics for Unsupervised Domain Adaptation [87.1188556802942]
We present an approach for encoding visual task relationships to improve model performance in an Unsupervised Domain Adaptation (UDA) setting.
We propose a novel Cross-Task Relation Layer (CTRL), which encodes task dependencies between the semantic and depth predictions.
Furthermore, we propose an Iterative Self-Learning (ISL) training scheme, which exploits semantic pseudo-labels to provide extra supervision on the target domain.
arXiv Detail & Related papers (2021-05-17T13:42:09Z)
- Gradient Projection Memory for Continual Learning [5.43185002439223]
The ability to learn continually without forgetting the past tasks is a desired attribute for artificial learning systems.
We propose a novel approach where a neural network learns new tasks by taking gradient steps in the orthogonal direction to the gradient subspaces deemed important for the past tasks.
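The core of such an update can be sketched in a few lines, assuming an orthonormal basis M of gradient directions deemed important for past tasks (a simplification of the full method):

    import torch

    def project_out_past_subspace(grad, basis):
        # g_perp = g - M (M^T g): keep only the gradient component
        # orthogonal to the subspace important for past tasks.
        #   grad:  (d,) flattened gradient for one layer
        #   basis: (d, k) orthonormal directions deemed important so far
        return grad - basis @ (basis.T @ grad)

Stepping along g_perp avoids, to first order, changing the components deemed important for past tasks, which is the intuition behind reduced forgetting.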
arXiv Detail & Related papers (2021-03-17T16:31:29Z)
- Continuous Ant-Based Neural Topology Search [62.200941836913586]
This work introduces a novel, nature-inspired neural architecture search (NAS) algorithm based on ant colony optimization.
The Continuous Ant-based Neural Topology Search (CANTS) is strongly inspired by how ants move in the real world.
arXiv Detail & Related papers (2020-11-21T17:49:44Z)
- Multi-task Supervised Learning via Cross-learning [102.64082402388192]
We consider a problem known as multi-task learning, consisting of fitting a set of regression functions intended for solving different tasks.
In our novel formulation, we couple the parameters of these functions, so that they learn in their task specific domains while staying close to each other.
This facilitates cross-fertilization, in which data collected across different domains helps improve the learning performance on each task.
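One way to realize such coupling is a proximity penalty that pulls each task's parameters toward their cross-task mean; this sketch is our reading, not necessarily the paper's exact formulation:

    import torch

    def coupling_penalty(task_params):
        # Keep each task's parameter vector close to the shared mean while
        # each still fits its own domain (illustrative sketch).
        #   task_params: list of (d,) parameter vectors, one per task
        W = torch.stack(task_params)        # (T, d)
        mean = W.mean(dim=0, keepdim=True)  # shared 'center' of the tasks
        return ((W - mean) ** 2).sum() / len(task_params)

    # total loss (sketch): sum_t L_t(w_t) + lam * coupling_penalty([w_1, ..., w_T])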
arXiv Detail & Related papers (2020-10-24T21:35:57Z)
- Continual Learning in Low-rank Orthogonal Subspaces [86.36417214618575]
In continual learning (CL), a learner is faced with a sequence of tasks, arriving one after the other, and the goal is to remember all the tasks once the learning experience is finished.
The prior art in CL uses episodic memory, parameter regularization or network structures to reduce interference among tasks, but in the end, all the approaches learn different tasks in a joint vector space.
We propose to learn tasks in different (low-rank) vector subspaces that are kept orthogonal to each other in order to minimize interference.
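A hedged sketch of keeping per-task low-rank bases mutually orthogonal, using a Frobenius-norm penalty on pairwise overlaps (the penalty form is our assumption):

    import torch

    def orthogonality_penalty(bases):
        # Sum of ||B_i^T B_j||_F^2 over task pairs; zero exactly when every
        # pair of per-task subspaces is orthogonal (illustrative sketch).
        #   bases: list of (d, r) matrices, one low-rank basis per task
        loss = torch.zeros(())
        for i in range(len(bases)):
            for j in range(i + 1, len(bases)):
                loss = loss + (bases[i].T @ bases[j]).pow(2).sum()
        return loss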
arXiv Detail & Related papers (2020-10-22T12:07:43Z)
- CATCH: Context-based Meta Reinforcement Learning for Transferrable Architecture Search [102.67142711824748]
CATCH is a novel Context-bAsed meTa reinforcement learning algorithm for transferrable arChitecture searcH.
The combination of meta-learning and RL allows CATCH to efficiently adapt to new tasks while being agnostic to search spaces.
It also handles cross-domain architecture search, identifying competitive networks on ImageNet, COCO, and Cityscapes.
arXiv Detail & Related papers (2020-07-18T09:35:53Z)