A Study on Efficiency in Continual Learning Inspired by Human Learning
- URL: http://arxiv.org/abs/2010.15187v1
- Date: Wed, 28 Oct 2020 19:11:01 GMT
- Title: A Study on Efficiency in Continual Learning Inspired by Human Learning
- Authors: Philip J. Ball, Yingzhen Li, Angus Lamb, Cheng Zhang
- Abstract summary: We study the efficiency of continual learning systems, taking inspiration from human learning.
In particular, inspired by the mechanisms of sleep, we evaluate popular pruning-based continual learning algorithms.
- Score: 24.78879710005838
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans are efficient continual learning systems; we continually learn new
skills from birth with finite cells and resources. Our learning is highly
optimized both in terms of capacity and time while not suffering from
catastrophic forgetting. In this work we study the efficiency of continual
learning systems, taking inspiration from human learning. In particular,
inspired by the mechanisms of sleep, we evaluate popular pruning-based
continual learning algorithms, using PackNet as a case study. First, we
identify that weight freezing, which is used in continual learning without
biological justification, can result in over $2\times$ as many weights being
used for a given level of performance. Second, we note the similarity of human
daytime and nighttime behaviors to the training and pruning phases of PackNet,
respectively. We study a setting where the pruning phase is given a time
budget, and identify connections between iterative pruning and multiple sleep
cycles in humans. We show that there exists an optimal choice of pruning
iterations vs. epochs for different tasks.
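To make the mechanisms discussed above concrete, the following is a minimal sketch (not the authors' released code) of a PackNet-style task routine: train on a task, iteratively prune a fraction of the still-free weights by magnitude, retrain the survivors, and freeze them before the next task arrives. The function name and hyperparameters (prune_frac, prune_iters, epochs_per_iter), as well as the tiny model and random data in the usage example, are illustrative assumptions; biases are left unmasked for simplicity, and plain SGD is used so that zeroed gradients imply the frozen weights never move.

```python
import torch
import torch.nn as nn

def packnet_task(model, loader, prev_masks, prune_frac=0.5,
                 prune_iters=2, epochs_per_iter=1, lr=1e-2):
    """Train one task, then alternate pruning and retraining within a budget."""
    params = {n: p for n, p in model.named_parameters() if p.dim() > 1}
    # Weights claimed by earlier tasks stay frozen (their gradients are masked).
    frozen = {n: torch.zeros_like(p, dtype=torch.bool) for n, p in params.items()}
    for mask in prev_masks:
        for n in frozen:
            frozen[n] |= mask[n]

    opt = torch.optim.SGD(model.parameters(), lr=lr)  # no momentum: masked grads => no update
    loss_fn = nn.CrossEntropyLoss()

    def train(trainable, num_epochs):
        for _ in range(num_epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                for n, p in params.items():
                    if p.grad is not None:
                        p.grad[~trainable[n]] = 0.0   # weight freezing via gradient masking
                opt.step()

    keep = {n: ~m for n, m in frozen.items()}         # weights free for the current task
    train(keep, epochs_per_iter)                      # initial training ("daytime")

    for _ in range(prune_iters):                      # iterative pruning ("sleep cycles")
        for n, p in params.items():
            free = keep[n]
            if free.sum() == 0:
                continue
            thresh = torch.quantile(p.detach().abs()[free], prune_frac)
            pruned = free & (p.detach().abs() < thresh)
            keep[n] = free & ~pruned
            with torch.no_grad():
                p[pruned] = 0.0                       # release low-magnitude weights for future tasks
        train(keep, epochs_per_iter)                  # retrain survivors within the time budget

    prev_masks.append(keep)                           # this task's surviving weights are now frozen
    return keep

# Illustrative usage on two dummy tasks (random data, tiny MLP).
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
masks = []
for _ in range(2):
    data = [(torch.randn(8, 16), torch.randint(0, 4, (8,))) for _ in range(10)]
    packnet_task(model, data, masks)
```

The prune_iters / epochs_per_iter split mirrors the trade-off studied in the paper: under a fixed time budget for the pruning phase, more pruning iterations leave fewer epochs for retraining, and vice versa.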
Related papers
- SPIRE: Synergistic Planning, Imitation, and Reinforcement Learning for Long-Horizon Manipulation [58.14969377419633]
We propose SPIRE, a system that first decomposes tasks into smaller learning subproblems and then combines imitation and reinforcement learning to maximize their strengths.
We find that SPIRE outperforms prior approaches that integrate imitation learning, reinforcement learning, and planning by 35% to 50% in average task performance.
arXiv Detail & Related papers (2024-10-23T17:42:07Z) - Continual Learning of Numerous Tasks from Long-tail Distributions [17.706669222987273]
Continual learning focuses on developing models that learn and adapt to new tasks while retaining previously acquired knowledge.
Existing continual learning algorithms usually involve a small number of tasks with uniform sizes and may not accurately represent real-world learning scenarios.
We propose a method that reuses the states in Adam by maintaining a weighted average of the second moments from previous tasks.
We demonstrate that our method, compatible with most existing continual learning algorithms, effectively reduces forgetting with only a small amount of additional computational or memory costs.
arXiv Detail & Related papers (2024-04-03T13:56:33Z) - How Efficient Are Today's Continual Learning Algorithms? [31.120016345185217]
Supervised continual learning involves updating a deep neural network (DNN) from an ever-growing stream of labeled data.
One of the major motivations behind continual learning is being able to efficiently update a network with new information, rather than retraining from scratch on the training dataset as it grows over time.
Here, we study recent methods for incremental class learning and illustrate that many are highly inefficient in terms of compute, memory, and storage.
arXiv Detail & Related papers (2023-03-29T18:52:10Z) - Classifying Human Activities using Machine Learning and Deep Learning
Techniques [0.0]
Human Activity Recognition (HAR) describes a machine's ability to recognize human actions.
The challenge in HAR is to overcome the difficulty of separating human activities based on the given data.
Deep learning techniques such as Long Short-Term Memory (LSTM), Bi-Directional LSTM, Recurrent Neural Network (RNN), and Gated Recurrent Unit (GRU) classifiers are trained.
Experimental results showed that the Linear Support Vector classifier in machine learning and the Gated Recurrent Unit in deep learning provided better accuracy for human activity recognition.
arXiv Detail & Related papers (2022-05-19T05:20:04Z) - Memory Bounds for Continual Learning [13.734474418577188]
Continual learning, or lifelong learning, is a formidable current challenge to machine learning.
We make novel uses of communication complexity to establish that any continual learner, even an improper one, needs memory that grows linearly with the number of tasks $k$.
arXiv Detail & Related papers (2022-04-22T17:19:50Z) - Online Continual Learning with Natural Distribution Shifts: An Empirical
Study with Visual Data [101.6195176510611]
"Online" continual learning enables evaluating both information retention and online learning efficacy.
In online continual learning, each incoming small batch of data is first used for testing and then added to the training set, making the problem truly online.
We introduce a new benchmark for online continual visual learning that exhibits large scale and natural distribution shifts.
arXiv Detail & Related papers (2021-08-20T06:17:20Z) - Large-scale randomized experiment reveals machine learning helps people
learn and remember more effectively [14.305363222114543]
We focus on unveiling the potential of machine learning to improve how people learn and remember factual material.
We perform a large-scale randomized controlled trial with thousands of learners from a popular learning app in the area of mobility.
arXiv Detail & Related papers (2020-10-09T08:30:15Z) - Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm is capable of autonomously discovering, learning, and adapting interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z) - Bilevel Continual Learning [76.50127663309604]
We present a novel continual learning framework named "Bilevel Continual Learning" (BCL).
Our experiments on continual learning benchmarks demonstrate the efficacy of the proposed BCL compared to many state-of-the-art methods.
arXiv Detail & Related papers (2020-07-30T16:00:23Z) - Drinking from a Firehose: Continual Learning with Web-scale Natural
Language [109.80198763438248]
We study a natural setting for continual learning on a massive scale.
We collect massive datasets of Twitter posts.
We present a rigorous evaluation of continual learning algorithms on an unprecedented scale.
arXiv Detail & Related papers (2020-07-18T05:40:02Z) - Self-supervised Knowledge Distillation for Few-shot Learning [123.10294801296926]
Few-shot learning is a promising learning paradigm due to its ability to learn out of order distributions quickly with only a few samples.
We propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks.
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T11:27:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.