A Deep Learning Framework for Lifelong Machine Learning
- URL: http://arxiv.org/abs/2105.00157v1
- Date: Sat, 1 May 2021 03:43:25 GMT
- Authors: Charles X. Ling, Tanner Bohn
- Abstract summary: We propose a simple yet powerful unified deep learning framework.
Our framework supports almost all of these properties and approaches through one central mechanism.
We hope that this unified lifelong learning framework inspires new work towards large-scale experiments and understanding human learning in general.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Humans can learn a variety of concepts and skills incrementally over the
course of their lives while exhibiting many desirable properties, such as
continual learning without forgetting, forward transfer and backward transfer
of knowledge, and learning a new concept or task with only a few examples.
Several lines of machine learning research, such as lifelong machine learning,
few-shot learning, and transfer learning, attempt to capture these properties.
However, most previous approaches can only demonstrate subsets of these
properties, often by different complex mechanisms. In this work, we propose a
simple yet powerful unified deep learning framework that supports almost all of
these properties and approaches through one central mechanism. Experiments on
toy examples support our claims. We also draw connections between many
peculiarities of human learning (such as memory loss and "rain man") and our
framework.
As academics, we often lack the resources required to build and train deep
neural networks with billions of parameters on hundreds of TPUs. Thus, while
our framework is still conceptual and our experimental results are surely not
SOTA, we hope that this unified lifelong learning framework inspires new work
towards large-scale experiments and understanding human learning in general.
This paper is summarized in two short YouTube videos:
https://youtu.be/gCuUyGETbTU (part 1) and https://youtu.be/XsaGI01b-1o (part 2).
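The "continual learning without forgetting" property above can be made concrete with a toy counterexample of the problem the paper targets: a hypothetical single-weight model trained sequentially with plain gradient descent overwrites task A while learning task B (catastrophic forgetting). This sketch is not the paper's framework, only a minimal illustration of the failure mode that lifelong learning aims to avoid; the task targets and hyperparameters are arbitrary assumptions.

```python
# Minimal illustration of catastrophic forgetting (NOT the paper's framework):
# one scalar weight w, trained by gradient descent on task A, then on task B.

def train(w, target, steps=100, lr=0.1):
    """Gradient descent on the per-task loss (w - target)^2."""
    for _ in range(steps):
        w -= lr * 2.0 * (w - target)  # gradient of (w - target)^2 is 2(w - t)
    return w

def loss(w, target):
    return (w - target) ** 2

w = 0.0
w = train(w, target=2.0)        # learn task A (optimum at w = 2)
loss_a_before = loss(w, 2.0)    # near zero: task A is learned
w = train(w, target=-1.0)       # learn task B (optimum at w = -1)
loss_a_after = loss(w, 2.0)     # large: task A has been forgotten

print(loss_a_before, loss_a_after)
```

Naive sequential training leaves no trace of task A; lifelong learning methods, including the framework proposed here, aim to retain earlier tasks while acquiring new ones.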
Related papers
- Efficient and robust multi-task learning in the brain with modular task
primitives [2.6166087473624318]
We show that a modular network endowed with task primitives can learn multiple tasks well while keeping parameter counts and updates low.
We also show that the skills acquired with our approach are more robust to a broad range of perturbations compared to those acquired with other multi-task learning strategies.
arXiv Detail & Related papers (2021-05-28T21:07:54Z)
- MT-Opt: Continuous Multi-Task Robotic Reinforcement Learning at Scale [103.7609761511652]
We show how a large-scale collective robotic learning system can acquire a repertoire of behaviors simultaneously.
New tasks can be continuously instantiated from previously learned tasks.
We train and evaluate our system on a set of 12 real-world tasks with data collected from 7 robots.
arXiv Detail & Related papers (2021-04-16T16:38:02Z)
- Reinforcement Learning with Videos: Combining Offline Observations with Interaction [151.73346150068866]
Reinforcement learning is a powerful framework for robots to acquire skills from experience.
Videos of humans are a readily available source of broad and interesting experiences.
We propose a framework for reinforcement learning with videos.
arXiv Detail & Related papers (2020-11-12T17:15:48Z)
- Hierarchical Affordance Discovery using Intrinsic Motivation [69.9674326582747]
We propose an algorithm using intrinsic motivation to guide the learning of affordances for a mobile robot.
This algorithm can autonomously discover, learn, and adapt interrelated affordances without pre-programmed actions.
Once learned, these affordances may be used by the algorithm to plan sequences of actions in order to perform tasks of various difficulties.
arXiv Detail & Related papers (2020-09-23T07:18:21Z)
- An Overview of Deep Learning Architectures in Few-Shot Learning Domain [0.0]
Few-Shot Learning (also known as one-shot learning) is a sub-field of machine learning that aims to create models that can learn the desired objective from only a small amount of data.
We have reviewed some of the well-known deep learning-based approaches towards few-shot learning.
arXiv Detail & Related papers (2020-08-12T06:58:45Z)
- Self-supervised Knowledge Distillation for Few-shot Learning [123.10294801296926]
Few-shot learning is a promising learning paradigm due to its ability to learn out-of-order distributions quickly with only a few samples.
We propose a simple approach to improve the representation capacity of deep neural networks for few-shot learning tasks.
Our experiments show that, even in the first stage, self-supervision can outperform current state-of-the-art methods.
arXiv Detail & Related papers (2020-06-17T11:27:00Z)
- The Ingredients of Real-World Robotic Reinforcement Learning [71.92831985295163]
We discuss the elements that are needed for a robotic learning system that can continually and autonomously improve with data collected in the real world.
We propose a particular instantiation of such a system, using dexterous manipulation as our case study.
We demonstrate that our complete system can learn without any human intervention, acquiring a variety of vision-based skills with a real-world three-fingered hand.
arXiv Detail & Related papers (2020-04-27T03:36:10Z)
- Dark, Beyond Deep: A Paradigm Shift to Cognitive AI with Humanlike Common Sense [142.53911271465344]
We argue that the next generation of AI must embrace "dark" humanlike common sense for solving novel tasks.
We identify functionality, physics, intent, causality, and utility (FPICU) as the five core domains of cognitive AI with humanlike common sense.
arXiv Detail & Related papers (2020-04-20T04:07:28Z)
- Learning as Reinforcement: Applying Principles of Neuroscience for More General Reinforcement Learning Agents [1.0742675209112622]
We implement an architecture founded in principles of experimental neuroscience, by combining computationally efficient abstractions of biological algorithms.
Our approach is inspired by research on spike-timing dependent plasticity, the transition between short and long term memory, and the role of various neurotransmitters in rewarding curiosity.
The Neurons-in-a-Box architecture can learn in a wholly generalizable manner, and demonstrates an efficient way to build and apply representations without explicitly optimizing over a set of criteria or actions.
arXiv Detail & Related papers (2020-04-20T04:06:21Z)
- Intelligence, physics and information -- the tradeoff between accuracy and simplicity in machine learning [5.584060970507507]
I believe that viewing intelligence in terms of many integral aspects, together with a universal two-term tradeoff between task performance and complexity, provides two feasible perspectives.
In this thesis, I address several key questions in some aspects of intelligence, and study the phase transitions in the two-term tradeoff.
arXiv Detail & Related papers (2020-01-11T18:34:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.