Accretionary Learning with Deep Neural Networks
- URL: http://arxiv.org/abs/2111.10857v1
- Date: Sun, 21 Nov 2021 16:58:15 GMT
- Title: Accretionary Learning with Deep Neural Networks
- Authors: Xinyu Wei, Biing-Hwang Fred Juang, Ouya Wang, Shenglong Zhou and
Geoffrey Ye Li
- Abstract summary: We propose a new learning method named Accretionary Learning (AL) to emulate human learning.
The corresponding learning structure is modularized and can dynamically expand to register and use new knowledge.
We show that the new structure and the design methodology lead to a system that can grow to cope with increased cognitive complexity.
- Score: 36.65914244537912
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the fundamental limitations of Deep Neural Networks (DNNs) is their
inability to acquire and accumulate new cognitive capabilities. When new data
appear, such as new object classes that are not in the prescribed set of
objects being recognized, a conventional DNN cannot recognize them because of
the fundamental formulation it adopts. The current solution is typically to
re-design and re-train the entire network, perhaps with a new configuration,
on a newly expanded dataset to accommodate the new knowledge. This process is
quite different from that of a human learner. In this paper, we propose a new
learning method named Accretionary Learning (AL) to emulate human learning, in
that the set of objects to be recognized need not be pre-specified. The
corresponding learning structure is modularized and can dynamically expand to
register and use new knowledge. During accretionary learning, the system does
not need to be totally re-designed and re-trained as the set of objects grows
in size. The proposed DNN structure does not forget previous knowledge when
learning to recognize new data classes. We show that the new structure and the
design methodology lead to a system that can grow to cope with increased
cognitive complexity while providing stable and superior overall performance.
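
The abstract describes a modularized structure that dynamically expands to register new knowledge without re-designing or re-training the whole system. The snippet below is a minimal PyTorch sketch of that general idea, not the authors' architecture; the class names, per-class head design, and training choices are illustrative assumptions.

```python
# Minimal sketch of a dynamically expandable, modular classifier in PyTorch.
# It illustrates the general idea described in the abstract -- adding a new
# per-class module without re-designing or re-training the existing ones --
# and is NOT the authors' exact architecture; all names are illustrative.
import torch
import torch.nn as nn


class ExpandableClassifier(nn.Module):
    def __init__(self, backbone: nn.Module, feature_dim: int):
        super().__init__()
        self.backbone = backbone              # shared feature extractor
        self.feature_dim = feature_dim
        self.class_heads = nn.ModuleList()    # one small module per known class

    def add_class(self) -> int:
        """Register a new class by appending a fresh head; old heads are untouched."""
        head = nn.Sequential(
            nn.Linear(self.feature_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),                 # per-class evidence score
        )
        self.class_heads.append(head)
        return len(self.class_heads) - 1      # index of the new class

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = self.backbone(x)
        # Each head scores its own class; scores are concatenated into logits.
        return torch.cat([head(feats) for head in self.class_heads], dim=1)


if __name__ == "__main__":
    model = ExpandableClassifier(backbone=nn.Flatten(), feature_dim=28 * 28)
    for _ in range(3):                        # start with three known classes
        model.add_class()
    x = torch.randn(8, 1, 28, 28)
    print(model(x).shape)                     # torch.Size([8, 3])

    new_idx = model.add_class()               # accrete a fourth class
    # Only the new head is handed to the optimizer, so previously learned
    # heads keep their weights while the new class is being learned.
    optimizer = torch.optim.Adam(model.class_heads[new_idx].parameters(), lr=1e-3)
    print(model(x).shape)                     # torch.Size([8, 4])
```

In this sketch, accreting a class only trains the newly added head, which is one simple way to avoid forgetting; the actual AL design and training procedure in the paper may differ.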
Related papers
- Continual Learning with Deep Learning Methods in an Application-Oriented
Context [0.0]
An important research area of Artificial Intelligence (AI) deals with the automatic derivation of knowledge from data.
One type of machine learning algorithm that can be categorized as a "deep learning" model is the Deep Neural Network (DNN).
DNNs are affected by a problem that prevents new knowledge from being added to an existing knowledge base.
arXiv Detail & Related papers (2022-07-12T10:13:33Z) - Increasing Depth of Neural Networks for Life-long Learning [2.0305676256390934]
We propose a novel method for continual learning based on increasing the depth of neural networks.
This work explores whether extending neural network depth may be beneficial in a life-long learning setting.
arXiv Detail & Related papers (2022-02-22T11:21:41Z) - The Role of Bio-Inspired Modularity in General Learning [0.0]
One goal of general intelligence is to learn novel information without overwriting prior learning.
Bootstrapping previous knowledge may allow for faster learning of a novel task.
Modularity may offer a solution for weight-update learning methods that adheres to the constraints of learning without catastrophic forgetting and of bootstrapping.
arXiv Detail & Related papers (2021-09-23T18:45:34Z) - Incremental Deep Neural Network Learning using Classification Confidence
Thresholding [4.061135251278187]
Most modern neural networks for classification fail to take into account the concept of the unknown.
This paper proposes the Classification Confidence Threshold approach to prime neural networks for incremental learning; a minimal sketch of this thresholding idea appears after this list.
arXiv Detail & Related papers (2021-06-21T22:46:28Z) - Towards a Universal Continuous Knowledge Base [49.95342223987143]
We propose a method for building a continuous knowledge base that can store knowledge imported from multiple neural networks.
We import the knowledge from multiple models into the knowledge base, from which the fused knowledge is exported back to a single model.
Experiments on text classification show promising results.
arXiv Detail & Related papers (2020-12-25T12:27:44Z) - Neural Networks Enhancement with Logical Knowledge [83.9217787335878]
We propose an extension of KENN for relational data.
The results show that KENN is capable of increasing the performance of the underlying neural network even in the presence of relational data.
arXiv Detail & Related papers (2020-09-13T21:12:20Z) - Incremental Training of a Recurrent Neural Network Exploiting a
Multi-Scale Dynamic Memory [79.42778415729475]
We propose a novel incrementally trained recurrent architecture explicitly targeting multi-scale learning.
We show how to extend the architecture of a simple RNN by separating its hidden state into different modules.
We discuss a training algorithm where new modules are iteratively added to the model to learn progressively longer dependencies.
arXiv Detail & Related papers (2020-06-29T08:35:49Z) - Neuroevolutionary Transfer Learning of Deep Recurrent Neural Networks
through Network-Aware Adaptation [57.46377517266827]
This work introduces network-aware adaptive structure transfer learning (N-ASTL).
N-ASTL utilizes statistical information related to the source network's topology and weight distribution to inform how new input and output neurons are to be integrated into the existing structure.
Results show improvements over the prior state of the art, including the ability to transfer to challenging real-world datasets where this was not previously possible.
arXiv Detail & Related papers (2020-06-04T06:07:30Z) - Few-Shot Class-Incremental Learning [68.75462849428196]
We focus on a challenging but practical few-shot class-incremental learning (FSCIL) problem.
FSCIL requires CNN models to incrementally learn new classes from very few labelled samples, without forgetting the previously learned ones.
We represent the knowledge using a neural gas (NG) network, which can learn and preserve the topology of the feature manifold formed by different classes.
arXiv Detail & Related papers (2020-04-23T03:38:33Z) - Deep Adaptive Semantic Logic (DASL): Compiling Declarative Knowledge
into Deep Neural Networks [11.622060073764944]
We introduce Deep Adaptive Semantic Logic (DASL), a novel framework for automating the generation of deep neural networks.
DASL incorporates user-provided formal knowledge to improve learning from data.
We evaluate DASL on a visual relationship detection task and demonstrate that the addition of commonsense knowledge improves performance by 10.7% in a data-scarce setting.
arXiv Detail & Related papers (2020-03-16T17:37:25Z)
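
As noted in the Classification Confidence Thresholding entry above, treating low-confidence predictions as unknown lets a network flag inputs that may belong to new classes and so primes it for incremental learning. The sketch below illustrates that generic idea only; the threshold value, rejection rule, and function names are assumptions rather than the paper's implementation.

```python
# Generic sketch of confidence-threshold rejection for incremental learning,
# loosely illustrating the idea behind the Classification Confidence Threshold
# entry above. The threshold value and names are illustrative assumptions.
import torch
import torch.nn.functional as F

UNKNOWN = -1  # label returned when no existing class is confident enough


def predict_with_rejection(logits: torch.Tensor, threshold: float = 0.9) -> torch.Tensor:
    """Return the predicted class, or UNKNOWN when the max softmax confidence is low.

    Inputs flagged as UNKNOWN can be buffered and later used to register a new
    class, instead of being forced into one of the existing classes.
    """
    probs = F.softmax(logits, dim=1)
    conf, pred = probs.max(dim=1)
    pred[conf < threshold] = UNKNOWN
    return pred


if __name__ == "__main__":
    logits = torch.tensor([[4.0, 0.1, 0.2],    # confident -> class 0
                           [0.5, 0.4, 0.6]])   # ambiguous -> UNKNOWN
    print(predict_with_rejection(logits))      # tensor([ 0, -1])
```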