The Role of Bio-Inspired Modularity in General Learning
- URL: http://arxiv.org/abs/2109.15097v1
- Date: Thu, 23 Sep 2021 18:45:34 GMT
- Title: The Role of Bio-Inspired Modularity in General Learning
- Authors: Rachel A. StClair, William Edward Hahn, and Elan Barenholtz
- Abstract summary: One goal of general intelligence is to learn novel information without overwriting prior learning.
Bootstrapping previous knowledge may allow for faster learning of a novel task.
Modularity may offer a solution to weight-update learning methods that adheres to the learning without catastrophic forgetting and bootstrapping constraints.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One goal of general intelligence is to learn novel information without
overwriting prior learning. The utility of learning without catastrophic
forgetting (CF) is twofold: first, the system can return to previously learned
tasks after learning something new; second, bootstrapping previous knowledge
may allow for faster learning of a novel task. Previous approaches to CF and
bootstrapping are primarily based on modifying learning in the form of changing
weights to tune the model to the current task, overwriting previously tuned
weights from previous tasks. However, another critical factor that has been
largely overlooked is the initial network topology, or architecture. Here, we
argue that the topology of biological brains likely evolved certain features
that are designed to achieve this kind of informational conservation. In
particular, we consider that the highly conserved property of modularity may
offer a solution to weight-update learning methods that adheres to the learning
without catastrophic forgetting and bootstrapping constraints. Final
considerations are then made on how to combine these two learning objectives in
a dynamical, general learning system.
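The abstract's central claim is architectural rather than algorithmic: keeping task-specific modules separate prevents weight updates for one task from overwriting another, while a shared substrate lets a new task bootstrap from old ones. The PyTorch sketch below is a minimal illustration of that idea under our own assumptions; the class and method names are hypothetical and do not come from the paper.

```python
# Minimal sketch of the modularity argument (hypothetical names, not the
# paper's implementation): one frozen module per old task prevents weight
# updates from overwriting prior learning, while a shared trunk lets a new
# task bootstrap from previously learned features.
import torch
import torch.nn as nn

class ModularNet(nn.Module):
    def __init__(self, in_dim: int, hidden_dim: int):
        super().__init__()
        # Shared trunk, reused by every task (the bootstrapping path).
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden_dim), nn.ReLU())
        self.task_heads = nn.ModuleList()  # one module per learned task
        self.hidden_dim = hidden_dim

    def add_task(self, out_dim: int) -> int:
        # Freeze every existing task module so later weight updates cannot
        # overwrite it (the no-catastrophic-forgetting constraint).
        for head in self.task_heads:
            for p in head.parameters():
                p.requires_grad = False
        self.task_heads.append(nn.Linear(self.hidden_dim, out_dim))
        return len(self.task_heads) - 1

    def forward(self, x: torch.Tensor, task_id: int) -> torch.Tensor:
        return self.task_heads[task_id](self.trunk(x))

# Usage: adding task 1 freezes task 0's module automatically.
net = ModularNet(in_dim=8, hidden_dim=16)
t0 = net.add_task(out_dim=2)
t1 = net.add_task(out_dim=3)
optimizer = torch.optim.Adam(p for p in net.parameters() if p.requires_grad)
logits = net(torch.randn(4, 8), task_id=t1)
```

In this sketch the shared trunk stays plastic; a stricter variant would freeze it after the first task so that only newly added modules ever change.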
Related papers
- Catastrophic Forgetting in Deep Learning: A Comprehensive Taxonomy [0.2796197251957244]
Catastrophic Forgetting (CF) can lead to a significant loss of accuracy in Deep Learning models.
CF was first observed by McCloskey and Cohen in 1989 and remains an active research topic.
This article surveys recent studies that tackle CF in modern Deep Learning models that use gradient descent as their learning algorithm.
arXiv Detail & Related papers (2023-12-16T22:24:54Z)
- IF2Net: Innately Forgetting-Free Networks for Continual Learning [49.57495829364827]
Continual learning aims to incrementally absorb new concepts without interfering with previously learned knowledge.
Motivated by the characteristics of neural networks, we investigated how to design an Innately Forgetting-Free Network (IF2Net).
IF2Net allows a single network to inherently learn unlimited mapping rules without being told task identities at test time.
arXiv Detail & Related papers (2023-06-18T05:26:49Z)
- Leveraging Old Knowledge to Continually Learn New Classes in Medical Images [16.730335437094592]
We focus on how old knowledge can be leveraged to learn new classes without catastrophic forgetting.
Our solution is able to achieve superior performance over state-of-the-art baselines in terms of class accuracy and forgetting.
arXiv Detail & Related papers (2023-03-24T02:10:53Z)
- Adaptively Integrated Knowledge Distillation and Prediction Uncertainty for Continual Learning [71.43841235954453]
Current deep learning models often suffer from catastrophic forgetting of old knowledge when continually learning new knowledge.
Existing strategies to alleviate this issue often fix the trade-off between keeping old knowledge (stability) and learning new knowledge (plasticity); a generic sketch of such a fixed trade-off follows this entry.
arXiv Detail & Related papers (2023-01-18T05:36:06Z)
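As a concrete reading of a "fixed" trade-off: regularization-based methods (EWC is the classic example) add a quadratic penalty pulling parameters toward their old-task values, weighted by a coefficient chosen once and never adapted. A minimal sketch, assuming PyTorch tensors and a precomputed per-parameter importance (e.g., a diagonal Fisher estimate); this is generic, not the surveyed paper's method.

```python
# Generic fixed stability-plasticity trade-off (EWC-style; hypothetical
# helper, not the paper's method). `lam` is set once: larger values favor
# stability (old knowledge), smaller values favor plasticity (new tasks).
import torch

def continual_loss(new_task_loss, params, old_params, importance, lam=100.0):
    # new_task_loss + lam * sum_i F_i * (theta_i - theta_i_old)^2
    penalty = torch.zeros(())
    for p, p_old, fisher in zip(params, old_params, importance):
        penalty = penalty + (fisher * (p - p_old) ** 2).sum()
    return new_task_loss + lam * penalty
```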
- Hierarchically Structured Task-Agnostic Continual Learning [0.0]
We take a task-agnostic view of continual learning and develop a hierarchical information-theoretic optimality principle.
We propose a neural network layer, called the Mixture-of-Variational-Experts layer, that alleviates forgetting by creating a set of information processing paths (a generic mixture-of-experts sketch follows this entry).
Our approach can operate in a task-agnostic way, i.e., it does not require task-specific knowledge, unlike many existing continual learning algorithms.
arXiv Detail & Related papers (2022-11-14T19:53:15Z)
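For intuition about "a set of information processing paths," here is a generic soft mixture-of-experts layer: a gating network weights several expert paths per input, so different inputs (or tasks) can rely on different paths. This is an illustrative stand-in, not the Mixture-of-Variational-Experts layer itself.

```python
# Generic soft mixture-of-experts routing (illustrative stand-in, not the
# MoVE layer): the gate produces per-expert weights for each input.
import torch
import torch.nn as nn

class MixtureLayer(nn.Module):
    def __init__(self, dim: int, num_experts: int):
        super().__init__()
        self.experts = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_experts))
        self.gate = nn.Linear(dim, num_experts)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        weights = torch.softmax(self.gate(x), dim=-1)             # (B, E)
        outs = torch.stack([e(x) for e in self.experts], dim=-1)  # (B, D, E)
        return torch.einsum("be,bde->bd", weights, outs)

layer = MixtureLayer(dim=16, num_experts=4)
y = layer(torch.randn(8, 16))
```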
- Anti-Retroactive Interference for Lifelong Learning [65.50683752919089]
We design a paradigm for lifelong learning based on meta-learning and the associative mechanism of the brain.
It tackles the problem from two aspects: extracting knowledge and memorizing knowledge.
Theoretical analysis shows that the proposed learning paradigm can make the models of different tasks converge to the same optimum.
arXiv Detail & Related papers (2022-08-27T09:27:36Z)
- Pre-Train Your Loss: Easy Bayesian Transfer Learning with Informative Priors [59.93972277761501]
We show that we can learn highly informative posteriors from the source task through supervised or self-supervised approaches.
This simple modular approach enables significant performance gains and more data-efficient learning on a variety of downstream classification and segmentation tasks (a generic sketch of such an objective follows this entry).
arXiv Detail & Related papers (2022-05-20T16:19:30Z)
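One common way a source-task posterior can act as a prior downstream: if the posterior is approximated as a diagonal Gaussian N(mu, sigma^2) over weights, the downstream MAP objective adds a precision-weighted pull toward mu. A minimal sketch under that assumption; this is generic, not the paper's released code.

```python
# Generic MAP objective with an informative Gaussian prior learned on a
# source task (hypothetical helper, not the paper's implementation).
import torch

def map_loss(task_loss, params, prior_mean, prior_var):
    # task_loss + sum_i (theta_i - mu_i)^2 / (2 * sigma_i^2), up to a constant
    neg_log_prior = torch.zeros(())
    for p, mu, var in zip(params, prior_mean, prior_var):
        neg_log_prior = neg_log_prior + ((p - mu) ** 2 / (2.0 * var)).sum()
    return task_loss + neg_log_prior
```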
- Accretionary Learning with Deep Neural Networks [36.65914244537912]
We propose a new learning method named Accretionary Learning (AL) to emulate human learning.
The corresponding learning structure is modularized, which can dynamically expand to register and use new knowledge.
We show that the new structure and the design methodology lead to a system that can grow to cope with increased cognitive complexity.
arXiv Detail & Related papers (2021-11-21T16:58:15Z)
- Preserving Earlier Knowledge in Continual Learning with the Help of All Previous Feature Extractors [63.21036904487014]
Continually learning new knowledge over time is a desirable capability for intelligent systems that must recognize more and more classes of objects.
We propose a simple yet effective fusion mechanism that includes all the previously learned feature extractors in the model; a generic fusion sketch follows this entry.
Experiments on multiple classification tasks show that the proposed approach can effectively reduce the forgetting of old knowledge, achieving state-of-the-art continual learning performance.
arXiv Detail & Related papers (2021-04-28T07:49:24Z)
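A plain reading of "including all the previously learned feature extractors": freeze each old extractor, concatenate its features with those of the current extractor, and classify on the fused vector. A minimal sketch under that reading; the names are hypothetical and this is not the paper's exact mechanism.

```python
# Generic fusion of frozen feature extractors (hypothetical names, not the
# paper's exact mechanism): old extractors stay frozen, and their features
# are concatenated with the current extractor's before classification.
import torch
import torch.nn as nn

class FusedModel(nn.Module):
    def __init__(self, old_extractors, new_extractor, feat_dim, num_classes):
        super().__init__()
        self.old = nn.ModuleList(old_extractors)
        for p in self.old.parameters():
            p.requires_grad = False  # earlier knowledge stays intact
        self.new = new_extractor
        self.head = nn.Linear(feat_dim * (len(old_extractors) + 1), num_classes)

    def forward(self, x):
        feats = [e(x) for e in self.old] + [self.new(x)]
        return self.head(torch.cat(feats, dim=-1))

# Usage with toy extractors that each emit 16-dim features:
olds = [nn.Linear(8, 16) for _ in range(2)]
model = FusedModel(olds, nn.Linear(8, 16), feat_dim=16, num_classes=5)
out = model(torch.randn(4, 8))  # fuses 3 x 16 = 48 features
```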
- Automated Relational Meta-learning [95.02216511235191]
We propose an automated relational meta-learning framework that automatically extracts the cross-task relations and constructs the meta-knowledge graph.
We conduct extensive experiments on 2D toy regression and few-shot image classification, and the results demonstrate the superiority of ARML over state-of-the-art baselines.
arXiv Detail & Related papers (2020-01-03T07:02:25Z)