Self-Constructing Neural Networks Through Random Mutation
- URL: http://arxiv.org/abs/2103.15692v1
- Date: Mon, 29 Mar 2021 15:27:38 GMT
- Title: Self-Constructing Neural Networks Through Random Mutation
- Authors: Samuel Schmidgall
- Abstract summary: This paper presents a simple method for learning neural architecture through random mutation.
It demonstrates that 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The search for neural architecture is producing many of the most exciting results in artificial intelligence. It has become increasingly apparent that task-specific neural architecture plays a crucial role in effectively solving problems. This paper presents a simple method for learning neural architecture through random mutation. This method demonstrates that 1) neural architecture may be learned during the agent's lifetime, 2) neural architecture may be constructed over a single lifetime without any initial connections or neurons, and 3) architectural modifications enable rapid adaptation to dynamic and novel task scenarios. Starting without any neurons or connections, this method constructs a neural architecture capable of high performance on several tasks. The lifelong learning capabilities of this method are demonstrated in an environment without episodic resets, even learning under constantly changing morphology, limb disablement, and changing task goals, all without losing locomotion capabilities.
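The growth procedure described above can be pictured as a mutate-and-keep loop. The sketch below is a minimal, hypothetical rendering of lifetime architecture growth by random mutation; the class, the mutation operators, and the keep-if-no-worse acceptance rule are illustrative assumptions, not the paper's exact method:

```python
import copy
import random

class GrowingNetwork:
    """A network that starts with no hidden neurons or connections and
    grows by random structural mutation. Names and the mutation set are
    illustrative, not the paper's exact operators."""

    def __init__(self, n_inputs, n_outputs):
        self.n_inputs = n_inputs
        self.n_outputs = n_outputs
        self.hidden = []          # ids of hidden neurons
        self.conns = {}           # (src, dst) -> weight
        self.next_id = n_inputs + n_outputs

    def mutate(self):
        """Apply one randomly chosen structural or weight mutation."""
        random.choice([self.add_neuron, self.add_connection,
                       self.perturb_weight])()

    def add_neuron(self):
        self.hidden.append(self.next_id)
        self.next_id += 1

    def add_connection(self):
        sources = list(range(self.n_inputs)) + self.hidden
        targets = self.hidden + list(range(self.n_inputs,
                                           self.n_inputs + self.n_outputs))
        if sources and targets:
            src, dst = random.choice(sources), random.choice(targets)
            self.conns[(src, dst)] = random.gauss(0.0, 1.0)

    def perturb_weight(self):
        if self.conns:
            key = random.choice(list(self.conns))
            self.conns[key] += random.gauss(0.0, 0.1)

def lifetime_learning(net, evaluate, steps=1000):
    """Mutate-and-keep loop over a single lifetime: a mutation is kept
    only if task performance does not drop (one plausible acceptance
    rule; `evaluate` is a task-specific reward function)."""
    best = evaluate(net)
    for _ in range(steps):
        candidate = copy.deepcopy(net)
        candidate.mutate()
        score = evaluate(candidate)
        if score >= best:
            net, best = candidate, score
    return net
```

A typical use would wrap a task simulator in `evaluate` and call `lifetime_learning(GrowingNetwork(n_inputs=4, n_outputs=2), evaluate)`; the deep copy keeps a rejected mutation from corrupting the current network.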
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Simple and Effective Transfer Learning for Neuro-Symbolic Integration [50.592338727912946]
A potential solution to the limitations of purely neural approaches is Neuro-Symbolic Integration (NeSy), where neural methods are combined with symbolic reasoning.
Most of these methods exploit a neural network to map perceptions to symbols and a logical reasoner to predict the output of the downstream task.
They suffer from several issues, including slow convergence, learning difficulties with complex perception tasks, and convergence to local minima.
This paper proposes a simple yet effective method to ameliorate these problems.
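That pipeline, a neural network mapping perceptions to symbols followed by a logical reasoner producing the task output, can be made concrete with a toy example. The sketch below assumes the digit-addition setting commonly used in NeSy work; it illustrates the general scheme, not this paper's specific method:

```python
import torch
import torch.nn as nn

class PerceptionNet(nn.Module):
    """Maps a raw perception (e.g. a flattened 28x28 image) to a
    distribution over discrete symbols (e.g. digits 0-9)."""
    def __init__(self, in_dim=784, n_symbols=10):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                                 nn.Linear(128, n_symbols))

    def forward(self, x):
        return torch.softmax(self.net(x), dim=-1)

def symbolic_sum(p_a, p_b):
    """Toy logical reasoner for digit addition: the probability that
    a + b = s, marginalizing over both symbol distributions. Training
    against a loss on this output pushes gradients back through the
    perception network."""
    n = p_a.shape[-1]
    return torch.stack(
        [sum(p_a[..., a] * p_b[..., s - a]
             for a in range(max(0, s - n + 1), min(s, n - 1) + 1))
         for s in range(2 * n - 1)],
        dim=-1)
```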
arXiv Detail & Related papers (2024-02-21T15:51:01Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning for spiking neural networks with nearly zero forgetting.
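The principal-subspace claim rests on a classical property of combined Hebbian and anti-Hebbian updates. A minimal sketch using Oja's subspace rule (a textbook stand-in for the paper's recurrent lateral-connection circuit, and assuming centered data):

```python
import numpy as np

def hebbian_subspace(X, k, lr=1e-3, epochs=10, seed=0):
    """Oja-style subspace rule: a Hebbian term (y x^T) minus an
    anti-Hebbian decay (y y^T W) drives the rows of W toward the
    top-k principal subspace of the activity in X (n_samples x d)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=0.1, size=(k, X.shape[1]))
    for _ in range(epochs):
        for x in X:
            y = W @ x  # post-synaptic activity
            W += lr * (np.outer(y, x) - np.outer(y, y) @ W)
    return W  # rows approximately span the principal subspace of X
```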
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Learning to acquire novel cognitive tasks with evolution, plasticity and meta-meta-learning [3.8073142980733]
In meta-learning, networks are trained with external algorithms to learn tasks that require acquiring, storing and exploiting unpredictable information for each new instance of the task.
Here we evolve neural networks, endowed with plastic connections, over a sizable set of simple meta-learning tasks based on a neuroscience modelling framework.
The resulting evolved network can automatically acquire a novel simple cognitive task, never seen during training, through the spontaneous operation of its evolved neural organization and plasticity structure.
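Plastic connections of this kind are usually modeled as a fixed weight plus a within-lifetime Hebbian trace, with the fixed weights and plasticity gains set by the outer evolutionary loop. A minimal sketch assuming a Miconi-style decaying Hebbian trace (an assumption, not this paper's exact model):

```python
import numpy as np

def plastic_step(x, w, alpha, hebb, eta=0.1):
    """One forward step through a layer with plastic connections:
    effective weight = fixed weight + plasticity gain * Hebbian trace.
    The trace itself changes during the lifetime, so the network keeps
    adapting without an external learning algorithm."""
    y = np.tanh(x @ (w + alpha * hebb))  # plastic effective weights
    # decaying Hebbian trace built from pre/post activity
    hebb = (1.0 - eta) * hebb + eta * np.outer(x, y)
    return y, hebb
```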
arXiv Detail & Related papers (2021-12-16T03:18:01Z)
- Artificial Neural Variability for Deep Learning: On Overfitting, Noise Memorization, and Catastrophic Forgetting [135.0863818867184]
Artificial neural variability (ANV) helps artificial neural networks learn some advantages from "natural" neural networks.
ANV acts as an implicit regularizer of the mutual information between the training data and the learned model.
It can effectively relieve overfitting, label noise memorization, and catastrophic forgetting at negligible costs.
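One common way to realize such variability is to inject zero-mean noise into a layer's responses during training. A hedged sketch in that spirit; the layer, the injection site, and the noise scale are assumptions rather than the paper's exact mechanism:

```python
import torch
import torch.nn as nn

class NoisyLinear(nn.Module):
    """Linear layer that adds zero-mean Gaussian noise to its
    pre-activations during training, mimicking biological response
    variability. At eval time it behaves as a plain linear layer."""
    def __init__(self, in_features, out_features, sigma=0.1):
        super().__init__()
        self.linear = nn.Linear(in_features, out_features)
        self.sigma = sigma  # assumed noise scale

    def forward(self, x):
        out = self.linear(x)
        if self.training and self.sigma > 0:
            out = out + self.sigma * torch.randn_like(out)
        return out
```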
arXiv Detail & Related papers (2020-11-12T06:06:33Z)
- Architecture Agnostic Neural Networks [33.803822613725984]
We create families of architecture agnostic neural networks not trained via backpropagation.
These high-performing network families share the same sparsity and distribution of binary weights, and succeed in both static and dynamic tasks.
arXiv Detail & Related papers (2020-11-05T09:04:07Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project explored rules for growing the connections between neurons in spiking neural networks as a learning mechanism.
Results in a simulation environment showed that, for a given set of parameters, it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Efficient Architecture Search for Continual Learning [36.998565674813285]
Continual learning with neural networks aims to learn a sequence of tasks well.
It is often confronted with three challenges: (1) overcoming the catastrophic forgetting problem, (2) adapting the current network to new tasks, and (3) controlling its model complexity.
We propose a novel approach named Continual Learning with Efficient Architecture Search, or CLEAS for short.
arXiv Detail & Related papers (2020-06-07T02:59:29Z)