When, where, and how to add new neurons to ANNs
- URL: http://arxiv.org/abs/2202.08539v1
- Date: Thu, 17 Feb 2022 09:32:08 GMT
- Title: When, where, and how to add new neurons to ANNs
- Authors: Kaitlin Maile, Emmanuel Rachelson, Hervé Luga, Dennis G. Wilson
- Abstract summary: Neurogenesis in ANNs is an understudied and difficult problem, even compared to other forms of structural learning like pruning.
We introduce a framework for studying the various facets of neurogenesis: when, where, and how to add neurons during the learning process.
- Score: 3.0969191504482243
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurogenesis in ANNs is an understudied and difficult problem, even compared to other forms of structural learning like pruning. By decomposing it into triggers and initializations, we introduce a framework for studying the various facets of neurogenesis: when, where, and how to add neurons during the learning process. We present the Neural Orthogonality (NORTH*) suite of neurogenesis strategies, combining layer-wise triggers and initializations based on the orthogonality of activations or weights to dynamically grow performant networks that converge to an efficient size. We evaluate our contributions against other recent neurogenesis works with MLPs.
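The abstract names the framework's two ingredients, triggers ("when/where") and initializations ("how"), without spelling them out. As a rough illustration only, the NumPy sketch below pairs an effective-rank trigger on a layer's activations with a new-neuron initialization orthogonal to the existing weights. The threshold, the entropy-based effective rank, and all function names are illustrative assumptions loosely inspired by the abstract, not the paper's exact NORTH* procedure.

```python
import numpy as np

def effective_rank(acts, eps=1e-12):
    """Effective rank of an (n_samples, width) activation matrix,
    computed from the entropy of its normalized singular values."""
    s = np.linalg.svd(acts, compute_uv=False)
    p = s / (s.sum() + eps)
    p = p[p > 0]
    return float(np.exp(-(p * np.log(p)).sum()))

def growth_trigger(acts, threshold=0.95):
    """Hypothetical 'when/where' trigger: propose growing this layer when
    its activations nearly span the available output space, i.e. the
    effective rank is close to the current width and little redundancy
    remains to absorb new signal."""
    return effective_rank(acts) / acts.shape[1] > threshold

def orthogonal_init(W, rng):
    """Hypothetical 'how' initialization: draw a random input-weight vector
    and subtract its projection onto the row space of W, so the new neuron
    starts orthogonal to every existing neuron in weight space."""
    v = rng.standard_normal(W.shape[1])
    coef, *_ = np.linalg.lstsq(W.T, v, rcond=None)  # least-squares projection
    v = v - W.T @ coef                              # orthogonal component
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# Toy usage: a 16-unit hidden layer with 32 inputs, observed on a batch of 256.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32)) / np.sqrt(32)   # (width, fan_in) weights
acts = np.tanh(rng.standard_normal((256, 32)) @ W.T)
if growth_trigger(acts):
    W = np.vstack([W, orthogonal_init(W, rng)])   # append one new neuron
print(W.shape)
```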
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs) with neuronal heterogeneity and neuromodulatory signaling.
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Astrocyte-Enabled Advancements in Spiking Neural Networks for Large Language Modeling [7.863029550014263]
Astrocyte-Modulated Spiking Neural Network (AstroSNN) exhibits exceptional performance in tasks involving memory retention and natural language generation.
AstroSNN shows low latency, high throughput, and reduced memory usage in practical applications.
arXiv Detail & Related papers (2023-12-12T06:56:31Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Low Tensor Rank Learning of Neural Dynamics [0.0]
We show that low-tensor-rank weights emerge naturally in RNNs trained to solve low-dimensional tasks.
Our findings provide insight on the evolution of population connectivity over learning in both biological and artificial neural networks.
arXiv Detail & Related papers (2023-08-22T17:08:47Z)
- Learning to Act through Evolution of Neural Diversity in Random Neural Networks [9.387749254963595]
In most artificial neural networks (ANNs), neural computation is abstracted to an activation function that is usually shared between all neurons.
We propose the optimization of neuro-centric parameters to attain a set of diverse neurons that can perform complex computations.
arXiv Detail & Related papers (2023-05-25T11:33:04Z)
- Dive into the Power of Neuronal Heterogeneity [8.6837371869842]
We highlight the challenges faced by backpropagation-based methods in optimizing Spiking Neural Networks (SNNs) and achieve more robust optimization of heterogeneous neurons in random networks using an Evolutionary Strategy (ES).
We find that membrane time constants play a crucial role in neural heterogeneity, and their distribution is similar to that observed in biological experiments.
arXiv Detail & Related papers (2023-05-19T07:32:29Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Modeling Associative Plasticity between Synapses to Enhance Learning of Spiking Neural Networks [4.736525128377909]
Spiking Neural Networks (SNNs) are the third generation of artificial neural networks that enable energy-efficient implementation on neuromorphic hardware.
We propose a robust and effective learning mechanism by modeling the associative plasticity between synapses.
Our approaches achieve superior performance on static and state-of-the-art neuromorphic datasets.
arXiv Detail & Related papers (2022-07-24T06:12:23Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input (see the sketch after this list).
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Non-linear Neurons with Human-like Apical Dendrite Activations [81.18416067005538]
We show that a standard neuron followed by our novel apical dendrite activation (ADA) can learn the XOR logical function with 100% accuracy.
We conduct experiments on six benchmark data sets from computer vision, signal processing and natural language processing.
arXiv Detail & Related papers (2020-02-02T21:09:39Z)
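The MPATH entry above describes neurons that keep a dynamic equilibrium by regulating their own activity. As a loose illustration of that general idea, and not the MPATH model itself, the toy sketch below adapts a leaky neuron's firing threshold so that its long-run firing rate tracks a target; all constants and function names are assumptions for illustration.

```python
import numpy as np

def simulate_threshold_homeostasis(inputs, target_rate=0.1, eta=0.01):
    """Toy sketch (not MPATH): a leaky neuron whose firing threshold
    drifts so that its running firing rate tracks target_rate."""
    v, theta = 0.0, 1.0          # membrane potential, adaptive threshold
    rate = 0.0                   # running estimate of the firing rate
    spikes = []
    for x in inputs:
        v = 0.9 * v + x          # leaky integration of the input
        fired = v >= theta
        if fired:
            v = 0.0              # reset after a spike
        rate = 0.99 * rate + 0.01 * float(fired)
        theta += eta * (rate - target_rate)   # homeostatic threshold update
        theta = max(theta, 1e-3)              # keep the threshold positive
        spikes.append(fired)
    return np.array(spikes), theta

rng = np.random.default_rng(1)
spikes, theta = simulate_threshold_homeostasis(np.abs(rng.standard_normal(5000)) * 0.3)
print(f"firing rate ~ {spikes.mean():.3f}, final threshold = {theta:.3f}")
```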
This list is automatically generated from the titles and abstracts of the papers on this site.
This site makes no guarantee about the quality of the information presented and accepts no responsibility for any consequences of its use.