Evolutionary algorithms as an alternative to backpropagation for
supervised training of Biophysical Neural Networks and Neural ODEs
- URL: http://arxiv.org/abs/2311.10869v2
- Date: Tue, 21 Nov 2023 02:49:07 GMT
- Title: Evolutionary algorithms as an alternative to backpropagation for
supervised training of Biophysical Neural Networks and Neural ODEs
- Authors: James Hazelden, Yuhan Helena Liu, Eli Shlizerman, Eric Shea-Brown
- Abstract summary: We investigate the use of "gradient-estimating" evolutionary algorithms for training biophysically based neural networks.
We find that EAs have several advantages that make them preferable to direct BP.
Our findings suggest that biophysical neurons could provide useful benchmarks for testing the limits of BP methods.
- Score: 12.357635939839696
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Training networks consisting of biophysically accurate neuron models could
allow for new insights into how brain circuits can organize and solve tasks. We
begin by analyzing the extent to which the central algorithm for neural network
learning -- stochastic gradient descent through backpropagation (BP) -- can be
used to train such networks. We find that the very properties biophysically based neural network models need for accurate modelling, such as stiffness, high nonlinearity, and long evaluation timeframes relative to spike times, make BP unstable and divergent in a variety of cases. To address these instabilities,
and inspired by recent work, we investigate the use of "gradient-estimating"
evolutionary algorithms (EAs) for training biophysically based neural networks.
We find that EAs have several advantages that make them preferable to direct BP: they are forward-pass only, robust to noisy and rigid losses, allow for discrete loss formulations, and potentially facilitate a more global exploration of parameters. We apply our method to train a recurrent network of
Morris-Lecar neuron models on a stimulus integration and working memory task,
and show how it can succeed in cases where direct BP is inapplicable. To test the viability of EAs more broadly, we apply them to a general neural ODE problem and a stiff neural ODE benchmark and again find that EAs can outperform direct BP, especially in the over-parameterized regime. Our
findings suggest that biophysical neurons could provide useful benchmarks for
testing the limits of BP-adjacent methods, and demonstrate the viability of EAs
for training networks with complex components.
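As a concrete illustration of the "gradient-estimating" idea, the sketch below implements an antithetic evolution-strategies estimator on a toy curve-fitting loss; in the paper's setting the forward pass would instead integrate a biophysical network such as the Morris-Lecar model. The task, hyperparameters, and names here are illustrative assumptions, not the authors' exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

def loss(theta):
    # Black-box objective: ES needs only forward evaluations, so the
    # simulator behind it may be stiff, spiking, or non-differentiable.
    # A damped oscillation stands in here for a neuron voltage trace.
    t = np.linspace(0.0, 1.0, 100)
    target = np.exp(-2.0 * t) * np.sin(3.0 * t)
    pred = np.exp(-theta[0] * t) * np.sin(theta[1] * t)
    return np.mean((pred - target) ** 2)

theta = np.array([0.5, 1.0])    # initial guess; true values are (2.0, 3.0)
sigma, lr, pop = 0.1, 1.0, 64   # noise scale, step size, population size

for step in range(300):
    eps = rng.standard_normal((pop, theta.size))
    # Antithetic sampling: evaluate +eps and -eps to cut estimator variance.
    f_plus = np.array([loss(theta + sigma * e) for e in eps])
    f_minus = np.array([loss(theta - sigma * e) for e in eps])
    # Monte Carlo estimate of the gradient of the Gaussian-smoothed loss.
    grad = ((f_plus - f_minus)[:, None] * eps).mean(axis=0) / (2.0 * sigma)
    theta = theta - lr * grad

print(theta, loss(theta))       # theta should drift toward (2.0, 3.0)
```

Because the estimator touches the simulator only through forward evaluations, the loss can be discrete or noisy, which is exactly the regime the abstract argues breaks direct BP.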
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
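As a rough illustration of the forward-forward flavor of learning that CSDP builds on: each layer locally raises a "goodness" score on positive samples and lowers it on negative ones, with no backward pass. This is a generic sketch, not the paper's spiking, contrastive-signal-dependent rule; all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((16, 8))            # one layer's weights

def local_update(W, x, positive, lr=0.01):
    h = np.maximum(W @ x, 0.0)                    # forward pass only
    goodness = (h ** 2).sum()                     # squared layer activity
    # Locally ascend goodness on positive data, descend on negative data;
    # d(goodness)/dW_ij = 2 * h_i * 1[h_i > 0] * x_j, no backward pass needed.
    dW = np.outer(2.0 * h * (h > 0), x)
    sign = 1.0 if positive else -1.0
    return W + sign * lr * dW, goodness

W, g_pos = local_update(W, rng.standard_normal(8), positive=True)
W, g_neg = local_update(W, rng.standard_normal(8), positive=False)
```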
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Spatio-temporal Structure of Excitation and Inhibition Emerges in Spiking Neural Networks with and without Biologically Plausible Constraints [0.06752396542927405]
We present a Spiking Neural Network (SNN) model that incorporates learnable synaptic delays.
We implement a dynamic pruning strategy that combines DEEP R for connection removal and RigL for connection reintroduction.
We observed that spatio-temporal patterns of excitation and inhibition appeared in the more biologically plausible model as well.
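A minimal sketch of the prune-and-regrow step described above, assuming the usual formulations: DEEP R-style removal of the weakest active connections and RigL-style reintroduction of dormant connections with the largest gradient magnitude, at fixed sparsity. Function and variable names are illustrative.

```python
import numpy as np

def prune_and_regrow(w, grad, mask, frac=0.05):
    # DEEP R-style removal: deactivate the weakest surviving connections
    # (DEEP R proper removes weights whose sign would flip; smallest
    # magnitude is a common stand-in).
    active = np.flatnonzero(mask)
    k = max(1, int(frac * active.size))
    drop = active[np.argsort(np.abs(w[active]))[:k]]
    mask[drop] = False
    # RigL-style reintroduction: activate the dormant connections with the
    # largest gradient magnitude, keeping overall sparsity constant.
    dormant = np.flatnonzero(~mask)
    grow = dormant[np.argsort(-np.abs(grad[dormant]))[:k]]
    mask[grow] = True
    w = w.copy()
    w[grow] = 0.0          # reintroduced weights restart from zero
    return w * mask, mask

rng = np.random.default_rng(0)
w, grad = rng.standard_normal(100), rng.standard_normal(100)
mask = rng.random(100) < 0.2           # start at 20% density
w, mask = prune_and_regrow(w, grad, mask)
```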
arXiv Detail & Related papers (2024-07-07T11:55:48Z)
- Biologically-Plausible Topology Improved Spiking Actor Network for Efficient Deep Reinforcement Learning [15.143466733327566]
Recent advances in neuroscience have revealed that the human brain achieves efficient reward-based learning.
The success of Deep Reinforcement Learning (DRL) is largely attributed to utilizing Artificial Neural Networks (ANNs) as function approximators.
We propose a novel alternative function approximator, the Biologically-Plausible Topology improved Spiking Actor Network (BPT-SAN).
arXiv Detail & Related papers (2024-03-29T13:25:19Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- SPICEprop: Backpropagating Errors Through Memristive Spiking Neural Networks [2.8971214387667494]
We present a fully memristive spiking neural network (MSNN) consisting of novel memristive neurons trained using the backpropagation through time (BPTT) learning rule.
Gradient descent is applied directly to the memristive integrate-and-fire (MIF) neuron designed using analog SPICE circuit models.
We achieve 97.58% accuracy on the MNIST testing dataset and 75.26% on the Fashion-MNIST testing dataset, the highest accuracies among all fully MSNNs.
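For flavor, here is a generic surrogate-gradient BPTT sketch on a single leaky integrate-and-fire neuron; it stands in for, and does not reproduce, the paper's analog SPICE MIF circuit models. All names and constants are illustrative.

```python
import torch

class SpikeFn(torch.autograd.Function):
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 1.0).float()                  # hard threshold at v = 1
    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # Surrogate derivative: a smooth bump around threshold lets BPTT
        # carry error through the discontinuous spike.
        return grad_out / (1.0 + 10.0 * (v - 1.0).abs()) ** 2

w = torch.randn(1, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)
x = torch.rand(50)                                # input current, 50 time steps
for epoch in range(100):
    v, spikes = torch.zeros(1), []
    for t in range(50):
        v = 0.9 * v + w * x[t]                    # leaky membrane integration
        s = SpikeFn.apply(v)
        v = v * (1.0 - s)                         # reset membrane on spike
        spikes.append(s)
    loss = (torch.stack(spikes).sum() - 5.0) ** 2 # train toward 5 spikes total
    opt.zero_grad()
    loss.backward()
    opt.step()
```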
arXiv Detail & Related papers (2022-03-02T21:34:43Z)
- Dynamic Neural Diversification: Path to Computationally Sustainable Neural Networks [68.8204255655161]
Small neural networks with a constrained number of trainable parameters can be suitable, resource-efficient candidates for many simple tasks.
We explore the diversity of the neurons within the hidden layer during the learning process.
We analyze how the diversity of the neurons affects predictions of the model.
arXiv Detail & Related papers (2021-09-20T15:12:16Z)
- Towards Evaluating and Training Verifiably Robust Neural Networks [81.39994285743555]
We study the relationship between IBP and CROWN, and prove that CROWN is always tighter than IBP when choosing appropriate bounding lines.
We propose a relaxed version of CROWN, linear bound propagation (LBP), that can be used to verify large networks to obtain lower verified errors.
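A minimal sketch of interval bound propagation through one affine + ReLU layer, the baseline that CROWN and LBP tighten; CROWN's linear bounding lines are not reproduced here. Shapes and names are illustrative.

```python
import numpy as np

def ibp_affine_relu(W, b, lo, hi):
    # Propagate an input box [lo, hi] elementwise through y = relu(W x + b).
    W_pos, W_neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
    y_lo = W_pos @ lo + W_neg @ hi + b   # lower bound pairs lo with W >= 0
    y_hi = W_pos @ hi + W_neg @ lo + b   # upper bound pairs hi with W >= 0
    return np.maximum(y_lo, 0.0), np.maximum(y_hi, 0.0)

rng = np.random.default_rng(0)
W, b = rng.standard_normal((5, 3)), rng.standard_normal(5)
x, eps = rng.standard_normal(3), 0.1
lo, hi = ibp_affine_relu(W, b, x - eps, x + eps)  # bounds over an L-inf ball
```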
arXiv Detail & Related papers (2021-04-01T13:03:48Z)
- Predictive Coding Can Do Exact Backpropagation on Any Neural Network [40.51949948934705]
We generalize (IL and) Z-IL by directly defining them on computational graphs.
This is the first biologically plausible algorithm that is shown to be equivalent to BP in the way of updating parameters on any neural network.
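A bare-bones inference-learning sketch of the predictive-coding family this paper generalizes: activities relax to minimize layer-wise prediction errors, then weights update locally from the settled errors. The exact Z-IL scheduling that yields equivalence to BP is not reproduced; all names are illustrative.

```python
import numpy as np

def pc_step(Ws, x_in, target, T=20, dt=0.1, lr=0.01):
    # Feedforward sweep initializes activities; output is clamped to the label.
    xs = [x_in]
    for W in Ws:
        xs.append(np.tanh(W @ xs[-1]))
    xs[-1] = target
    # Inference: relax hidden activities by gradient descent on the energy
    # F = sum_l ||x_{l+1} - tanh(W_l x_l)||^2 / 2.
    for _ in range(T):
        eps = [xs[l + 1] - np.tanh(Ws[l] @ xs[l]) for l in range(len(Ws))]
        for l in range(1, len(xs) - 1):
            pred = np.tanh(Ws[l] @ xs[l])
            xs[l] += dt * (-eps[l - 1] + Ws[l].T @ (eps[l] * (1 - pred ** 2)))
    # Learning: local, Hebbian-like weight updates from the settled errors.
    for l in range(len(Ws)):
        pred = np.tanh(Ws[l] @ xs[l])
        Ws[l] = Ws[l] + lr * np.outer((xs[l + 1] - pred) * (1 - pred ** 2), xs[l])
    return Ws

rng = np.random.default_rng(0)
Ws = [0.1 * rng.standard_normal((8, 4)), 0.1 * rng.standard_normal((2, 8))]
Ws = pc_step(Ws, rng.standard_normal(4), np.array([1.0, -1.0]))
```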
arXiv Detail & Related papers (2021-03-08T11:52:51Z)
- Rectified Linear Postsynaptic Potential Function for Backpropagation in Deep Spiking Neural Networks [55.0627904986664]
Spiking Neural Networks (SNNs) use temporal spike patterns to represent and transmit information, which is not only biologically realistic but also suitable for ultra-low-power event-driven neuromorphic implementation.
This paper investigates the contribution of spike timing dynamics to information encoding, synaptic plasticity and decision making, providing a new perspective on the design of future deep SNNs and neuromorphic hardware systems.
arXiv Detail & Related papers (2020-03-26T11:13:07Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.