Bio-inspired Machine Learning: programmed death and replication
- URL: http://arxiv.org/abs/2207.04886v1
- Date: Thu, 30 Jun 2022 10:44:12 GMT
- Title: Bio-inspired Machine Learning: programmed death and replication
- Authors: Andrey Grabovsky and Vitaly Vanchurin
- Abstract summary: We develop machine learning algorithms for adding neurons to the system and removing neurons from the system.
We argue that the programmed death algorithm can be used for compression of neural networks and the replication algorithm for improving the performance of already trained neural networks.
The computational advantages of the bio-inspired algorithms are demonstrated by training feedforward neural networks on the MNIST dataset.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: We analyze algorithmic and computational aspects of biological phenomena,
such as replication and programmed death, in the context of machine learning.
We use two different measures of neuron efficiency to develop machine learning
algorithms for adding neurons to the system (i.e. replication algorithm) and
removing neurons from the system (i.e. programmed death algorithm). We argue
that the programmed death algorithm can be used for compression of neural
networks and the replication algorithm for improving the performance of
already trained neural networks. We also show that a combined algorithm of
programmed death and replication can improve the learning efficiency of
arbitrary machine learning systems. The computational advantages of the
bio-inspired algorithms are demonstrated by training feedforward neural
networks on the MNIST dataset of handwritten images.
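The two operations can be sketched on a toy feedforward layer. The efficiency measure below (mean absolute outgoing weight) is an illustrative stand-in, not one of the paper's measures, and the halving trick that makes replication function-preserving is likewise an assumption for this sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy layer pair: x -> h = relu(W1 @ x) -> y = W2 @ h
W1 = rng.normal(size=(8, 4))   # 8 hidden neurons, 4 inputs
W2 = rng.normal(size=(3, 8))   # 3 outputs

def efficiency(W2):
    # Stand-in efficiency measure: mean |outgoing weight| per hidden neuron.
    return np.abs(W2).mean(axis=0)

def programmed_death(W1, W2, k):
    """Remove the k least efficient hidden neurons (network compression)."""
    keep = np.sort(np.argsort(efficiency(W2))[k:])
    return W1[keep], W2[:, keep]

def replicate(W1, W2, k):
    """Duplicate the k most efficient hidden neurons, halving their outgoing
    weights so the layer's function is unchanged at the moment of replication."""
    top = np.argsort(efficiency(W2))[-k:]
    W2 = W2.copy()
    W2[:, top] *= 0.5
    return np.vstack([W1, W1[top]]), np.hstack([W2, W2[:, top]])

W1d, W2d = programmed_death(W1, W2, k=2)   # 8 -> 6 hidden neurons
W1r, W2r = replicate(W1, W2, k=2)          # 8 -> 10 hidden neurons
print(W1d.shape, W1r.shape)
```

After replication the widened layer computes exactly the same function, so subsequent training can only improve on the already trained network.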
Related papers
- Analysis of Generalized Hebbian Learning Algorithm for Neuromorphic Hardware Using SpiNNaker
We show the application of the Generalized Hebbian Algorithm (GHA) in large-scale neuromorphic platforms, specifically SpiNNaker.
Our results demonstrate significant improvements in classification accuracy, showcasing the potential of biologically inspired learning algorithms.
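As a sketch of the GHA itself (Sanger's rule, not the SpiNNaker implementation), the local update ΔW = η(yxᵀ − lower(yyᵀ)W) drives the rows of W toward the leading principal directions of the input; data and rates below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Inputs whose principal directions are the coordinate axes (variances 9, 1, 0.09)
X = rng.normal(size=(2000, 3)) * np.array([3.0, 1.0, 0.3])

W = 0.1 * rng.normal(size=(2, 3))   # 2 output neurons, 3 inputs
eta = 1e-3
for _ in range(3):                  # a few passes over the data
    for x in X:
        y = W @ x
        # Sanger's rule: Hebbian term minus a lower-triangular decorrelation term
        W += eta * (np.outer(y, x) - np.tril(np.outer(y, y)) @ W)

print(np.round(W, 2))   # rows approach +/- the leading principal directions
```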
arXiv Detail & Related papers (2024-11-18T13:53:10Z)
- Towards Biologically Plausible Computing: A Comprehensive Comparison
Backpropagation is a cornerstone algorithm in training neural networks for supervised learning.
The biological plausibility of backpropagation is questioned due to its requirements for weight symmetry, global error computation, and dual-phase training.
In this study, we establish criteria for biological plausibility that a desirable learning algorithm should meet.
arXiv Detail & Related papers (2024-06-23T09:51:20Z)
- Mechanistic Neural Networks for Scientific Machine Learning
We present Mechanistic Neural Networks, a neural network design for machine learning applications in the sciences.
It incorporates a new Mechanistic Block in standard architectures to explicitly learn governing differential equations as representations.
Central to our approach is a novel Relaxed Linear Programming solver (NeuRLP) inspired by a technique that reduces solving linear ODEs to solving linear programs.
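NeuRLP is a specialized, differentiable solver; as a hedged illustration of the underlying reduction only, a discretized linear ODE can be posed as the feasibility constraints of a linear program (here solved with SciPy's generic `linprog`, which the paper does not use):

```python
import numpy as np
from scipy.optimize import linprog

# Solve y' = -y, y(0) = 1 on [0, 1]: the forward-Euler recursion
# y_{k+1} = (1 - h) y_k becomes the equality constraints of an LP.
n = 20
h = 1.0 / n
A = np.zeros((n + 1, n + 1))
b = np.zeros(n + 1)
A[0, 0], b[0] = 1.0, 1.0                 # initial condition y_0 = 1
for k in range(n):
    A[k + 1, k] = -(1.0 - h)             # -(1 - h) * y_k
    A[k + 1, k + 1] = 1.0                # + y_{k+1} = 0
res = linprog(c=np.zeros(n + 1), A_eq=A, b_eq=b,
              bounds=[(None, None)] * (n + 1))
y = res.x
print(y[-1])   # close to exp(-1), up to Euler discretization error
```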
arXiv Detail & Related papers (2024-02-20T15:23:24Z)
- Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently achieves continual learning for spiking neural networks with nearly zero forgetting.
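The core projection idea can be sketched in plain linear algebra (without the paper's Hebbian extraction of the subspace, and with all sizes invented): gradients projected onto the orthogonal complement of the old tasks' input subspace cannot change old responses:

```python
import numpy as np

rng = np.random.default_rng(2)

W = rng.normal(size=(4, 6))        # a layer after training on task A
X_old = rng.normal(size=(6, 2))    # columns span task A's input directions

# Projector onto the orthogonal complement of the old input subspace
Q, _ = np.linalg.qr(X_old)
P = np.eye(6) - Q @ Q.T

y_old = W @ X_old                  # task-A responses we want to protect
G = rng.normal(size=(4, 6))        # a raw gradient computed on a new task

W_new = W - 0.1 * (G @ P)          # project the update before applying it
print(np.abs(W_new @ X_old - y_old).max())   # zero up to rounding
```

Since P annihilates every column of X_old, any number of projected updates leaves the old-task outputs exactly intact.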
arXiv Detail & Related papers (2024-02-19T09:29:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- The Clock and the Pizza: Two Stories in Mechanistic Explanation of Neural Networks
We show that algorithm discovery in neural networks is sometimes more complex than expected.
We show that even simple learning problems can admit a surprising diversity of solutions.
arXiv Detail & Related papers (2023-06-30T17:59:13Z)
- A Neural Lambda Calculus: Neurosymbolic AI meets the foundations of computing and functional programming
We analyze the ability of neural networks to learn how to execute programs as a whole, and introduce the use of integrated neural learning and calculi formalization.
arXiv Detail & Related papers (2023-04-18T20:30:16Z)
- The Predictive Forward-Forward Algorithm
We propose the predictive forward-forward (PFF) algorithm for conducting credit assignment in neural systems.
We design a novel, dynamic recurrent neural system that learns a directed generative circuit jointly and simultaneously with a representation circuit.
PFF efficiently learns to propagate learning signals and updates synapses with forward passes only.
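PFF builds on Hinton's forward-forward idea; the minimal sketch below shows only that underlying idea for a single layer, a local "goodness" objective trained with forward passes alone. The data, threshold, and learning rate are all invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

W = 0.1 * rng.normal(size=(10, 4))
theta, eta = 2.0, 0.03             # goodness threshold and learning rate

def goodness(x):
    h = np.maximum(W @ x, 0.0)
    return h, float((h ** 2).sum())

def ff_step(x, positive):
    """Local update: raise goodness above theta on positive data,
    push it below theta on negative data. No backward pass."""
    global W
    h, g = goodness(x)
    p = 1.0 / (1.0 + np.exp(-(g - theta)))    # P(x is positive)
    coef = (1.0 - p) if positive else -p      # log-loss gradient w.r.t. g
    W += eta * coef * 2.0 * np.outer(h, x)    # dg/dW = 2 h x^T (ReLU mask is in h)

u = np.array([2.0, -2.0, 2.0, -2.0])          # the "real" pattern
for _ in range(500):
    ff_step(u + 0.3 * rng.normal(size=4), positive=True)
    ff_step(rng.normal(size=4), positive=False)

print(goodness(u)[1] > goodness(rng.normal(size=4))[1])
```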
arXiv Detail & Related papers (2023-01-04T05:34:48Z)
- Predictive Coding: Towards a Future of Deep Learning beyond Backpropagation?
The backpropagation of error algorithm used to train deep neural networks has been fundamental to the successes of deep learning.
Recent work has developed predictive coding into a general-purpose algorithm able to train neural networks using only local computations.
We show the substantially greater flexibility of predictive coding networks against equivalent deep neural networks.
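A minimal sketch of that local-computation idea, using a one-layer generative model with all sizes and rates invented: the latent state settles by descending local prediction errors, and the weight update uses only the same local error:

```python
import numpy as np

rng = np.random.default_rng(4)

W = 0.1 * rng.normal(size=(8, 3))                            # generative weights: x ~ W z
data = rng.normal(size=(200, 3)) @ rng.normal(size=(3, 8))   # rank-3 inputs

def settle(W, x, steps=60):
    """Inference: relax the latent z using only the local error x - W z."""
    z = np.zeros(W.shape[1])
    for _ in range(steps):
        eps = x - W @ z
        z += 0.05 * (W.T @ eps - 0.01 * z)    # error drive plus a weak prior
    return z

def recon_err(W, X):
    return float(np.mean([np.linalg.norm(x - W @ settle(W, x)) for x in X]))

err_before = recon_err(W, data[:20])
for _ in range(5):                            # learning: local Hebbian-like updates
    for x in data:
        z = settle(W, x)
        W = W + 0.01 * np.outer(x - W @ z, z)
err_after = recon_err(W, data[:20])
print(err_before, err_after)                  # error drops as W learns the subspace
```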
arXiv Detail & Related papers (2022-02-18T22:57:03Z)
- Spiking Neural Networks Hardware Implementations and Challenges: a Survey
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
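A hedged sketch of the basic unit such hardware implements, a discrete-time leaky integrate-and-fire neuron, with all constants invented for illustration:

```python
# Discrete-time leaky integrate-and-fire (LIF) neuron, the basic
# building block most SNN hardware implements.
def lif(inputs, tau=10.0, v_th=1.0, v_reset=0.0):
    """Return the time steps at which the neuron spikes."""
    v, spikes = 0.0, []
    for t, i_t in enumerate(inputs):
        v += (i_t - v) / tau       # leaky integration toward the input
        if v >= v_th:              # threshold crossing: emit a spike, reset
            spikes.append(t)
            v = v_reset
    return spikes

spikes = lif([2.0] * 100)
print(spikes[:3])   # regular firing: [6, 13, 20]
```

A constant suprathreshold input yields regular firing, while a subthreshold input (e.g. 0.5 here) leaks away and never produces a spike — the event-driven sparsity that hardware designs exploit.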
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
- Self learning robot using real-time neural networks
This paper presents the development and experimental analysis of a neural network implemented on a robot with an arm.
The neural network learns using the algorithms of gradient descent and backpropagation.
Both the implementation and the training of the neural network are done locally on the robot, on a Raspberry Pi 3, so that its learning process is completely independent.
arXiv Detail & Related papers (2020-01-06T13:13:21Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.