Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming
- URL: http://arxiv.org/abs/2102.04312v1
- Date: Mon, 8 Feb 2021 16:17:15 GMT
- Title: Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming
- Authors: Henrik D. Mettler, Maximilian Schmidt, Walter Senn, Mihai A.
Petrovici, Jakob Jordan
- Abstract summary: We employ genetic programming to evolve biologically plausible human-interpretable plasticity rules.
We demonstrate that the evolved rules perform competitively with known hand-designed solutions.
- Score: 1.1980325577555802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate the search for phenomenological models of synaptic plasticity as
an optimization problem. We employ Cartesian genetic programming to evolve
biologically plausible human-interpretable plasticity rules that allow a given
network to successfully solve tasks from specific task families. While our
evolving-to-learn approach can be applied to various learning paradigms, here
we illustrate its power by evolving plasticity rules that allow a network to
efficiently determine the first principal component of its input distribution.
We demonstrate that the evolved rules perform competitively with known
hand-designed solutions. We explore how the statistical properties of the
datasets used during the evolutionary search influence the form of the
plasticity rules and discover new rules which are adapted to the structure of
the corresponding datasets.
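The best-known hand-designed rule for extracting the first principal component is Oja's rule, a plausible candidate for the "known hand-designed solutions" the evolved rules are compared against. The sketch below is illustrative and not code from the paper; the toy data and parameter values are assumptions.

```python
import random

def oja_update(w, x, eta=0.005):
    """One step of Oja's rule: dw = eta * y * (x - y * w), with y = w . x.
    The subtractive term keeps the weight vector bounded, so w converges
    to the unit-norm first principal component of the input distribution."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    return [wi + eta * y * (xi - y * wi) for wi, xi in zip(w, x)]

# Toy anisotropic 2-D data: more variance along the first axis, so the
# first principal component is close to (1, 0) up to sign.
random.seed(0)
w = [1.0, 1.0]
for _ in range(5000):
    x = [random.gauss(0.0, 2.0), random.gauss(0.0, 0.5)]
    w = oja_update(w, x)

norm = sum(wi * wi for wi in w) ** 0.5
print([round(wi / norm, 3) for wi in w])  # roughly (+/-1, 0)
```

Note that the update is local: it uses only the presynaptic input, the postsynaptic output, and the current weight, which is the kind of biologically plausible constraint the evolutionary search operates under.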
Related papers
- From Lazy to Rich: Exact Learning Dynamics in Deep Linear Networks [47.13391046553908]
In artificial networks, the effectiveness of these models relies on their ability to build task-specific representations.
Prior studies highlight that different initializations can place networks in either a lazy regime, where representations remain static, or a rich/feature learning regime, where representations evolve dynamically.
The derived exact solutions capture the evolution of representations and the Neural Tangent Kernel across the spectrum from the rich to the lazy regime.
arXiv Detail & Related papers (2024-09-22T23:19:04Z) - LifeGPT: Topology-Agnostic Generative Pretrained Transformer Model for Cellular Automata [0.0]
We show that a decoder-only generative pretrained transformer (GPT) model can simulate Conway's Game of Life (Life) on a toroidal grid with no prior knowledge of the size of the grid.
Our results pave the path towards true universal computation within a large language model framework.
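For reference, the ground-truth dynamics such a model must learn to reproduce are straightforward to write directly. The sketch below (a plain implementation, unrelated to the paper's transformer model) shows one Game of Life update on a toroidal grid.

```python
def life_step(grid):
    """One Game of Life update on a toroidal (wrap-around) grid of 0s and 1s."""
    h, w = len(grid), len(grid[0])
    new = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            # Count the 8 neighbours with wrap-around (toroidal) indexing.
            n = sum(grid[(r + dr) % h][(c + dc) % w]
                    for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                    if (dr, dc) != (0, 0))
            # A cell is alive next step with exactly 3 live neighbours,
            # or with 2 if it is currently alive.
            new[r][c] = 1 if n == 3 or (grid[r][c] and n == 2) else 0
    return new

# A blinker (three live cells in a row) oscillates with period 2.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
print(life_step(life_step(blinker)) == blinker)  # True: period-2 oscillation
```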
arXiv Detail & Related papers (2024-09-03T11:43:16Z) - Network bottlenecks and task structure control the evolution of interpretable learning rules in a foraging agent [0.0]
We study meta-learning via evolutionary optimization of simple reward-modulated plasticity rules in embodied agents.
We show that unconstrained meta-learning leads to the emergence of diverse plasticity rules.
Our findings indicate that the meta-learning of plasticity rules is very sensitive to various parameters, with this sensitivity possibly reflected in the learning rules found in biological networks.
arXiv Detail & Related papers (2024-03-20T14:57:02Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural Networks [0.0]
We introduce an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand.
We find learning rules that successfully solve an XOR and cart-pole task, and discover new learning rules that outperform the baseline rules from literature.
arXiv Detail & Related papers (2022-02-24T19:07:23Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised Learning [0.0]
Spiking neural networks (SNNs) are considered a promising basis for performing all kinds of learning tasks: unsupervised, supervised, and reinforcement learning.
Learning in SNNs is implemented through synaptic plasticity: the rules that determine the dynamics of synaptic weights, usually depending on the activity of the pre- and postsynaptic neurons.
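A classic example of such an activity-dependent rule is pair-based spike-timing-dependent plasticity (STDP). The sketch below is a generic textbook form, not the specific model proposed in the paper, and the parameter values are illustrative.

```python
import math

def stdp_dw(delta_t, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP: weight change as a function of the spike-time
    difference delta_t = t_post - t_pre (milliseconds).
    Pre-before-post pairings (delta_t > 0) potentiate the synapse;
    post-before-pre pairings depress it, decaying exponentially with |delta_t|."""
    if delta_t > 0:
        return a_plus * math.exp(-delta_t / tau)
    return -a_minus * math.exp(delta_t / tau)

print(stdp_dw(10.0) > 0)    # causal pairing strengthens the synapse
print(stdp_dw(-10.0) < 0)   # anti-causal pairing weakens it
```

The exponential windows make the rule local in time as well as in space: only spike pairs within a few tau of each other contribute appreciably to the weight change.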
arXiv Detail & Related papers (2021-11-12T15:26:52Z) - Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z) - Energy Decay Network (EDeN) [0.0]
The framework attempts to develop a genetic transfer of experience through potential structural expressions.
Successful routes are defined by stability of the spike distribution per epoch.
arXiv Detail & Related papers (2021-03-10T23:17:59Z) - Emergent Hand Morphology and Control from Optimizing Robust Grasps of
Diverse Objects [63.89096733478149]
We introduce a data-driven approach where effective hand designs naturally emerge for the purpose of grasping diverse objects.
We develop a novel Bayesian Optimization algorithm that efficiently co-designs the morphology and grasping skills jointly.
We demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.
arXiv Detail & Related papers (2020-12-22T17:52:29Z) - Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges, which reflect the magnitudes of the connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and adapts to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.