Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming
- URL: http://arxiv.org/abs/2102.04312v1
- Date: Mon, 8 Feb 2021 16:17:15 GMT
- Title: Evolving Neuronal Plasticity Rules using Cartesian Genetic Programming
- Authors: Henrik D. Mettler, Maximilian Schmidt, Walter Senn, Mihai A.
Petrovici, Jakob Jordan
- Abstract summary: We employ genetic programming to evolve biologically plausible, human-interpretable plasticity rules.
We demonstrate that the evolved rules perform competitively with known hand-designed solutions.
- Score: 1.1980325577555802
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We formulate the search for phenomenological models of synaptic plasticity as
an optimization problem. We employ Cartesian genetic programming to evolve
biologically plausible, human-interpretable plasticity rules that allow a given
network to successfully solve tasks from specific task families. While our
evolving-to-learn approach can be applied to various learning paradigms, here
we illustrate its power by evolving plasticity rules that allow a network to
efficiently determine the first principal component of its input distribution.
We demonstrate that the evolved rules perform competitively with known
hand-designed solutions. We explore how the statistical properties of the
datasets used during the evolutionary search influence the form of the
plasticity rules and discover new rules that are adapted to the structure of
the corresponding datasets.
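The classic hand-designed solution for extracting the first principal component with a local plasticity rule is Oja's rule, which is the kind of baseline the evolved rules are compared against. A minimal NumPy sketch (illustrative covariance and learning rate chosen here; not the paper's evolved rule):

```python
import numpy as np

rng = np.random.default_rng(0)

# Zero-mean input distribution with one dominant direction.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
X = rng.multivariate_normal(np.zeros(2), cov, size=20000)

# Oja's rule: dw = eta * y * (x - y * w), with postsynaptic output y = w . x.
# The -y^2 * w term keeps the weight vector bounded (approximately unit norm).
w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x
    w += eta * y * (x - y * w)

# The weights converge to the leading eigenvector of the input covariance.
pc1 = np.linalg.eigh(cov)[1][:, -1]  # eigenvector of the largest eigenvalue
alignment = abs(w @ pc1) / np.linalg.norm(w)
```

After training, `alignment` is close to 1 and `w` has roughly unit norm, i.e. the purely local update has recovered the first principal component without any explicit eigendecomposition.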
Related papers
- Network bottlenecks and task structure control the evolution of interpretable learning rules in a foraging agent [0.0]
We study meta-learning via evolutionary optimization of simple reward-modulated plasticity rules in embodied agents.
We show that unconstrained meta-learning leads to the emergence of diverse plasticity rules.
Our findings indicate that the meta-learning of plasticity rules is very sensitive to various parameters, with this sensitivity possibly reflected in the learning rules found in biological networks.
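A common template for the reward-modulated plasticity rules meta-learned in such studies is a Hebbian update gated by a scalar reward signal. A minimal sketch of this generic form (hypothetical function and parameter names; not the paper's specific evolved rules):

```python
import numpy as np

def reward_modulated_hebbian(w, pre, post, reward, eta=0.1):
    """Generic three-factor rule: dw_ij = eta * reward * post_i * pre_j.

    The reward acts as a global modulatory factor that turns correlational
    (Hebbian) co-activity into a learning signal only when it was useful.
    """
    return w + eta * reward * np.outer(post, pre)

# One update: synapses between co-active pre/post pairs are potentiated
# when reward is positive and left unchanged where the presynaptic unit
# was silent.
w = reward_modulated_hebbian(
    np.zeros((2, 3)),
    pre=np.array([1.0, 0.0, 1.0]),
    post=np.array([1.0, 1.0]),
    reward=1.0,
)
```

Evolutionary meta-learning in this setting searches over the functional form of the update (which factors enter, and how), rather than over the weights themselves.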
arXiv Detail & Related papers (2024-03-20T14:57:02Z)
- Learnable Topological Features for Phylogenetic Inference via Graph Neural Networks [7.310488568715925]
We propose a novel structural representation method for phylogenetic inference based on learnable topological features.
By combining the raw node features that minimize the Dirichlet energy with modern graph representation learning techniques, our learnable topological features can provide efficient structural information of phylogenetic trees.
arXiv Detail & Related papers (2023-02-17T12:26:03Z)
- Evolving-to-Learn Reinforcement Learning Tasks with Spiking Neural Networks [0.0]
We introduce an evolutionary algorithm that evolves suitable synaptic plasticity rules for the task at hand.
We find learning rules that successfully solve an XOR and cart-pole task, and discover new learning rules that outperform the baseline rules from literature.
arXiv Detail & Related papers (2022-02-24T19:07:23Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- A Spiking Neuron Synaptic Plasticity Model Optimized for Unsupervised Learning [0.0]
Spiking neural networks (SNNs) are considered a promising basis for performing all kinds of learning tasks: unsupervised, supervised, and reinforcement learning.
Learning in SNNs is implemented through synaptic plasticity: the rules that determine the dynamics of synaptic weights, which usually depend on the activity of the pre- and postsynaptic neurons.
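Such pre/post-activity-dependent plasticity is often illustrated by pair-based spike-timing-dependent plasticity (STDP). A minimal sketch of this generic rule (illustrative amplitudes and time constant; not the specific model optimized in the paper):

```python
import numpy as np

def stdp_dw(dt, a_plus=0.01, a_minus=0.012, tau=20.0):
    """Pair-based STDP weight change for spike-time difference
    dt = t_post - t_pre (milliseconds)."""
    if dt > 0:
        # Pre fires before post: causal pairing, potentiation.
        return a_plus * np.exp(-dt / tau)
    # Post fires before pre: anti-causal pairing, depression.
    return -a_minus * np.exp(dt / tau)
```

For example, `stdp_dw(5.0)` is positive (the presynaptic spike plausibly contributed to the postsynaptic one) while `stdp_dw(-5.0)` is negative; both effects decay exponentially with the spike-time difference.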
arXiv Detail & Related papers (2021-11-12T15:26:52Z)
- Analogous to Evolutionary Algorithm: Designing a Unified Sequence Model [58.17021225930069]
We explain the rationality of the Vision Transformer by analogy with the proven, practical Evolutionary Algorithm (EA).
We propose a more efficient EAT model, and design task-related heads to deal with different tasks more flexibly.
Our approach achieves state-of-the-art results on the ImageNet classification task compared with recent vision transformer works.
arXiv Detail & Related papers (2021-05-31T16:20:03Z)
- Epigenetic evolution of deep convolutional models [81.21462458089142]
We build upon a previously proposed neuroevolution framework to evolve deep convolutional models.
We propose a convolutional layer layout which allows kernels of different shapes and sizes to coexist within the same layer.
The proposed layout enables the size and shape of individual kernels within a convolutional layer to be evolved with a corresponding new mutation operator.
arXiv Detail & Related papers (2021-04-12T12:45:16Z)
- Energy Decay Network (EDeN) [0.0]
The framework attempts to develop a genetic transfer of experience through potential structural expressions.
Successful routes are defined by stability of the spike distribution per epoch.
arXiv Detail & Related papers (2021-03-10T23:17:59Z)
- Emergent Hand Morphology and Control from Optimizing Robust Grasps of Diverse Objects [63.89096733478149]
We introduce a data-driven approach where effective hand designs naturally emerge for the purpose of grasping diverse objects.
We develop a novel Bayesian Optimization algorithm that efficiently co-designs the morphology and grasping skills jointly.
We demonstrate the effectiveness of our approach in discovering robust and cost-efficient hand morphologies for grasping novel objects.
arXiv Detail & Related papers (2020-12-22T17:52:29Z)
- Learning Connectivity of Neural Networks from a Topological Perspective [80.35103711638548]
We propose a topological perspective to represent a network into a complete graph for analysis.
By assigning learnable parameters to the edges which reflect the magnitude of connections, the learning process can be performed in a differentiable manner.
This learning process is compatible with existing networks and owns adaptability to larger search spaces and different tasks.
arXiv Detail & Related papers (2020-08-19T04:53:31Z)
- Rewriting a Deep Generative Model [56.91974064348137]
We introduce a new problem setting: manipulation of specific rules encoded by a deep generative model.
We propose a formulation in which the desired rule is changed by manipulating a layer of a deep network as a linear associative memory.
We present a user interface to enable users to interactively change the rules of a generative model to achieve desired effects.
arXiv Detail & Related papers (2020-07-30T17:58:16Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.