Optimizing Genetically-Driven Synaptogenesis
- URL: http://arxiv.org/abs/2402.07242v1
- Date: Sun, 11 Feb 2024 16:49:12 GMT
- Title: Optimizing Genetically-Driven Synaptogenesis
- Authors: Tommaso Boccato, Matteo Ferrante, Nicola Toschi
- Abstract summary: We introduce SynaptoGen, a novel framework that aims to bridge the gap between genetic manipulations and neuronal network behavior.
To validate SynaptoGen, we conduct a preliminary experiment using reinforcement learning as a benchmark learning framework.
The results highlight the potential of SynaptoGen to inspire further advancements in neuroscience and computational modeling.
- Score: 0.13812010983144798
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: In this paper we introduce SynaptoGen, a novel framework that aims to bridge
the gap between genetic manipulations and neuronal network behavior by
simulating synaptogenesis and guiding the development of neuronal networks
capable of solving predetermined computational tasks. Drawing inspiration from
recent advancements in the field, we propose SynaptoGen as a bio-plausible
approach to modeling synaptogenesis through differentiable functions. To
validate SynaptoGen, we conduct a preliminary experiment using reinforcement
learning as a benchmark learning framework, demonstrating its effectiveness in
generating neuronal networks capable of solving the OpenAI Gym's Cart Pole
task, compared to carefully designed baselines. The results highlight the
potential of SynaptoGen to inspire further advancements in neuroscience and
computational modeling, while also acknowledging the need for incorporating
more realistic genetic rules and synaptic conductances in future research.
Overall, SynaptoGen represents a promising avenue for exploring the
intersection of genetics, neuroscience, and artificial intelligence.
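To make the abstract's pipeline concrete, here is a minimal, illustrative sketch: synaptic weights are produced by a differentiable function of "gene expression" matrices and a gene-interaction rule matrix, and the resulting network is trained with a simple policy-gradient method on Cart Pole. The factorization W = X_pre · R · X_postᵀ, the layer sizes, and the use of PyTorch with Gymnasium's CartPole-v1 are assumptions for illustration only, not the formulation or code used in the paper.

```python
# Illustrative sketch only: weights are a differentiable function of assumed
# "gene expression" matrices (X_*) and a gene-interaction rule matrix (R);
# this factorization is NOT the paper's exact formulation.
import gymnasium as gym
import torch
import torch.nn.functional as F

n_obs, n_hidden, n_act, n_genes = 4, 16, 2, 8

# Trainable "genetic" parameters (leaf tensors so the optimizer can update them).
X_in = torch.randn(n_obs, n_genes, requires_grad=True)
X_hid = torch.randn(n_hidden, n_genes, requires_grad=True)
X_out = torch.randn(n_act, n_genes, requires_grad=True)
R = torch.randn(n_genes, n_genes, requires_grad=True)

def synapses(x_pre, x_post):
    # Differentiable "synaptogenesis": connection strengths emerge from
    # interactions between pre- and post-synaptic gene-expression profiles.
    return x_pre @ R @ x_post.T

def policy(obs):
    w1 = synapses(X_in, X_hid)        # (n_obs, n_hidden)
    w2 = synapses(X_hid, X_out)       # (n_hidden, n_act)
    h = torch.tanh(obs @ w1)
    return F.softmax(h @ w2, dim=-1)  # action probabilities

env = gym.make("CartPole-v1")
opt = torch.optim.Adam([X_in, X_hid, X_out, R], lr=1e-2)

for episode in range(300):
    obs, _ = env.reset()
    log_probs, rewards, done = [], [], False
    while not done:
        probs = policy(torch.as_tensor(obs, dtype=torch.float32))
        dist = torch.distributions.Categorical(probs)
        action = dist.sample()
        log_probs.append(dist.log_prob(action))
        obs, reward, terminated, truncated, _ = env.step(action.item())
        rewards.append(reward)
        done = terminated or truncated

    # REINFORCE update: gradients act on the genetic parameters, not on the
    # synaptic weights directly.
    returns, g = [], 0.0
    for r in reversed(rewards):
        g = r + 0.99 * g
        returns.insert(0, g)
    returns = torch.tensor(returns)
    returns = (returns - returns.mean()) / (returns.std() + 1e-8)
    loss = -(torch.stack(log_probs) * returns).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The point of the sketch is only that the policy's weights are never free parameters: gradients flow through the synaptogenesis function into the genetic variables, which is the bridge between genetic manipulation and network behavior that the abstract describes.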
Related papers
- GaNDLF-Synth: A Framework to Democratize Generative AI for (Bio)Medical Imaging [0.36638033546156024]
Generative Artificial Intelligence (GenAI) is a field of AI that creates new data samples from existing ones.
This paper explores the background and motivation for GenAI, and introduces the Generally Nuanced Deep Learning Framework for Synthesis (GaNDLF-Synth).
GaNDLF-Synth describes a unified abstraction for various algorithms, including autoencoders, generative adversarial networks, and synthesis models.
arXiv Detail & Related papers (2024-09-30T19:25:01Z)
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset of spontaneous behaviors produced by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Evolving spiking neuron cellular automata and networks to emulate in vitro neuronal activity [0.0]
We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony.
The genomes of the top-performing models indicate the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
arXiv Detail & Related papers (2021-10-15T17:55:04Z)
- Complexity-based speciation and genotype representation for neuroevolution [81.21462458089142]
This paper introduces a speciation principle for neuroevolution where evolving networks are grouped into species based on the number of hidden neurons.
The proposed speciation principle is employed in several techniques designed to promote and preserve diversity within species and in the ecosystem as a whole.
arXiv Detail & Related papers (2020-10-11T06:26:56Z)
- A multi-agent model for growing spiking neural networks [0.0]
This project has explored rules for growing the connections between the neurons in Spiking Neural Networks as a learning mechanism.
Results in a simulation environment showed that for a given set of parameters it is possible to reach topologies that reproduce the tested functions.
This project also opens the door to the usage of techniques like genetic algorithms for obtaining the best suited values for the model parameters.
arXiv Detail & Related papers (2020-09-21T15:11:29Z)
- Equilibrium Propagation for Complete Directed Neural Networks [0.0]
The most successful learning algorithm for artificial neural networks, backpropagation, is considered biologically implausible.
We contribute to the topic of biologically plausible neuronal learning by building upon and extending the equilibrium propagation learning framework.
arXiv Detail & Related papers (2020-06-15T22:12:30Z)
This list is automatically generated from the titles and abstracts of the papers on this site.