Evolving spiking neuron cellular automata and networks to emulate in
vitro neuronal activity
- URL: http://arxiv.org/abs/2110.08242v1
- Date: Fri, 15 Oct 2021 17:55:04 GMT
- Title: Evolving spiking neuron cellular automata and networks to emulate in
vitro neuronal activity
- Authors: J{\o}rgen Jensen Farner, H{\aa}kon Weydahl, Ruben Jahren, Ola Huse
Ramstad, Stefano Nichele, Kristine Heiney
- Abstract summary: We produce spiking neural systems that emulate the patterns of behavior of biological neurons in vitro.
Our models were able to produce a level of network-wide synchrony.
The genomes of the top-performing models indicate that the excitability and density of connections in the model play an important role in determining the complexity of the produced activity.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuro-inspired models and systems have great potential for applications in
unconventional computing. Often, the mechanisms of biological neurons are
modeled or mimicked in simulated or physical systems in an attempt to harness
some of the computational power of the brain. However, the biological
mechanisms at play in neural systems are complicated and challenging to capture
and engineer; thus, it can be simpler to turn to a data-driven approach to
transfer features of neural behavior to artificial substrates. In the present
study, we used an evolutionary algorithm (EA) to produce spiking neural systems
that emulate the patterns of behavior of biological neurons in vitro. The aim
of this approach was to develop a method of producing models capable of
exhibiting complex behavior that may be suitable for use as computational
substrates. Our models were able to produce a level of network-wide synchrony
and showed a range of behaviors depending on the target data used for their
evolution, which was from a range of neuronal culture densities and maturities.
The genomes of the top-performing models indicate that the excitability and
density of connections in the model play an important role in determining the
complexity of the produced activity.
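The evolutionary approach described in the abstract can be illustrated with a minimal sketch. Everything here is hypothetical and chosen only for illustration (the paper's actual genome encoding, automaton rules, and fitness measure differ): a genome encodes the excitability (spiking threshold) and connection density of a toy spiking cellular automaton, and fitness rewards matching a target mean firing rate such as one measured from an in vitro culture.

```python
import random

GRID = 12          # toy lattice of GRID x GRID spiking cells
STEPS = 30         # simulation steps per fitness evaluation
TARGET_RATE = 0.2  # hypothetical target mean firing rate from in vitro data

def simulate(genome):
    """Run a toy spiking cellular automaton and return its mean firing rate.

    genome = (threshold, density): a cell spikes when the number of spiking
    neighbours it is connected to reaches `threshold`; `density` is the
    probability that any given neighbour link exists.
    """
    threshold, density = genome
    rng = random.Random(0)  # fixed seed so fitness is deterministic
    state = [[rng.random() < 0.1 for _ in range(GRID)] for _ in range(GRID)]
    total_spikes = 0
    for _ in range(STEPS):
        nxt = [[False] * GRID for _ in range(GRID)]
        for i in range(GRID):
            for j in range(GRID):
                # count spiking neighbours reached through existing links
                active = sum(
                    state[(i + di) % GRID][(j + dj) % GRID]
                    and rng.random() < density
                    for di in (-1, 0, 1) for dj in (-1, 0, 1)
                    if (di, dj) != (0, 0)
                )
                nxt[i][j] = active >= threshold
        state = nxt
        total_spikes += sum(map(sum, state))
    return total_spikes / (GRID * GRID * STEPS)

def fitness(genome):
    # closer to the target firing rate is better (fitness is <= 0)
    return -abs(simulate(genome) - TARGET_RATE)

def evolve(generations=8, pop_size=10):
    rng = random.Random(42)
    pop = [(rng.randint(1, 4), rng.random()) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 2]
        # refill the population by mutating elites: jitter threshold and density
        pop = elite + [
            (max(1, g[0] + rng.choice((-1, 0, 1))),
             min(1.0, max(0.0, g[1] + rng.gauss(0, 0.1))))
            for g in rng.choices(elite, k=pop_size - len(elite))
        ]
    return max(pop, key=fitness)

best = evolve()
```

In this toy setup the evolved genome directly exposes the two quantities the abstract highlights, excitability (the threshold) and connection density, so selection pressure toward the target activity shapes both.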
Related papers
- Exploring Biological Neuronal Correlations with Quantum Generative Models [0.0]
We introduce a quantum generative model framework for generating synthetic data that captures the spatial and temporal correlations of biological neuronal activity.
Our model demonstrates the ability to achieve reliable outcomes with fewer trainable parameters compared to classical methods.
arXiv Detail & Related papers (2024-09-13T18:00:06Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Neuroformer: Multimodal and Multitask Generative Pretraining for Brain Data [3.46029409929709]
State-of-the-art systems neuroscience experiments yield large-scale multimodal data, and these data sets require new tools for analysis.
Inspired by the success of large pretrained models in vision and language domains, we reframe the analysis of large-scale, cellular-resolution neuronal spiking data into an autoregressive generation problem.
We first trained Neuroformer on simulated datasets, and found that it accurately predicted simulated neuronal circuit activity and inferred the underlying neural circuit connectivity, including its direction.
arXiv Detail & Related papers (2023-10-31T20:17:32Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Spatiotemporal Patterns in Neurobiology: An Overview for Future Artificial Intelligence [0.0]
We argue that computational models are key tools for elucidating possible functionalities that emerge from network interactions.
Here we review several classes of models, including spiking neurons and integrate-and-fire neurons.
We hope these studies will inform future developments in artificial intelligence algorithms as well as help validate our understanding of brain processes.
arXiv Detail & Related papers (2022-03-29T10:28:01Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in a 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on Spiking Neural Networks (SNNs), which emulate the neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
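Several of the entries above build on integrate-and-fire neuron models. A minimal leaky integrate-and-fire (LIF) sketch, with parameter values chosen purely for illustration rather than taken from any of the papers listed:

```python
def lif_trace(inputs, tau=20.0, v_rest=0.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Leaky integrate-and-fire neuron: the membrane potential decays toward
    v_rest, integrates the input current, and emits a spike (then resets)
    whenever it reaches v_thresh. Returns a 0/1 spike train."""
    v = v_rest
    spikes = []
    for current in inputs:
        v += dt * ((v_rest - v) / tau + current)  # leak term plus input drive
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# Constant drive: the neuron charges toward threshold, fires, and resets,
# producing a regular spike train.
train = lif_trace([0.1] * 50)
```

Homeostatic variants such as the MPATH model cited above additionally adapt quantities like the threshold over time; in this sketch that would amount to making `v_thresh` a state variable updated after each spike.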
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.