Sketch of a novel approach to a neural model
- URL: http://arxiv.org/abs/2209.06865v1
- Date: Wed, 14 Sep 2022 18:28:39 GMT
- Title: Sketch of a novel approach to a neural model
- Authors: Gabriele Scheler
- Abstract summary: We lay out a novel model of neuroplasticity in the form of a horizontal-vertical integration model of neural processing.
We believe a new approach to neural modeling will benefit the 3rd wave of AI.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we lay out a novel model of neuroplasticity in the form of a
horizontal-vertical integration model of neural processing. We believe a new
approach to neural modeling will benefit the 3rd wave of AI. The horizontal
plane consists of an adaptive network of neurons connected by transmission
links which generates spatio-temporal spike patterns. This fits with standard
computational neuroscience approaches. Additionally, for each individual neuron
there is a vertical part consisting of internal adaptive parameters steering
the external membrane-expressed parameters which are involved in neural
transmission. Each neuron has a vertical modular system of parameters
corresponding to (a) external parameters at the membrane layer, divided into
compartments (spines, boutons), (b) internal parameters in the submembrane zone
and the cytoplasm with its protein signaling network, and (c) core parameters in
the nucleus for genetic and epigenetic information. In such models, each node
(=neuron) in the horizontal network has its own internal memory. Neural
transmission and information storage are systematically separated, an important
conceptual advance over synaptic weight models. We discuss the membrane-based
(external) filtering and selection of outside signals for processing vs. signal
loss by fast fluctuations and the neuron-internal computing strategies from
intracellular protein signaling to the nucleus as the core system. We want to
show that the individual neuron has an important role in the computation of
signals and that many assumptions derived from the synaptic weight adjustment
hypothesis of memory may not hold in a real brain. Not every transmission event
leaves a trace, and the neuron is a self-programming device rather than one
passively determined by its current input. Ultimately we strive to build a flexible
memory system that processes facts and events automatically.
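As a reading aid, here is a minimal, illustrative sketch of the separation the abstract describes (not from the paper; all names, thresholds, and update rules are assumptions): membrane-level parameters filter and transmit signals, an internal signaling state accumulates activity, and writes to a slow core store happen only through a separate, internally gated step.

```python
import numpy as np

class HorizontalVerticalNeuron:
    """Toy neuron with a vertical stack of parameter levels (illustrative only).

    membrane : per-compartment gains at spines/boutons (external parameters)
    signaling: submembrane/cytoplasmic state integrating recent activity (internal)
    core     : slow "nuclear" store, written only when the neuron commits a trace
    """

    def __init__(self, n_compartments: int, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.membrane = rng.uniform(0.5, 1.5, n_compartments)
        self.signaling = np.zeros(n_compartments)
        self.core = np.zeros(n_compartments)

    def transmit(self, inputs: np.ndarray) -> bool:
        """Membrane-level filtering and transmission; fast, small fluctuations
        are dropped and leave no trace -- transmission is separated from storage."""
        filtered = np.where(np.abs(inputs) > 0.1, inputs, 0.0)
        self.signaling = 0.9 * self.signaling + 0.1 * filtered  # internal integration
        return bool(self.membrane @ filtered > 1.0)             # horizontal output event

    def consolidate(self) -> None:
        """Internally gated write to the core store ("self-programming"): only
        sustained signaling is committed, and the core then re-tunes the membrane."""
        sustained = np.abs(self.signaling) > 0.3
        self.core[sustained] += self.signaling[sustained]
        self.membrane = 1.0 + 0.1 * np.tanh(self.core)

rng = np.random.default_rng(1)
neuron = HorizontalVerticalNeuron(n_compartments=4)
spikes = sum(neuron.transmit(rng.normal(0, 0.5, 4)) for _ in range(50))
neuron.consolidate()
print(spikes, neuron.core)
```

The only point of the sketch is the separation of concerns: `transmit` never writes to `core`, and `consolidate` is driven by the neuron's own internal state rather than by individual transmission events.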
Related papers
- The Neuron as a Direct Data-Driven Controller [43.8450722109081]
This study extends the current normative models, which primarily optimize prediction, by conceptualizing neurons as optimal feedback controllers.
We model neurons as biologically feasible controllers which implicitly identify loop dynamics, infer latent states and optimize control.
Our model presents a significant departure from the traditional, feedforward, instant-response McCulloch-Pitts-Rosenblatt neuron, offering a novel and biologically-informed fundamental unit for constructing neural networks.
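A toy illustration of this control-theoretic reading (the scalar plant, setpoint, and recursive least-squares estimator below are assumptions of this sketch, not the paper's formulation): a single "neuron" identifies its loop dynamics online and chooses its output to drive the observed signal toward a setpoint.

```python
import numpy as np

# Toy loop: y[t+1] = a*y[t] + b*u[t] + noise. The "neuron" estimates (a, b) online
# by recursive least squares and picks its output u to drive y toward a setpoint.
rng = np.random.default_rng(0)
a_true, b_true, setpoint = 0.9, 0.5, 1.0
theta = np.zeros(2)            # running estimate of (a, b)
P = np.eye(2) * 100.0          # recursive-least-squares covariance
y, u = 0.0, 0.0

for t in range(200):
    y_next = a_true * y + b_true * u + rng.normal(0, 0.01)   # the loop responds
    phi = np.array([y, u])                                   # regressor
    k = P @ phi / (1.0 + phi @ P @ phi)                      # RLS gain
    theta = theta + k * (y_next - phi @ theta)               # update dynamics estimate
    P = P - np.outer(k, phi @ P)
    y = y_next
    a_hat, b_hat = theta
    # control step: choose u so the one-step prediction lands on the setpoint,
    # exploring randomly while the input gain is still unidentified
    u = (setpoint - a_hat * y) / b_hat if abs(b_hat) > 1e-3 else rng.normal(0, 1.0)

print(f"estimated (a, b) = ({theta[0]:.2f}, {theta[1]:.2f}); y = {y:.2f}")
```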
arXiv Detail & Related papers (2024-01-03T01:24:10Z)
- Joint Learning Neuronal Skeleton and Brain Circuit Topology with Permutation Invariant Encoders for Neuron Classification [33.47541392305739]
We propose the NeuNet framework, which combines morphological information of neurons obtained from skeletons with topological information between neurons obtained from the neural circuit.
We reprocess and release two new datasets for the neuron classification task from volume electron microscopy (VEM) images of human brain cortex and Drosophila brain.
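A minimal Deep-Sets-style sketch of a permutation-invariant encoder over a neuron's skeleton points combined with a pooled embedding of its circuit neighbours; the layer sizes, sum pooling, and concatenation-based fusion are illustrative assumptions, not the actual NeuNet architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(x, w1, w2):
    return np.maximum(0.0, x @ w1) @ w2          # one hidden ReLU layer

w1_pt, w2_pt = rng.normal(0, 0.1, (3, 32)), rng.normal(0, 0.1, (32, 16))
w1_nb, w2_nb = rng.normal(0, 0.1, (16, 32)), rng.normal(0, 0.1, (32, 16))
w_cls = rng.normal(0, 0.1, (32, 5))              # 5 neuron classes (arbitrary)

def encode_neuron(skeleton_xyz, neighbour_embeddings):
    # per-point encoding followed by sum pooling -> order of skeleton points is irrelevant
    skel_emb = mlp(skeleton_xyz, w1_pt, w2_pt).sum(axis=0)
    # the same trick over the (unordered) set of circuit neighbours
    nb_emb = mlp(neighbour_embeddings, w1_nb, w2_nb).sum(axis=0)
    return np.concatenate([skel_emb, nb_emb])

logits = encode_neuron(rng.normal(size=(120, 3)),        # 120 skeleton points (x, y, z)
                       rng.normal(size=(7, 16))) @ w_cls # 7 circuit neighbours
print("predicted class:", int(np.argmax(logits)))
```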
arXiv Detail & Related papers (2023-12-22T08:31:11Z)
- The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match a cortical neuron's input-output relationship with fewer than ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
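A much-simplified sketch in the spirit of a leaky-memory neuron: a vector of memory units with distinct decay time constants, a small nonlinearity of (input, memory), and a linear readout. The dimensions, decay values, and tanh update below are assumptions of this sketch, not the ELM paper's parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_mem = 8, 16
taus = np.logspace(0, 2, n_mem)              # per-unit timescales from 1 to 100 steps
lam = np.exp(-1.0 / taus)                    # corresponding decay factors
W_in = rng.normal(0, 0.3, (n_in + n_mem, n_mem))
w_out = rng.normal(0, 0.3, n_mem)

def elm_like_step(x, m):
    proposal = np.tanh(np.concatenate([x, m]) @ W_in)   # candidate memory update
    m = lam * m + (1.0 - lam) * proposal                # leaky mixing, per memory unit
    return m, w_out @ m                                 # new memory, scalar output

m = np.zeros(n_mem)
for t in range(300):                                    # a long input sequence
    m, y = elm_like_step(rng.normal(size=n_in), m)
print(f"output after 300 steps: {y:.3f}")
```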
arXiv Detail & Related papers (2023-06-14T13:34:13Z)
- Contrastive-Signal-Dependent Plasticity: Forward-Forward Learning of Spiking Neural Systems [73.18020682258606]
We develop a neuro-mimetic architecture, composed of spiking neuronal units, where individual layers of neurons operate in parallel.
We propose an event-based generalization of forward-forward learning, which we call contrastive-signal-dependent plasticity (CSDP).
Our experimental results on several pattern datasets demonstrate that the CSDP process works well for training a dynamic recurrent spiking network capable of both classification and reconstruction.
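A rate-based caricature of the forward-forward idea behind CSDP (the goodness function, threshold, and learning rate are assumptions of this sketch; the actual CSDP rule is event-based and spiking): a layer scores inputs by the sum of squared activities and nudges its weights locally so real patterns score above a threshold and corrupted ones below it.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (20, 10))     # one layer: 20 inputs -> 10 units
theta, lr = 5.0, 0.01                # goodness threshold and learning rate (assumed)

def goodness(x):
    h = np.maximum(0.0, x @ W)       # rectified activity, a stand-in for firing rates
    return h, np.sum(h ** 2)

def local_update(x, sign):
    """sign=+1 for real ("positive") data, -1 for corrupted ("negative") data."""
    global W
    h, g = goodness(x)
    # only the layer's own pre/post activity is used -- no backprop across layers
    if sign * (g - theta) < 0:       # positive still too low, or negative still too high
        W = W + sign * lr * np.outer(x, h)
    return g

pos = rng.normal(1.0, 0.5, 20)       # a "real" pattern
neg = rng.permutation(pos)           # a corrupted version of it
for _ in range(200):
    g_pos = local_update(pos, +1)
    g_neg = local_update(neg, -1)
print(f"goodness: positive {g_pos:.1f} vs negative {g_neg:.1f}")
```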
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- POPPINS: A Population-Based Digital Spiking Neuromorphic Processor with Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180 nm process technology with two hierarchical populations.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
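An illustrative integer-only quadratic integrate-and-fire update in the spirit of such a processor's neuron model; the bit shift, threshold, and reset values below are assumptions of this sketch, not the POPPINS hardware parameters.

```python
V_THRESH, V_RESET, SHIFT = 1 << 14, 0, 6      # fixed-point threshold, reset, scaling

def iqif_step(v: int, i_syn: int) -> tuple[int, bool]:
    """One time step: quadratic drift (v*v, rescaled by a right shift) plus synaptic current."""
    v = v + (v * v >> SHIFT) + i_syn          # integer arithmetic only
    if v >= V_THRESH:                         # spike and reset
        return V_RESET, True
    return max(v, 0), False                   # clamp at rest to avoid runaway negatives

v, spikes = 0, 0
for t in range(1000):
    v, fired = iqif_step(v, i_syn=40)         # constant input current (arbitrary units)
    spikes += fired
print(f"{spikes} spikes in 1000 steps")
```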
arXiv Detail & Related papers (2022-01-19T09:26:34Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
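A minimal sketch of the homeostasis idea (time constants and increments are illustrative, not the MPATH values): the membrane potential leaks toward rest while the firing threshold is raised by every spike and decays back toward a baseline, so activity is regulated across very different input strengths.

```python
import numpy as np

rng = np.random.default_rng(0)
v, theta = 0.0, 1.0                      # membrane potential, adaptive threshold
tau_v, tau_theta, d_theta = 10.0, 200.0, 0.5

rates = []
for drive in (0.2, 1.0, 5.0):            # three very different input strengths
    spikes = 0
    for t in range(2000):
        v += (-v + drive + rng.normal(0, 0.1)) / tau_v   # leaky integration
        theta += (1.0 - theta) / tau_theta               # threshold decays to baseline
        if v >= theta:                                   # spike: reset and raise threshold
            v, theta = 0.0, theta + d_theta
            spikes += 1
    rates.append(spikes / 2000)
print("firing rates under weak/medium/strong drive:", rates)
```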
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
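A small sketch of that predictive-processing loop (dimensions, learning rates, and the linear generative map are assumptions of this sketch): a latent layer predicts the layer below, and the prediction error both settles the latent state and drives a local weight update.

```python
import numpy as np

rng = np.random.default_rng(0)
n_obs, n_lat = 12, 4
W = rng.normal(0, 0.1, (n_lat, n_obs))       # generative weights: latent -> observed
lr_z, lr_w = 0.1, 0.01

def settle(x, n_steps=50):
    z = np.zeros(n_lat)                        # latent "neurons"
    for _ in range(n_steps):
        error = x - z @ W                      # what neighbours did vs. what was predicted
        z += lr_z * (error @ W.T - 0.01 * z)   # latent state moves to reduce the error
    return z, error

x = rng.normal(size=n_obs)                     # an observed pattern
for epoch in range(200):
    z, error = settle(x)
    W += lr_w * np.outer(z, error)             # local, error-driven weight update
print(f"final prediction error norm: {np.linalg.norm(error):.3f}")
```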
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
This list is automatically generated from the titles and abstracts of the papers on this site.