Condition Integration Memory Network: An Interpretation of the Meaning
of the Neuronal Design
- URL: http://arxiv.org/abs/2106.05181v2
- Date: Mon, 6 Sep 2021 06:27:24 GMT
- Title: Condition Integration Memory Network: An Interpretation of the Meaning
of the Neuronal Design
- Authors: Cheng Qian
- Abstract summary: This document introduces a hypothetical framework for the functional nature of primitive neural networks.
It analyzes the idea that the activity of neurons and synapses can symbolically reenact the dynamic changes in the world.
It achieves this without participating in an algorithmic structure.
- Score: 10.421465303670638
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Understanding the basic operational logics of the nervous system is essential
to advancing neuroscientific research. However, theoretical efforts to tackle
this fundamental problem are lacking, despite the abundant empirical data about
the brain that has been collected in the past few decades. To address this
shortcoming, this document introduces a hypothetical framework for the
functional nature of primitive neural networks. It analyzes the idea that the
activity of neurons and synapses can symbolically reenact the dynamic changes
in the world and thus enable an adaptive system of behavior. More
significantly, the network achieves this without participating in an
algorithmic structure. When a neuron's activation represents some symbolic
element in the environment, each of its synapses can indicate a potential
change to the element and its future state. The efficacy of a synaptic
connection further specifies the element's particular probability for, or
contribution to, such a change. As it fires, a neuron's activation is
transmitted to its postsynaptic targets, resulting in a chronological shift of
the represented elements. As the inherent function of summation in a neuron
integrates the various presynaptic contributions, the neural network mimics the
collective causal relationship of events in the observed environment.
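To make this reading concrete, here is a minimal sketch, assuming hypothetical environmental elements and hand-picked synaptic efficacies (none of which come from the paper): each neuron's activation stands for a symbolic element, each weight encodes the probability that one element gives way to another, and weighted summation carries the represented state one step forward in time.
```python
import numpy as np

# Hypothetical elements of an observed environment, one neuron per element.
elements = ["cloud", "rain", "wet_ground", "puddle"]
n = len(elements)

# W[i, j] is the efficacy of the synapse from neuron i to neuron j, read
# here as the probability that element i's state gives way to element j's.
W = np.zeros((n, n))
W[0, 1] = 0.8  # cloud -> rain
W[1, 2] = 0.9  # rain -> wet ground
W[2, 3] = 0.6  # wet ground -> puddle

def step(activation):
    # As neurons fire, activation is transmitted to postsynaptic targets;
    # summation at each target integrates the presynaptic contributions,
    # producing one chronological shift of the represented elements.
    return activation @ W

a = np.zeros(n)
a[0] = 1.0  # the network currently represents "cloud"
for t in range(1, 4):
    a = step(a)
    print(f"t={t}:", dict(zip(elements, a.round(3))))
```
Each call to `step()` performs one chronological shift, so repeated firing traces out a symbolic reenactment of the causal chain cloud -> rain -> wet ground -> puddle without any explicit algorithm over those symbols.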
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of rethinking our assumptions at the most basic neuronal level of neural representation.
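As an illustration of the dynamical alternative, here is a minimal sketch of classic Kuramoto phase dynamics in NumPy; AKOrN's actual formulation differs in detail, so the network size and constants here are assumptions:
```python
import numpy as np

rng = np.random.default_rng(1)
n = 8
omega = rng.normal(0.0, 0.5, n)       # natural frequencies
theta = rng.uniform(0, 2 * np.pi, n)  # phases play the role of activations
K = 1.5                               # coupling strength (assumed)
dt = 0.05

for _ in range(400):
    # Each unit is pulled toward its neighbours' phases rather than
    # thresholding a weighted sum, making the unit a dynamical system.
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta = theta + dt * (omega + K * coupling)

# Order parameter r in [0, 1]: r = 1 means fully synchronised units.
r = np.abs(np.exp(1j * theta).mean())
print(f"synchronisation r = {r:.3f}")
```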
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a single perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
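A toy illustration of the encode-then-reformat idea, with all numbers hypothetical and no claim to reproduce the paper's statistical mechanical theory: two stimuli produce identical mean sensory responses but different pairwise correlation, and a multiplicative downstream readout converts that hidden correlation into a firing-rate difference.
```python
import numpy as np

rng = np.random.default_rng(2)

def sensory_responses(corr, trials=5000):
    # Two sensory neurons with identical means but tunable correlation.
    cov = np.array([[1.0, corr], [corr, 1.0]])
    return rng.multivariate_normal([1.0, 1.0], cov, size=trials)

for corr in (0.0, 0.8):
    r = sensory_responses(corr)
    # A multiplicative (quadratic) downstream readout reformats the
    # correlated variability into a mean firing-rate difference.
    downstream_rate = np.mean(r[:, 0] * r[:, 1])
    print(f"corr={corr}: mean inputs {r.mean(axis=0).round(2)}, "
          f"downstream rate {downstream_rate:.2f}")
```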
arXiv Detail & Related papers (2024-01-01T13:05:29Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Astrocytes as a mechanism for meta-plasticity and contextually-guided network function [2.66269503676104]
Astrocytes are a ubiquitous and enigmatic type of non-neuronal cell.
Astrocytes may play a more direct and active role in brain function and neural computation.
arXiv Detail & Related papers (2023-11-06T20:31:01Z)
- Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers [0.0]
This thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers.
In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns.
In chapter 3, we use the differentiable deep-network analog of a realistic cortical neuron as a tool to explore the implications of the neuron's output.
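One minimal sketch of temporally precise pattern recognition, using hypothetical per-synapse delays rather than the biophysical mechanisms studied in the thesis: the unit fires only when input spikes arrive in the one temporal order that its delays re-align into a coincidence.
```python
import numpy as np

# Per-synapse transmission delays (hypothetical, in time steps).
delays = np.array([3, 2, 1, 0])
threshold = 4  # coincident (delay-aligned) spikes required to fire

def responds(spike_times):
    aligned = spike_times + delays  # delay-align each input spike
    counts = np.bincount(aligned)   # spikes landing in each time bin
    return counts.max() >= threshold

print(responds(np.array([0, 1, 2, 3])))  # preferred temporal order -> True
print(responds(np.array([3, 2, 1, 0])))  # reversed order -> False
```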
arXiv Detail & Related papers (2023-09-26T17:32:08Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
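For flavor, here is a generic two-phase local contrastive-Hebbian update in NumPy; this is explicitly not the paper's CSDP rule, only a sketch of the family of local, backprop-free adjustments it belongs to, with all constants assumed.
```python
import numpy as np

rng = np.random.default_rng(6)
w = rng.normal(0.0, 0.1, (4, 3))          # synapses: 4 inputs -> 3 units
eta = 0.01                                 # learning rate (assumed)
pattern = np.array([1.0, 1.0, 0.0, 0.0])   # a fixed "positive" input

def spikes(x, w):
    # Stochastic threshold units standing in for spiking neurons.
    return (x @ w + rng.normal(0.0, 0.1, w.shape[1]) > 0.5).astype(float)

for _ in range(200):
    pos = pattern                   # clean sample: Hebbian update
    neg = rng.permutation(pattern)  # corrupted sample: anti-Hebbian update
    # Both terms use only locally available pre/post activity;
    # no error signal is backpropagated through the network.
    w += eta * (np.outer(pos, spikes(pos, w)) - np.outer(neg, spikes(neg, w)))

print("trained weights:\n", w.round(2))
```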
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Continuous Learning and Adaptation with Membrane Potential and Activation Threshold Homeostasis [91.3755431537592]
This paper presents the Membrane Potential and Activation Threshold Homeostasis (MPATH) neuron model.
The model allows neurons to maintain a form of dynamic equilibrium by automatically regulating their activity when presented with input.
Experiments demonstrate the model's ability to adapt to and continually learn from its input.
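A minimal sketch of the threshold half of such homeostasis (hypothetical constants; the MPATH model itself regulates both membrane potential and threshold): an adaptive threshold tracks a running firing-rate estimate so the neuron settles back toward a target rate after a step change in input drive.
```python
import numpy as np

rng = np.random.default_rng(3)
v, theta = 0.0, 1.0           # membrane potential and threshold
tau_v, eta = 0.9, 0.05        # leak factor and threshold learning rate
target_rate, rate = 0.2, 0.0  # homeostatic set point and rate estimate

for t in range(2000):
    drive = 1.5 if t < 1000 else 3.0  # step change in input strength
    v = tau_v * v + (1 - tau_v) * (drive + rng.normal(0, 0.1))
    spike = float(v > theta)
    if spike:
        v = 0.0                          # reset after firing
    rate = 0.99 * rate + 0.01 * spike    # running firing-rate estimate
    theta += eta * (rate - target_rate)  # homeostatic threshold update
    if t % 500 == 499:
        print(f"t={t+1} drive={drive} rate={rate:.2f} theta={theta:.2f}")
```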
arXiv Detail & Related papers (2021-04-22T04:01:32Z)
- Formalising the Use of the Activation Function in Neural Inference [0.0]
We discuss how a spike in a biological neurone belongs to a particular class of phase transitions in statistical physics.
We show that the artificial neurone is, mathematically, a mean field model of biological neural membrane dynamics.
This allows us to treat selective neural firing in an abstract way, and formalise the role of the activation function in perceptron learning.
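The mean-field reading can be illustrated directly, though the sketch below is an illustration rather than the paper's derivation: trial-averaging the all-or-none output of a noisy threshold unit yields a smooth sigmoidal firing probability, which is the role the activation function plays in a perceptron.
```python
import numpy as np

rng = np.random.default_rng(4)
noise_sd, trials = 1.0, 20000  # membrane noise level (assumed)

for x in np.linspace(-3, 3, 7):
    # All-or-none firing of a noisy threshold unit, averaged over trials.
    fired = (x + rng.normal(0, noise_sd, trials)) > 0.0
    p_fire = fired.mean()  # mean-field firing probability: a smooth sigmoid
    print(f"x={x:+.1f}  P(fire)={p_fire:.3f}")
```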
arXiv Detail & Related papers (2021-02-02T19:42:21Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
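A minimal sketch of the core of such a procedure, assuming binary concept masks, IoU scoring, and toy data (the concept names and the noisy neuron are hypothetical): enumerate logical compositions of concepts and keep the one whose mask best matches the neuron's activation mask.
```python
import numpy as np

rng = np.random.default_rng(5)
n = 1000  # hypothetical input samples

concepts = {
    "blue": rng.random(n) < 0.3,
    "water": rng.random(n) < 0.25,
    "sky": rng.random(n) < 0.2,
}
# Hypothetical neuron that fires for "water OR sky", plus 5% noise flips.
neuron = (concepts["water"] | concepts["sky"]) ^ (rng.random(n) < 0.05)

def iou(a, b):
    # Intersection over union between two binary activation masks.
    return (a & b).sum() / max((a | b).sum(), 1)

# Enumerate simple compositions: single concepts plus pairwise OR / AND.
candidates = dict(concepts)
names = list(concepts)
for i, a in enumerate(names):
    for b in names[i + 1:]:
        candidates[f"{a} OR {b}"] = concepts[a] | concepts[b]
        candidates[f"{a} AND {b}"] = concepts[a] & concepts[b]

best = max(candidates, key=lambda k: iou(neuron, candidates[k]))
print(best, round(iou(neuron, candidates[best]), 3))
```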
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.