Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern
Recognizers
- URL: http://arxiv.org/abs/2309.15090v1
- Date: Tue, 26 Sep 2023 17:32:08 GMT
- Title: Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern
Recognizers
- Authors: David Beniaguev
- Abstract summary: thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers.
In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns.
In chapter 3, we use the differentiable deep network analog of a realistic cortical neuron as a tool to approximate the gradient of the output of the neuron with respect to its input.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This PhD thesis is focused on the central idea that single neurons in the
brain should be regarded as temporally precise and highly complex
spatio-temporal pattern recognizers. This opposes the view, prevalent among
neuroscientists today, of biological neurons as simple and mainly spatial
pattern recognizers. In this thesis, I will attempt to demonstrate that this
is an important distinction, predominantly because the above-mentioned
computational properties of single neurons have far-reaching implications with
respect to the various brain circuits that neurons compose, and on how
information is encoded by neuronal activity in the brain. Namely, that these
particular "low-level" details at the single neuron level have substantial
system-wide ramifications. In the introduction we will highlight the main
components that comprise a neural microcircuit that can perform useful
computations and illustrate the inter-dependence of these components from a
system perspective. In chapter 1 we discuss the great complexity of the
spatio-temporal input-output relationship of cortical neurons that are the
result of morphological structure and biophysical properties of the neuron. In
chapter 2 we demonstrate that single neurons can generate temporally precise
output patterns in response to specific spatio-temporal input patterns with a
very simple biologically plausible learning rule. In chapter 3, we use the
differentiable deep network analog of a realistic cortical neuron as a tool to
approximate the gradient of the output of the neuron with respect to its input
and use this capability in an attempt to teach the neuron to perform a
nonlinear XOR operation. In chapter 4 we expand on chapter 3, describing the
extension of our ideas to neuronal networks composed of many realistic
biological spiking neurons that represent either small microcircuits or entire
brain regions.
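The gradient-based approach described for chapter 3 can be illustrated with a minimal, hypothetical sketch (not the thesis code): here a tiny NumPy MLP stands in for the differentiable deep-network analog of a cortical neuron. Because the analog is differentiable, we can both compute the gradient of its output with respect to its input and use gradient descent to teach it the nonlinear XOR mapping. All sizes, seeds, and hyperparameters below are illustrative assumptions.

```python
import numpy as np

# Toy stand-in for the deep-network analog of a cortical neuron:
# a 2-8-1 MLP with tanh hidden units and a sigmoid output.
rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])  # XOR targets

W1 = rng.normal(0.0, 1.0, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)
    out = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, out

lr, losses = 0.5, []
for _ in range(10000):
    h, out = forward(X)
    losses.append(float(np.mean((out - y) ** 2)))
    # Backpropagation of the squared error through the analog model.
    d_out = (out - y) * out * (1.0 - out)
    d_h = (d_out @ W2.T) * (1.0 - h ** 2)
    W2 -= lr * h.T @ d_out; b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;   b1 -= lr * d_h.sum(axis=0)

h, out = forward(X)
# Gradient of the model's output with respect to its input (chain rule):
# the quantity the thesis approximates for a realistic neuron model.
grad_x = ((out * (1.0 - out)) @ W2.T * (1.0 - h ** 2)) @ W1.T

print(np.round(out.ravel(), 2))  # typically converges toward [0, 1, 1, 0]
print(grad_x.shape)              # (4, 2): one input-gradient per pattern
```

In the thesis the differentiable analog is a deep network trained to mimic a biophysically detailed cortical neuron; the same backpropagated input-gradients then serve as a learning signal for the neuron itself.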
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z) - Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
arXiv Detail & Related papers (2024-01-01T13:05:29Z) - Brain-Inspired Machine Intelligence: A Survey of
Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z) - Identifying Interpretable Visual Features in Artificial and Biological
Neural Systems [3.604033202771937]
Single neurons in neural networks are often interpretable in that they represent individual, intuitively meaningful features.
Many neurons exhibit "mixed selectivity", i.e., they represent multiple unrelated features.
We propose an automated method for quantifying visual interpretability and an approach for finding meaningful directions in network activation space.
arXiv Detail & Related papers (2023-10-17T17:41:28Z) - The Expressive Leaky Memory Neuron: an Efficient and Expressive Phenomenological Neuron Model Can Solve Long-Horizon Tasks [64.08042492426992]
We introduce the Expressive Leaky Memory (ELM) neuron model, a biologically inspired model of a cortical neuron.
Our ELM neuron can accurately match the aforementioned input-output relationship with under ten thousand trainable parameters.
We evaluate it on various tasks with demanding temporal structures, including the Long Range Arena (LRA) datasets.
arXiv Detail & Related papers (2023-06-14T13:34:13Z) - Constraints on the design of neuromorphic circuits set by the properties
of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) - POPPINS : A Population-Based Digital Spiking Neuromorphic Processor with
Integer Quadratic Integrate-and-Fire Neurons [50.591267188664666]
We propose a population-based digital spiking neuromorphic processor in 180nm process technology with a two-level population hierarchy.
The proposed approach enables the development of biomimetic neuromorphic systems and various low-power, low-latency inference processing applications.
arXiv Detail & Related papers (2022-01-19T09:26:34Z) - Rapid detection and recognition of whole brain activity in a freely
behaving Caenorhabditis elegans [18.788855494800238]
We propose a cascade solution for long-term and rapid recognition of head ganglion neurons in a freely moving C. elegans.
Under the constraint of a small number of training samples, our bottom-up approach is able to process each volume - 1024 × 1024 × 18 voxels - in less than 1 second.
Our work represents an important development towards a rapid and fully automated algorithm for decoding whole brain activity underlying natural animal behaviors.
arXiv Detail & Related papers (2021-09-22T01:33:54Z) - Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.