Learn to integrate parts for whole through correlated neural variability
- URL: http://arxiv.org/abs/2401.00746v1
- Date: Mon, 1 Jan 2024 13:05:29 GMT
- Title: Learn to integrate parts for whole through correlated neural variability
- Authors: Zhichao Zhu, Yang Qi, Wenlian Lu, Jianfeng Feng
- Abstract summary: Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
- Score: 8.173681663544757
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sensory perception originates from the responses of sensory neurons, which
react to a collection of sensory signals linked to various physical attributes
of a singular perceptual object. Unraveling how the brain extracts perceptual
information from these neuronal responses is a pivotal challenge in both
computational neuroscience and machine learning. Here we introduce a
statistical mechanical theory, where perceptual information is first encoded in
the correlated variability of sensory neurons and then reformatted into the
firing rates of downstream neurons. Applying this theory, we illustrate the
encoding of motion direction using neural covariance and demonstrate
high-fidelity direction recovery by spiking neural networks. Networks trained
under this theory also show enhanced performance in classifying natural images,
achieving higher accuracy and faster inference speed. Our results challenge the
traditional view of neural covariance as a secondary factor in neural coding,
highlighting its potential influence on brain function.
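As a loose illustration of the covariance-coding idea (a minimal sketch, not the paper's actual model or spiking-network implementation), one can hide a motion direction in the covariance of two correlated Gaussian "neurons" and recover it from samples of their joint variability; all parameter choices and function names below are hypothetical:

```python
import numpy as np

def direction_covariance(theta, var=1.0, rho=0.8):
    """Covariance matrix whose principal axis is aligned with direction theta."""
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    D = np.diag([var * (1 + rho), var * (1 - rho)])  # elongated along theta
    return R @ D @ R.T

def decode_direction(samples):
    """Recover theta as the leading eigenvector of the sample covariance."""
    C = np.cov(samples, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(C)       # ascending eigenvalues
    v = eigvecs[:, np.argmax(eigvals)]         # principal axis
    return np.arctan2(v[1], v[0]) % np.pi      # axial quantity, mod pi

rng = np.random.default_rng(0)
theta_true = np.deg2rad(60.0)
# Mean responses carry no information here; only the correlated variability does.
samples = rng.multivariate_normal(np.zeros(2),
                                  direction_covariance(theta_true),
                                  size=5000)
theta_hat = decode_direction(samples)
print(np.rad2deg(theta_hat))  # close to 60 for a large sample
```

Note that the mean firing rates are zero for every direction: the decoder succeeds only because the stimulus is written into the second-order statistics, which is the point the abstract makes against treating covariance as a secondary coding channel.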
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- NeuroBind: Towards Unified Multimodal Representations for Neural Signals [20.02503060795981]
We present NeuroBind, a representation that unifies multiple brain signal types, including EEG, fMRI, calcium imaging, and spiking data.
This approach holds significant potential for advancing neuroscience research, improving AI systems, and developing neuroprosthetics and brain-computer interfaces.
arXiv Detail & Related papers (2024-07-19T04:42:52Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture compatible and scalable with deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning [11.048601659933249]
How neural networks in the human brain represent commonsense knowledge is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence.
This work investigates how population encoding and spike-timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks.
The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network.
arXiv Detail & Related papers (2022-07-11T05:22:38Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
- NeuroCartography: Scalable Automatic Visual Summarization of Concepts in Deep Neural Networks [18.62960153659548]
NeuroCartography is an interactive system that summarizes and visualizes concepts learned by neural networks.
It automatically discovers and groups neurons that detect the same concepts.
It describes how such neuron groups interact to form higher-level concepts and the subsequent predictions.
arXiv Detail & Related papers (2021-08-29T22:43:52Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences of its use.