Emergent organization of receptive fields in networks of excitatory and inhibitory neurons
- URL: http://arxiv.org/abs/2205.13614v1
- Date: Thu, 26 May 2022 20:43:14 GMT
- Title: Emergent organization of receptive fields in networks of excitatory and inhibitory neurons
- Authors: Leon Lufkin, Ashish Puri, Ganlin Song, Xinyi Zhong, John Lafferty
- Abstract summary: Motivated by a leaky integrate-and-fire model of neural waves, we propose an activation model that is more typical of artificial neural networks.
Experiments with a synthetic model of somatosensory input are used to investigate how the network dynamics may affect plasticity of neuronal maps under changes to the inputs.
- Score: 3.674863913115431
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Local patterns of excitation and inhibition that can generate neural waves
are studied as a computational mechanism underlying the organization of
neuronal tunings. Sparse coding algorithms based on networks of excitatory and
inhibitory neurons are proposed that exhibit topographic maps as the receptive
fields are adapted to input stimuli. Motivated by a leaky integrate-and-fire
model of neural waves, we propose an activation model that is more typical of
artificial neural networks. Computational experiments with the activation model
using both natural images and natural language text are presented. In the case
of images, familiar "pinwheel" patterns of oriented edge detectors emerge; in
the case of text, the resulting topographic maps exhibit a 2-dimensional
representation of granular word semantics. Experiments with a synthetic model
of somatosensory input are used to investigate how the network dynamics may
affect plasticity of neuronal maps under changes to the inputs.
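As a concrete illustration of the mechanism the abstract describes, below is a minimal sketch of a leaky integrate-and-fire sheet with short-range excitation and longer-range inhibition (a difference-of-Gaussians lateral kernel). This is only a sketch of the general technique, not the paper's model: the 1-D ring geometry, kernel widths, and all parameter values are illustrative assumptions.

```python
import numpy as np

# Minimal sketch: a leaky integrate-and-fire (LIF) sheet with local
# excitation and broader inhibition. All parameters are illustrative
# assumptions, not values from the paper.

rng = np.random.default_rng(0)
N = 64                                 # neurons on a 1-D ring
tau, v_th, v_reset = 20.0, 1.0, 0.0    # membrane time constant (ms), threshold, reset
dt, steps = 1.0, 500                   # time step (ms), simulation length

# Difference-of-Gaussians lateral kernel: excitatory at short range,
# inhibitory at longer range.
idx = np.arange(N)
d = np.abs(idx[:, None] - idx[None, :])
d = np.minimum(d, N - d)               # wrap-around (ring) distance
W = 2.0 * np.exp(-d**2 / (2 * 2.0**2)) - 1.0 * np.exp(-d**2 / (2 * 6.0**2))
np.fill_diagonal(W, 0.0)               # no self-connections

v = rng.uniform(v_reset, v_th, N)      # membrane potentials
spikes = np.zeros(N)
rate = np.zeros(N)                     # running spike count per neuron
for _ in range(steps):
    i_ext = 1.2 + 0.3 * rng.standard_normal(N)  # noisy external drive
    i_rec = W @ spikes                           # recurrent lateral input
    v += (dt / tau) * (-(v - v_reset) + i_ext + i_rec)
    spikes = (v >= v_th).astype(float)
    v[spikes > 0] = v_reset                      # reset neurons that fired
    rate += spikes

# With a kernel like this, activity tends to organize into localized bumps
# rather than spreading uniformly: the local excitation/inhibition pattern
# the paper studies as a substrate for waves and topographic organization.
print(np.round(rate / steps, 2))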
Related papers
- Unsupervised representation learning with Hebbian synaptic and structural plasticity in brain-like feedforward neural networks [0.0]
We introduce and evaluate a brain-like neural network model capable of unsupervised representation learning.
The model was tested on a diverse set of popular machine learning benchmarks.
arXiv Detail & Related papers (2024-06-07T08:32:30Z)
- Exploring neural oscillations during speech perception via surrogate gradient spiking neural networks [59.38765771221084]
We present a physiologically inspired speech recognition architecture that is compatible with and scalable within deep learning frameworks.
We show end-to-end gradient descent training leads to the emergence of neural oscillations in the central spiking neural network.
Our findings highlight the crucial inhibitory role of feedback mechanisms, such as spike frequency adaptation and recurrent connections, in regulating and synchronising neural activity to improve recognition performance.
arXiv Detail & Related papers (2024-04-22T09:40:07Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Bio-Inspired Simple Neural Network for Low-Light Image Restoration: A Minimalist Approach [8.75682288556859]
In this study, we explore the potential of using a straightforward neural network inspired by the retina model to efficiently restore low-light images.
Our proposed neural network model reduces the computational overhead compared to traditional signal-processing models.
arXiv Detail & Related papers (2023-05-03T01:16:45Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Low-Light Image Restoration Based on Retina Model using Neural Networks [0.0]
The proposed neural network model reduces computational overhead compared with traditional signal-processing models, and by subjective assessment generates results comparable to those of complex deep learning models.
This work shows that directly simulating the functions of retinal neurons with neural networks not only avoids manually searching for optimal parameters, but also paves the way to building artificial counterparts of certain neurobiological structures.
arXiv Detail & Related papers (2022-10-04T08:14:49Z)
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models in prediction performance.
arXiv Detail & Related papers (2022-05-21T14:08:53Z)
- Functional2Structural: Cross-Modality Brain Networks Representation Learning [55.24969686433101]
Graph mining on brain networks may facilitate the discovery of novel biomarkers for clinical phenotypes and neurodegenerative diseases.
We propose a novel graph learning framework, known as Deep Signed Brain Networks (DSBN), with a signed graph encoder.
We validate our framework on clinical phenotype and neurodegenerative disease prediction tasks using two independent, publicly available datasets.
arXiv Detail & Related papers (2022-05-06T03:45:36Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.