Does the Brain Infer Invariance Transformations from Graph Symmetries?
- URL: http://arxiv.org/abs/2111.06174v2
- Date: Sat, 27 May 2023 17:47:01 GMT
- Title: Does the Brain Infer Invariance Transformations from Graph Symmetries?
- Authors: Helmut Linde
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The invariance of natural objects under perceptual changes is possibly encoded in the brain by symmetries in the graph of synaptic connections. The graph can be established via unsupervised learning in a biologically plausible process across different perceptual modalities. This hypothetical encoding scheme is supported by the correlation structure of naturalistic audio and image data, and it predicts a neural connectivity architecture which is consistent with many empirical observations about primary sensory cortex.
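To make the hypothesis concrete, here is a minimal sketch (our illustration, not the paper's model) in which a symmetry of the connectivity graph acts as an invariance transformation: neurons on a ring with circulant synaptic weights admit cyclic shifts as graph automorphisms, so a readout that respects the graph responds identically to shifted stimuli.

```python
import numpy as np

n = 12
idx = np.arange(n)
# Circulant adjacency: synaptic weight depends only on ring distance,
# so the graph is symmetric under cyclic shifts.
d = np.abs(np.subtract.outer(idx, idx))
dist = np.minimum(d, n - d)
A = np.exp(-dist.astype(float))
np.fill_diagonal(A, 0.0)

# A cyclic shift, written as a permutation matrix P.
P = np.roll(np.eye(n), 1, axis=0)

# P is a graph automorphism: relabeling neurons by P leaves A unchanged.
assert np.allclose(P @ A @ P.T, A)

def readout(x):
    # One synaptic filtering step through the graph plus a nonlinearity.
    return np.linalg.norm(np.tanh(A @ x))

x = np.zeros(n); x[3] = 1.0        # a localized stimulus on the ring
print(readout(x), readout(P @ x))  # identical: the readout is shift-invariant
```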
Related papers
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z) - Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
- Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
arXiv Detail & Related papers (2024-01-01T13:05:29Z) - Unsupervised Learning of Invariance Transformations [105.54048699217668]
- Unsupervised Learning of Invariance Transformations [105.54048699217668]
We develop an algorithmic framework for finding approximate graph automorphisms.
We discuss how this framework can be used to find approximate automorphisms in general weighted graphs.
arXiv Detail & Related papers (2023-07-24T17:03:28Z) - Brain Cortical Functional Gradients Predict Cortical Folding Patterns
via Attention Mesh Convolution [51.333918985340425]
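To make the objective concrete, here is a brute-force sketch (ours, not the paper's algorithm): score every vertex permutation of a small weighted graph by how well it preserves the weighted adjacency matrix; permutations with near-zero scores are approximate automorphisms.

```python
import itertools
import numpy as np

rng = np.random.default_rng(2)
n = 6
# A weighted ring with slightly perturbed weights, so its ring
# symmetries are automorphisms only approximately.
A = np.zeros((n, n))
for i in range(n):
    w = 1.0 + 0.05 * rng.normal()
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = w

def score(perm):
    # How badly the permutation distorts the weighted adjacency matrix.
    P = np.eye(n)[list(perm)]
    return np.linalg.norm(P @ A @ P.T - A)

best = sorted(itertools.permutations(range(n)), key=score)[:4]
for perm in best:
    print(perm, round(score(perm), 3))
# The identity scores 0; ring rotations and reflections score near 0;
# generic permutations score much worse.
```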
- Brain Cortical Functional Gradients Predict Cortical Folding Patterns via Attention Mesh Convolution [51.333918985340425]
We develop a novel attention mesh convolution model to predict cortical gyro-sulcal segmentation maps on individual brains.
Experiments show that our model outperforms other state-of-the-art models in prediction performance.
arXiv Detail & Related papers (2022-05-21T14:08:53Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Drop, Swap, and Generate: A Self-Supervised Approach for Generating
Neural Activity [33.06823702945747]
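A simplified sketch of the kind of data manipulation involved (our version, not the paper's exact data model): two ensembles with matched covariance but different higher-order statistics, here sparse localized bumps versus a covariance-matched Gaussian surrogate. Per the paper, only the non-Gaussian ensemble triggers the emergence of convolutional structure.

```python
import numpy as np

rng = np.random.default_rng(4)
n, samples = 32, 5000

def localized_bumps(n, samples):
    # One Gaussian bump per sample at a random position on a ring:
    # translation-invariant covariance, non-Gaussian local statistics.
    X = np.zeros((samples, n))
    centers = rng.integers(0, n, size=samples)
    for k, c in enumerate(centers):
        d = np.minimum(np.abs(np.arange(n) - c), n - np.abs(np.arange(n) - c))
        X[k] = np.exp(-0.5 * (d / 2.0) ** 2)
    return X - X.mean(0)

X = localized_bumps(n, samples)
C = X.T @ X / samples                  # empirical covariance
X_gauss = rng.multivariate_normal(np.zeros(n), C, size=samples)

# Same second-order structure, very different fourth-order structure:
print("variances match:", np.allclose(X.var(0), X_gauss.var(0), atol=0.05))
print("kurtosis (bumps):", ((X / X.std(0)) ** 4).mean().round(2))     # > 3
print("kurtosis (gauss):", ((X_gauss / X_gauss.std(0)) ** 4).mean().round(2))
```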
- Drop, Swap, and Generate: A Self-Supervised Approach for Generating Neural Activity [33.06823702945747]
We introduce a novel unsupervised approach for learning disentangled representations of neural activity called Swap-VAE.
Our approach combines a generative modeling framework with an instance-specific alignment loss.
We show that it is possible to build representations that disentangle neural datasets along relevant latent dimensions linked to behavior.
arXiv Detail & Related papers (2021-11-03T16:39:43Z) - Can the brain use waves to solve planning problems? [62.997667081978825]
- Can the brain use waves to solve planning problems? [62.997667081978825]
We present a neural network model which can solve such tasks.
The model is compatible with a broad range of empirical findings about the mammalian neocortex and hippocampus.
arXiv Detail & Related papers (2021-10-11T11:07:05Z) - Deep Graph Random Process for Relational-Thinking-Based Speech
Recognition [12.09786458466155]
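The premise can be illustrated with a classic wavefront planner, a much simpler algorithm than the paper's neural model: a wave of arrival times spreads from the goal, and descending that field from the start recovers a shortest path.

```python
from collections import deque

grid = ["....#...",
        ".##.#.#.",
        ".#....#.",
        ".#.##.#.",
        "...#...."]
H, W = len(grid), len(grid[0])
goal, start = (0, 0), (4, 7)

# Propagate a wave of arrival times outward from the goal (BFS).
dist = {goal: 0}
q = deque([goal])
while q:
    r, c = q.popleft()
    for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        nr, nc = r + dr, c + dc
        if 0 <= nr < H and 0 <= nc < W and grid[nr][nc] == "." \
                and (nr, nc) not in dist:
            dist[(nr, nc)] = dist[(r, c)] + 1
            q.append((nr, nc))

# Descend the arrival-time field from the start to recover a path.
path, cur = [start], start
while cur != goal:
    cur = min(((cur[0] + dr, cur[1] + dc)
               for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1))
               if (cur[0] + dr, cur[1] + dc) in dist),
              key=dist.get)
    path.append(cur)
print(len(path) - 1, "steps:", path[:4], "...")
```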
- Deep Graph Random Process for Relational-Thinking-Based Speech Recognition [12.09786458466155]
Relational thinking is characterized by relying on innumerable unconscious percepts pertaining to relations between new sensory signals and prior knowledge.
We present a Bayesian nonparametric deep learning method called deep graph random process (DGP) that can generate an infinite number of probabilistic graphs representing percepts.
Our approach is able to successfully infer relations among utterances without using any relational data during training.
arXiv Detail & Related papers (2020-07-04T15:27:57Z) - Optimal Learning with Excitatory and Inhibitory synapses [91.3755431537592]
- Optimal Learning with Excitatory and Inhibitory synapses [91.3755431537592]
I study the problem of storing associations between analog signals in the presence of correlations.
I characterize the typical learning performance in terms of the power spectrum of random input and output processes.
arXiv Detail & Related papers (2020-05-25T18:25:54Z)
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.