Inferring Inference
- URL: http://arxiv.org/abs/2310.03186v3
- Date: Fri, 13 Oct 2023 22:04:12 GMT
- Title: Inferring Inference
- Authors: Rajkumar Vasudeva Raju, Zhe Li, Scott Linderman, Xaq Pitkow
- Abstract summary: We develop a framework for inferring canonical distributed computations from large-scale neural activity patterns.
We simulate recordings for a model brain that implicitly implements an approximate inference algorithm on a probabilistic graphical model.
Overall, this framework provides a new tool for discovering interpretable structure in neural recordings.
- Score: 7.11780383076327
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Patterns of microcircuitry suggest that the brain has an array of repeated
canonical computational units. Yet neural representations are distributed, so
the relevant computations may only be related indirectly to single-neuron
transformations. It thus remains an open challenge how to define canonical
distributed computations. We integrate normative and algorithmic theories of
neural computation into a mathematical framework for inferring canonical
distributed computations from large-scale neural activity patterns. At the
normative level, we hypothesize that the brain creates a structured internal
model of its environment, positing latent causes that explain its sensory
inputs, and uses those sensory inputs to infer the latent causes. At the
algorithmic level, we propose that this inference process is a nonlinear
message-passing algorithm on a graph-structured model of the world. Given a
time series of neural activity during a perceptual inference task, our
framework finds (i) the neural representation of relevant latent variables,
(ii) interactions between these variables that define the brain's internal
model of the world, and (iii) message-functions specifying the inference
algorithm. These targeted computational properties are then statistically
distinguishable due to the symmetries inherent in any canonical computation, up
to a global transformation. As a demonstration, we simulate recordings for a
model brain that implicitly implements an approximate inference algorithm on a
probabilistic graphical model. Given its external inputs and noisy neural
activity, we recover the latent variables, their neural representation and
dynamics, and canonical message-functions. We highlight features of
experimental design needed to successfully extract canonical computations from
neural data. Overall, this framework provides a new tool for discovering
interpretable structure in neural recordings.
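The algorithmic hypothesis above, inference as message passing on a graph-structured model, can be sketched in its simplest form as sum-product belief propagation on a tiny binary chain. This is an illustrative toy, not the paper's model brain or its recovered message-functions: the chain x1 - x2 - x3, the evidence values, and the coupling matrix below are all made-up numbers chosen for the example.

```python
# Toy sum-product message passing on a binary chain x1 - x2 - x3.
# All potentials are illustrative assumptions, not values from the paper.

def normalize(m):
    s = sum(m)
    return [v / s for v in m]

# Local evidence phi_i(x_i) over binary states {0, 1}.
phi = {
    1: [0.7, 0.3],   # x1's sensory evidence favours state 0
    2: [0.5, 0.5],   # x2 is unobserved
    3: [0.2, 0.8],   # x3's sensory evidence favours state 1
}

# Pairwise coupling psi(x_i, x_j): neighbouring variables prefer to agree.
psi = [[0.9, 0.1],
       [0.1, 0.9]]

def message(src, dst, incoming):
    """m_{src->dst}(x_dst) = sum_{x_src} phi_src(x_src) * psi(x_src, x_dst)
    * product of messages into src from neighbours other than dst."""
    out = []
    for x_dst in (0, 1):
        total = 0.0
        for x_src in (0, 1):
            prod = phi[src][x_src] * psi[x_src][x_dst]
            for m in incoming:
                prod *= m[x_src]
            total += prod
        out.append(total)
    return normalize(out)

# On a chain, messages flow inward from the leaves to x2.
m12 = message(1, 2, [])   # x1 -> x2
m32 = message(3, 2, [])   # x3 -> x2

# The marginal belief at x2 combines its local evidence with both messages.
b2 = normalize([phi[2][x] * m12[x] * m32[x] for x in (0, 1)])
print(b2)
```

In the paper's setting the message-functions are unknown nonlinear maps to be recovered from neural activity rather than the fixed sum-product update used here; the sketch only shows the structure of the computation being inferred.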
Related papers
- Neural timescales from a computational perspective [5.390514665166601]
Timescales of neural activity are diverse across and within brain areas, and experimental observations suggest that neural timescales reflect information in dynamic environments.
Here, we take a complementary perspective and synthesize three directions where computational methods can distill the broad set of empirical observations into quantitative and testable theories.
arXiv Detail & Related papers (2024-09-04T13:16:20Z)
- Latent Variable Sequence Identification for Cognitive Models with Neural Bayes Estimation [7.7227297059345466]
We present an approach that extends neural Bayes estimation to learn a direct mapping between experimental data and the targeted latent variable space.
Our work underscores that combining recurrent neural networks and simulation-based inference to identify latent variable sequences can enable researchers to access a wider class of cognitive models.
arXiv Detail & Related papers (2024-06-20T21:13:39Z)
- Linearization Turns Neural Operators into Function-Valued Gaussian Processes [23.85470417458593]
We introduce a new framework for approximate Bayesian uncertainty quantification in neural operators.
Our approach can be interpreted as a probabilistic analogue of the concept of currying from functional programming.
We showcase the efficacy of our approach through applications to different types of partial differential equations.
arXiv Detail & Related papers (2024-06-07T16:43:54Z)
- Graph Neural Networks for Learning Equivariant Representations of Neural Networks [55.04145324152541]
We propose to represent neural networks as computational graphs of parameters.
Our approach enables a single model to encode neural computational graphs with diverse architectures.
We showcase the effectiveness of our method on a wide range of tasks, including classification and editing of implicit neural representations.
arXiv Detail & Related papers (2024-03-18T18:01:01Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Invariants for neural automata [0.0]
We develop a formal framework for the investigation of symmetries and invariants of neural automata under different encodings.
Our work could be of substantial importance for related regression studies of real-world measurements with neurosymbolic processors.
arXiv Detail & Related papers (2023-02-04T11:40:40Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of spontaneous behaviors exhibited by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.