Brain in the Dark: Design Principles for Neuro-mimetic Learning and
Inference
- URL: http://arxiv.org/abs/2307.08613v1
- Date: Fri, 14 Jul 2023 15:04:21 GMT
- Title: Brain in the Dark: Design Principles for Neuro-mimetic Learning and
Inference
- Authors: Mehran H. Bazargani, Szymon Urbas, Karl Friston
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Even though the brain operates in pure darkness, within the skull, it can
infer the most likely causes of its sensory input. An approach to modelling
this inference is to assume that the brain has a generative model of the world,
which it can invert to infer the hidden causes behind its sensory stimuli, that
is, perception. This assumption raises key questions: how to formulate the
problem of designing brain-inspired generative models, how to invert them for
the tasks of inference and learning, what is the appropriate loss function to
be optimised, and, most importantly, what are the different choices of mean
field approximation (MFA) and their implications for variational inference
(VI).
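For orientation, the variational quantities referred to here can be written compactly. This is standard textbook material, not notation quoted from the paper itself: the loss optimised in VI is the variational free energy (the negative evidence lower bound), and an MFA is a choice of factorisation of the approximate posterior.

```latex
% Variational free energy for sensory data x, hidden causes z,
% generative model p(x, z) and approximate posterior q(z):
F[q] = \mathbb{E}_{q(z)}\big[\ln q(z) - \ln p(x, z)\big]
     = D_{\mathrm{KL}}\big[q(z) \,\|\, p(z \mid x)\big] - \ln p(x)
% Because ln p(x) does not depend on q, minimising F drives q(z)
% toward the true posterior p(z | x); this minimisation is the model
% inversion identified with perception.

% A mean-field approximation (MFA) factorises q over subsets of causes:
q(z) = \prod_{i} q_i(z_i)
% Different partitions of z yield different MFAs, and hence different
% message-passing schemes for variational inference (VI).
```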
Related papers
- Brain-like Functional Organization within Large Language Models [58.93629121400745]
The human brain has long inspired the pursuit of artificial intelligence (AI).
Recent neuroimaging studies provide compelling evidence of alignment between the computational representations of artificial neural networks (ANNs) and the neural responses of the human brain to stimuli. These studies leave a gap, however, between representational alignment and the brain's functional organization.
In this study, we bridge this gap by directly coupling sub-groups of artificial neurons with functional brain networks (FBNs).
This framework links the artificial-neuron sub-groups to FBNs, enabling the delineation of brain-like functional organization within large language models (LLMs).
arXiv Detail & Related papers (2024-10-25T13:15:17Z)
- Coin-Flipping In The Brain: Statistical Learning with Neuronal Assemblies [9.757971977909683]
We study the emergence of statistical learning in NEMO, a computational model of the brain.
We show that connections between assemblies record statistics, and ambient noise can be harnessed to make probabilistic choices.
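NEMO itself is not reproduced here; the toy sketch below (all names and dynamics are illustrative assumptions, not the authors' model) shows the two claims in miniature: a connection weight records a co-occurrence statistic via Hebbian-style increments, and noise converts a deterministic readout into a probabilistic choice (here via the Gumbel-max trick, a stand-in for NEMO's actual noise mechanism).

```python
import numpy as np

rng = np.random.default_rng(0)

# Connection weights from assembly A to assemblies B and C record
# co-occurrence counts (Hebbian-style increments during observation).
w = {"B": 0.0, "C": 0.0}
for successor in ["B"] * 70 + ["C"] * 30:   # A -> B on 70% of trials
    w[successor] += 1.0

def choose():
    # Ambient noise makes the readout probabilistic: adding Gumbel noise
    # to log-weights and taking the maximum samples each option with
    # probability proportional to its recorded count.
    noisy = {k: np.log(v) + rng.gumbel() for k, v in w.items()}
    return max(noisy, key=noisy.get)

samples = [choose() for _ in range(10_000)]
print(samples.count("B") / len(samples))    # approx. 0.7, tracking the statistic
```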
arXiv Detail & Related papers (2024-06-11T20:51:50Z)
- MindBridge: A Cross-Subject Brain Decoding Framework [60.58552697067837]
Brain decoding aims to reconstruct stimuli from acquired brain signals.
Currently, brain decoding is confined to a per-subject-per-model paradigm.
We present MindBridge, which achieves cross-subject brain decoding by employing only one model.
arXiv Detail & Related papers (2024-04-11T15:46:42Z)
- How does the primate brain combine generative and discriminative computations in vision? [4.691670689443386]
Two contrasting conceptions of the inference process have each been influential in research on biological vision and machine vision.
According to the generative conception, vision inverts a generative model through an interrogation of the evidence, in a process often thought to involve top-down predictions of sensory data.
We explain and clarify the terminology, review the key empirical evidence, and propose an empirical research program that transcends and sets the stage for revealing the mysterious hybrid algorithm of primate vision.
arXiv Detail & Related papers (2024-01-11T16:07:58Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
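The six families themselves are not enumerated in this summary. As a minimal illustration of why backpropagation is considered biologically questionable, the sketch below contrasts its non-local error signal with a purely local Hebbian update; both are generic textbook examples, not drawn from the survey.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(32, 10))          # batch of inputs
W = rng.normal(size=(10, 5)) * 0.1     # weights of one layer
y = np.tanh(x @ W)                     # layer activity

# Backpropagation: the update needs an error signal propagated backward
# from the loss through all downstream layers (non-local information).
delta = rng.normal(size=(32, 5))       # stand-in for the backpropagated error
grad_bp = x.T @ (delta * (1 - y**2))   # dL/dW via the chain rule
W_bp = W - 0.01 * grad_bp

# Hebbian rule: the update uses only locally available pre- and
# post-synaptic activity, the kind of locality constraint that
# brain-inspired credit-assignment schemes try to respect.
grad_hebb = x.T @ y / len(x)           # "fire together, wire together"
W_hebb = W + 0.01 * grad_hebb
```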
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Neural Language Models are not Born Equal to Fit Brain Data, but Training Helps [75.84770193489639]
We examine the impact of test loss, training corpus and model architecture on the prediction of functional Magnetic Resonance Imaging timecourses of participants listening to an audiobook.
We find that untrained versions of each model already explain a significant amount of signal in the brain by capturing similarity in brain responses across identical words.
We suggest good practices for future studies aiming at explaining the human language system using neural language models.
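The paper's exact pipeline is not given in this summary. A common recipe in this literature fits a ridge-regression encoding model from language-model features to voxel timecourses and scores the held-out correlation; the sketch below follows that recipe with placeholder shapes and hyperparameters, not the authors' code.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 768))   # per-timepoint language-model features (hypothetical)
Y = rng.normal(size=(1000, 50))    # fMRI timecourses for 50 voxels (hypothetical)

X_tr, X_te, Y_tr, Y_te = train_test_split(X, Y, test_size=0.2, random_state=0)
enc = Ridge(alpha=10.0).fit(X_tr, Y_tr)   # one linear encoding model per voxel
pred = enc.predict(X_te)

# Score each voxel by the correlation between predicted and held-out
# signal, the usual "brain score"-style metric in this literature.
r = [np.corrcoef(pred[:, v], Y_te[:, v])[0, 1] for v in range(Y.shape[1])]
print(np.mean(r))
```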
arXiv Detail & Related papers (2022-07-07T15:37:17Z)
- From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
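One plausible formalisation, assumed here rather than quoted from the paper: if the responsibility signals are normalised weights over the K generative-inverse model pairs, their entropy measures how concentrated conscious selection is.

```latex
% Responsibility signals r_1, ..., r_K over K model pairs, with
% r_k >= 0 and sum_k r_k = 1. Their entropy:
H(r) = -\sum_{k=1}^{K} r_k \ln r_k
% H is low when one pair dominates (responsibility concentrated on the
% best-fitting model) and high when responsibility is spread evenly.
```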
arXiv Detail & Related papers (2021-09-27T05:00:56Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
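As a minimal illustration of what an SCM is (illustrative code, not the paper's NCM implementation): exogenous noise plus structural mechanisms, with an intervention implemented by overriding a mechanism. An NCM, in this framing, replaces the hand-written mechanism with a trainable neural network while keeping the same structural form.

```python
import random

# Minimal structural causal model: each endogenous variable is a
# function of its parents and an exogenous noise term.
def sample(do_x=None):
    u_x, u_y = random.gauss(0, 1), random.gauss(0, 1)  # exogenous noise
    x = u_x if do_x is None else do_x   # do(X = x) replaces X's mechanism
    y = 2.0 * x + u_y                   # structural mechanism f_Y(x, u_y)
    return x, y

# Observational and interventional distributions differ in general.
obs = [sample() for _ in range(10_000)]
interv = [sample(do_x=1.0) for _ in range(10_000)]
print(sum(y for _, y in obs) / len(obs))        # E[Y] approx. 0
print(sum(y for _, y in interv) / len(interv))  # E[Y | do(X=1)] approx. 2
```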
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility [8.477619837043214]
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these explanatory dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
arXiv Detail & Related papers (2021-04-03T22:14:01Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure of temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
In doing so, it gives the language-ready brain the tools to manipulate abstract knowledge and plan temporally ordered information.
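This summary does not define "ordinal patterns" computationally. One common operationalisation, assumed here and borrowed from permutation-entropy analysis rather than from the paper, reduces each window of a temporal signal to the rank order of its values.

```python
def ordinal_pattern(window):
    # Rank-order "shape" of a window: positions sorted by their values,
    # discarding magnitudes and keeping only the serial order.
    return tuple(sorted(range(len(window)), key=lambda i: window[i]))

def ordinal_patterns(signal, order=3):
    # Slide a window over the temporal signal and reduce each window
    # to its ordinal pattern.
    return [ordinal_pattern(signal[i:i + order])
            for i in range(len(signal) - order + 1)]

seq = [0.1, 0.9, 0.4, 0.7, 0.2]
print(ordinal_patterns(seq))  # [(0, 2, 1), (1, 2, 0), (2, 0, 1)]
```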
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.