Explanatory models in neuroscience: Part 2 -- constraint-based
intelligibility
- URL: http://arxiv.org/abs/2104.01489v2
- Date: Wed, 14 Apr 2021 16:59:48 GMT
- Authors: Rosa Cao and Daniel Yamins
- Abstract summary: Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
- Score: 8.477619837043214
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computational modeling plays an increasingly important role in neuroscience,
highlighting the philosophical question of how computational models explain. In
the context of neural network models for neuroscience, concerns have been
raised about model intelligibility, and about how such models relate (if at all) to what is
found in the brain. We claim that what makes a system intelligible is an
understanding of the dependencies between its behavior and the factors that are
causally responsible for that behavior. In biological systems, many of these
dependencies are naturally "top-down": ethological imperatives interact with
evolutionary and developmental constraints under natural selection. We describe
how the optimization techniques used to construct NN models capture some key
aspects of these dependencies, and thus help explain why brain systems are as
they are -- because when a challenging ecologically relevant goal is shared by
a NN and the brain, it places tight constraints on the possible mechanisms
exhibited in both kinds of systems. By combining two familiar modes of
explanation -- one based on bottom-up mechanism (whose relation to neural
network models we address in a companion paper) and the other on top-down
constraints -- these models illuminate brain function.
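The constraint-based argument can be given a toy numerical illustration (a hypothetical sketch, not an example from the paper): when a network is optimized against a goal that no linear mechanism can satisfy, such as XOR, the shared objective tightly constrains what the trained mechanism can look like -- any solution must develop a nonlinear hidden representation.

```python
# Hypothetical illustration: a shared task objective constrains the mechanism.
# We optimize a tiny two-layer network on XOR, a goal no linear map can meet.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
y = np.array([[0.], [1.], [1.], [0.]])

# A small two-layer network: the "mechanism" the task will constrain.
W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)              # nonlinear hidden features
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))  # output probability
    return h, p

for step in range(5000):
    h, p = forward(X)
    # Cross-entropy gradient w.r.t. the output logits is simply (p - y).
    d_logit = (p - y) / len(X)
    d_h = (d_logit @ W2.T) * (1 - h ** 2)  # backpropagate through tanh
    W2 -= 0.5 * h.T @ d_logit; b2 -= 0.5 * d_logit.sum(axis=0)
    W1 -= 0.5 * X.T @ d_h;     b1 -= 0.5 * d_h.sum(axis=0)

_, p = forward(X)
print(np.round(p.ravel(), 2))  # predictions approach the XOR targets
```

The point of the sketch is that the optimization pressure, not the modeler's hand, selects the mechanism: any parameter setting that meets the goal must implement some nonlinear decomposition of XOR.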
Related papers
- Enhancing learning in artificial neural networks through cellular heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
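Among biologically-motivated alternatives to backpropagation, Hebbian-style rules are the most classical. A minimal sketch (illustrative only, not the survey's formulation) using Oja's stabilized variant, where weights grow when pre- and postsynaptic activity co-occur and a decay term keeps them bounded:

```python
# Hypothetical sketch of Hebbian learning with Oja's normalization.
# The rule is local: each update uses only pre- and postsynaptic activity.
import numpy as np

rng = np.random.default_rng(1)
w = rng.normal(0, 0.1, 3)             # synaptic weights
data = rng.normal(0, 1, (2000, 3))    # presynaptic activity samples
data[:, 0] *= 3.0                     # one input direction has high variance

eta = 0.01
for x in data:
    post = w @ x                       # postsynaptic activity
    w += eta * post * (x - post * w)   # Hebbian term plus Oja's decay

# Oja's rule converges toward the principal component of the inputs,
# so the weight vector aligns with the high-variance direction.
print(np.round(w, 2))
```

No error signal is transported backward here; the credit each synapse receives is purely local, which is exactly the property that makes such rules candidates for neurobiologically-plausible credit assignment.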
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Two-compartment neuronal spiking model expressing brain-state specific apical-amplification, -isolation and -drive regimes [0.7255608805275865]
Brain-state-specific neural mechanisms play a crucial role in integrating past and contextual knowledge with the current, incoming flow of evidence.
This work aims to provide a two-compartment spiking neuron model that incorporates features essential for supporting brain-state-specific learning.
arXiv Detail & Related papers (2023-11-10T14:16:46Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, an architecture that casts the Common Model of Cognition in terms of Hebbian learning and free energy minimization.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- From internal models toward metacognitive AI [0.0]
In the prefrontal cortex, a distributed executive network called the "cognitive reality monitoring network" orchestrates conscious involvement of generative-inverse model pairs.
A high responsibility signal is given to the pairs that best capture the external world.
Consciousness is determined by the entropy of responsibility signals across all pairs.
arXiv Detail & Related papers (2021-09-27T05:00:56Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- The Causal Neural Connection: Expressiveness, Learnability, and Inference [125.57815987218756]
An object called a structural causal model (SCM) represents a collection of mechanisms and sources of random variation of the system under investigation.
In this paper, we show that the causal hierarchy theorem (Thm. 1, Bareinboim et al., 2020) still holds for neural models.
We introduce a special type of SCM called a neural causal model (NCM), and formalize a new type of inductive bias to encode structural constraints necessary for performing causal inferences.
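For readers unfamiliar with SCMs, a minimal numerical toy (a hypothetical illustration with hand-written mechanisms, not the paper's neural causal models, which parameterize mechanisms with networks): each variable is produced by a mechanism plus exogenous noise, and an intervention do(X = x) replaces X's mechanism with a constant, so observational association and interventional effect can differ.

```python
# Hypothetical SCM  Z -> X -> Y  with confounding path  Z -> Y.
import numpy as np

rng = np.random.default_rng(2)

def sample(n, do_x=None):
    """Sample from the SCM; do_x replaces X's mechanism with a constant."""
    z = rng.normal(0, 1, n)                                   # confounder
    x = 2 * z + rng.normal(0, 1, n) if do_x is None else np.full(n, do_x)
    y = 3 * x + z + rng.normal(0, 1, n)                       # outcome
    return z, x, y

z, x, y = sample(200_000)
slope = np.cov(x, y)[0, 1] / np.var(x)    # observational association, ~3.4
_, _, y1 = sample(200_000, do_x=1.0)
_, _, y0 = sample(200_000, do_x=0.0)
effect = y1.mean() - y0.mean()            # interventional effect, ~3.0
print(round(slope, 1), round(effect, 1))
```

The regression slope (3.4) overstates the true causal effect (3.0) because Z confounds X and Y; recovering the latter requires the interventional, not the observational, distribution -- the gap the causal hierarchy theorem formalizes.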
arXiv Detail & Related papers (2021-07-02T01:55:18Z)
- Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.