Explanatory models in neuroscience: Part 1 -- taking mechanistic
abstraction seriously
- URL: http://arxiv.org/abs/2104.01490v2
- Date: Sat, 10 Apr 2021 23:39:21 GMT
- Title: Explanatory models in neuroscience: Part 1 -- taking mechanistic
abstraction seriously
- Authors: Rosa Cao and Daniel Yamins
- Abstract summary: Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
- Score: 8.477619837043214
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Despite the recent success of neural network models in mimicking animal
performance on visual perceptual tasks, critics worry that these models fail to
illuminate brain function. We take it that a central approach to explanation in
systems neuroscience is that of mechanistic modeling, where understanding the
system is taken to require fleshing out the parts, organization, and activities
of a system, and how those give rise to behaviors of interest. However, it
remains somewhat controversial what it means for a model to describe a
mechanism, and whether neural network models qualify as explanatory.
We argue that certain kinds of neural network models are actually good
examples of mechanistic models, when the right notion of mechanistic mapping is
deployed. Building on existing work on model-to-mechanism mapping (3M), we
describe criteria delineating such a notion, which we call 3M++. These criteria
require us, first, to identify a level of description that is abstract but
detailed enough to be "runnable", and then, to construct model-to-brain
mappings using the same principles as those employed for brain-to-brain mapping
across individuals. Perhaps surprisingly, the abstractions required are those
already in use in experimental neuroscience, and are of the kind deployed in
the construction of more familiar computational models, just as the principles
of inter-brain mappings are very much in the spirit of those already employed
in the collection and analysis of data across animals.
In a companion paper, we address the relationship between optimization and
intelligibility, in the context of functional evolutionary explanations. Taken
together, mechanistic interpretations of computational models and the
dependencies between form and function illuminated by optimization processes
can help us to understand why brain systems are built the way they are.
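One common way such model-to-brain mappings are constructed in practice is a regularized linear readout from model-layer activations to recorded neural responses, scored by how well it predicts each neuron. The sketch below uses synthetic data; all names, shapes, and the ridge penalty are illustrative assumptions, not details from the paper.

```python
import numpy as np

# Hypothetical setup: activations of one model layer and recorded neural
# responses to the same stimuli (sizes are illustrative).
rng = np.random.default_rng(0)
n_stimuli, n_units, n_neurons = 200, 50, 10
model_features = rng.normal(size=(n_stimuli, n_units))
# Synthetic "neural" data: a linear readout of the features plus noise.
true_map = rng.normal(size=(n_units, n_neurons))
neural_responses = (model_features @ true_map
                    + 0.1 * rng.normal(size=(n_stimuli, n_neurons)))

def fit_linear_map(X, Y, alpha=1.0):
    """Ridge-regression mapping from model features X to responses Y."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

W = fit_linear_map(model_features, neural_responses)
predicted = model_features @ W
# Per-neuron correlation between predicted and measured responses.
scores = [np.corrcoef(predicted[:, i], neural_responses[:, i])[0, 1]
          for i in range(n_neurons)]
print(f"median predictivity: {np.median(scores):.2f}")
```

The same recipe applies unchanged to brain-to-brain mapping: substitute one animal's recorded responses for the model features and fit the map to a second animal's responses.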
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Generative Models as a Complex Systems Science: How can we make sense of large language model behavior? [75.79305790453654]
Coaxing desired behaviors out of pretrained models, while avoiding undesirable ones, has redefined NLP.
We argue for a systematic effort to decompose language model behavior into categories that explain cross-task performance.
arXiv Detail & Related papers (2023-07-31T22:58:41Z)
- Mapping and Validating a Point Neuron Model on Intel's Neuromorphic Hardware Loihi [77.34726150561087]
We investigate the potential of Intel's fifth-generation neuromorphic chip, Loihi.
Loihi is based on Spiking Neural Networks (SNNs), which emulate neurons in the brain.
We find that Loihi replicates classical simulations very efficiently and scales notably well in terms of both time and energy performance as the networks get larger.
arXiv Detail & Related papers (2021-09-22T16:52:51Z)
- Meta-brain Models: biologically-inspired cognitive agents [0.0]
We propose a computational approach we call meta-brain models.
We will propose combinations of layers composed using specialized types of models.
We will conclude by proposing next steps in the development of this flexible and open-source approach.
arXiv Detail & Related papers (2021-08-31T05:20:53Z)
- Deep Reinforcement Learning Models Predict Visual Responses in the Brain: A Preliminary Result [1.0323063834827415]
We use reinforcement learning to train neural network models to play a 3D computer game.
We find that these reinforcement learning models yield better neural response predictions in the early visual areas.
In contrast, the supervised neural network models yield better neural response predictions in the higher visual areas.
arXiv Detail & Related papers (2021-06-18T13:10:06Z)
- Information theoretic analysis of computational models as a tool to understand the neural basis of behaviors [0.0]
One of the greatest research challenges of this century is to understand the neural basis for how behavior emerges in brain-body-environment systems.
Computational models provide an alternative framework within which one can study model systems.
I provide an introduction, a review and discussion to make a case for how information theoretic analysis of computational models is a potent research methodology.
arXiv Detail & Related papers (2021-06-02T02:08:18Z)
- Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility [8.477619837043214]
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
arXiv Detail & Related papers (2021-04-03T22:14:01Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
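The local, error-driven adjustment described above can be illustrated with a minimal numerical sketch. All variable names, sizes, and the learning rate are illustrative assumptions, not the paper's actual architecture:

```python
import numpy as np

# Minimal predictive-coding sketch: a set of artificial neurons predicts
# the activity of a neighboring population, and the prediction weights are
# adjusted locally in proportion to the prediction error.
rng = np.random.default_rng(1)
lower = rng.normal(size=4)            # "reality": neighboring activity
higher = np.array([0.5, -0.2, 0.8])   # latent activity doing the predicting
W = np.zeros((4, 3))                  # prediction weights, learned from error
lr = 0.05                             # local learning rate

for _ in range(500):
    prediction = W @ higher
    error = lower - prediction        # how well the prediction matched reality
    W += lr * np.outer(error, higher) # local, error-driven weight update

print(f"remaining prediction error: {np.linalg.norm(lower - W @ higher):.2e}")
```

Note that the update uses only quantities available at the connection itself (the local error and the presynaptic activity), in contrast to backpropagation's globally propagated gradients.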
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory has a very close relation with the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in brain and how they process information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
- Towards a Neural Model for Serial Order in Frontal Cortex: a Brain Theory from Memory Development to Higher-Level Cognition [53.816853325427424]
We propose that the immature prefrontal cortex (PFC) uses its primary functionality of detecting hierarchical patterns in temporal signals.
Our hypothesis is that the PFC detects the hierarchical structure in temporal sequences in the form of ordinal patterns and uses them to index information hierarchically in different parts of the brain.
By doing so, it gives the tools to the language-ready brain for manipulating abstract knowledge and planning temporally ordered information.
arXiv Detail & Related papers (2020-05-22T14:29:51Z)
This list is automatically generated from the titles and abstracts of the papers on this site.