Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility
- URL: http://arxiv.org/abs/2104.01489v2
- Date: Wed, 14 Apr 2021 16:59:48 GMT
- Title: Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility
- Authors: Rosa Cao and Daniel Yamins
- Abstract summary: Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
- Score: 8.477619837043214
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Computational modeling plays an increasingly important role in neuroscience,
highlighting the philosophical question of how computational models explain. In
the context of neural network models for neuroscience, concerns have been
raised about the intelligibility of these models, and how they relate (if at all) to what is
found in the brain. We claim that what makes a system intelligible is an
understanding of the dependencies between its behavior and the factors that are
causally responsible for that behavior. In biological systems, many of these
dependencies are naturally "top-down": ethological imperatives interact with
evolutionary and developmental constraints under natural selection. We describe
how the optimization techniques used to construct NN models capture some key
aspects of these dependencies, and thus help explain why brain systems are as
they are -- because when a challenging ecologically relevant goal is shared by
a NN and the brain, it places tight constraints on the possible mechanisms
exhibited in both kinds of systems. By combining two familiar modes of
explanation -- one based on bottom-up mechanism (whose relation to neural
network models we address in a companion paper) and the other on top-down
constraints -- these models illuminate brain function.
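To make the constraint-based reading concrete, the sketch below (plain NumPy, with an invented toy task standing in for an ecologically relevant goal; none of it is from the paper) shows the optimization logic at work: the shared goal and the architecture, rather than any hand-wired circuit, determine which parameters the trained network can end up with.

```python
import numpy as np

# Toy stand-in for an "ecologically relevant goal": map stimuli to
# behaviourally correct responses (here, a noisy XOR-of-signs task).
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 2))          # stimuli
y = (X[:, 0] * X[:, 1] > 0).astype(float)      # target behaviour

# A small two-layer network: the "model organism".
W1 = rng.normal(0, 0.5, size=(2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, size=(8, 1))
b2 = np.zeros(1)

def forward(X):
    h = np.tanh(X @ W1 + b1)                   # hidden "neural" responses
    p = 1 / (1 + np.exp(-(h @ W2 + b2)))       # behavioural output
    return h, p.ravel()

# Task-driven optimization: gradient descent on behavioural error.
lr = 0.5
for step in range(2000):
    h, p = forward(X)
    err = p - y                                # dL/dlogit for sigmoid + BCE
    dW2 = h.T @ err[:, None] / len(X)
    db2 = err.mean()
    dh = err[:, None] @ W2.T * (1 - h**2)      # backprop through tanh
    dW1 = X.T @ dh / len(X)
    db1 = dh.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2
    W1 -= lr * dW1; b1 -= lr * db1

_, p = forward(X)
print("task accuracy:", ((p > 0.5) == y).mean())
```

Re-running the loop from different random seeds yields different weights but similar task-shaped behavior, which is the sense in which a challenging shared goal "places tight constraints on the possible mechanisms".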
Related papers
- Formal Explanations for Neuro-Symbolic AI [28.358183683756028]
This paper proposes a formal approach to explaining the decisions of neuro-symbolic systems.
It first computes a formal explanation for the symbolic component of the system, which serves to identify a subset of the individual parts of neural information that needs to be explained.
This is followed by explaining only those individual neural inputs, independently of each other, which keeps the resulting hierarchical formal explanations succinct.
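As a rough illustration of the two-stage procedure summarized above, the following toy code (all names and data structures are invented, not the paper's formalism) first prunes to the neural facts a fired symbolic rule depends on, then explains each of those inputs independently:

```python
# Hypothetical two-stage explanation, following the summary above.

def explain_symbolic(rule, neural_facts):
    """Stage 1: which neural predictions does the fired rule depend on?"""
    return [f for f in rule["premises"] if f in neural_facts]

def explain_decision(rule, neural_facts, explain_neural_input):
    relevant = explain_symbolic(rule, neural_facts)   # prune to what matters
    # Stage 2: explain each relevant neural input independently.
    return {f: explain_neural_input(f) for f in relevant}

# Toy example: a rule "stop if red_light and moving" fired; the neural
# component predicted several facts, but only two support the decision.
rule = {"conclusion": "stop", "premises": ["red_light", "moving"]}
neural_facts = {"red_light": 0.97, "moving": 0.88, "pedestrian": 0.10}
print(explain_decision(rule, neural_facts,
                       lambda f: f"pixels supporting '{f}' (score {neural_facts[f]})"))
```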
arXiv Detail & Related papers (2024-10-18T07:08:31Z)
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic level of neural representation.
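The dynamical alternative referred to here builds on classic Kuramoto phase coupling; the snippet below simulates plain Kuramoto oscillators (not the AKOrN layer itself, and with illustrative constants) to show units that synchronize rather than threshold:

```python
import numpy as np

# Classic Kuramoto dynamics: each unit is a phase oscillator pulled
# toward its neighbours rather than a static threshold unit.
rng = np.random.default_rng(1)
n, K, dt = 32, 1.5, 0.05
omega = rng.normal(0, 0.5, n)          # intrinsic frequencies
theta = rng.uniform(0, 2 * np.pi, n)   # oscillator phases

for _ in range(400):
    # Coupling term: mean of sin(theta_j - theta_i) over all j.
    coupling = np.sin(theta[None, :] - theta[:, None]).mean(axis=1)
    theta += dt * (omega + K * coupling)

# Order parameter r in [0, 1]: 1 means fully synchronized "binding".
r = np.abs(np.exp(1j * theta).mean())
print(f"synchrony r = {r:.2f}")
```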
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Synergistic pathways of modulation enable robust task packing within neural dynamics [0.0]
We use recurrent network models to probe the distinctions between two forms of contextual modulation of neural dynamics.
We demonstrate a distinction between these mechanisms at the level of the neuronal dynamics they induce.
These characterizations indicate complementarity and synergy in how these mechanisms act, potentially over multiple time-scales.
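The summary does not name the two mechanisms, so the toy recurrent step below contrasts two generic candidates, additive context input versus multiplicative gain modulation, purely as an illustration of how contextual modulation can enter neural dynamics in structurally different ways:

```python
import numpy as np

# Two generic forms of contextual modulation of a recurrent step;
# whether these match the paper's two mechanisms is an assumption.
rng = np.random.default_rng(2)
n = 16
W = rng.normal(0, 1 / np.sqrt(n), (n, n))   # recurrent weights
x = rng.normal(0, 1, n)                     # network state

def step_input_context(x, ctx_vec):
    # Mechanism 1: context enters as an additive input current.
    return np.tanh(W @ x + ctx_vec)

def step_gain_context(x, gain):
    # Mechanism 2: context multiplicatively rescales each unit's gain,
    # reshaping the dynamics rather than just shifting them.
    return np.tanh(gain * (W @ x))

ctx = rng.normal(0, 0.5, n)
gain = 1.0 + 0.5 * rng.uniform(-1, 1, n)
print(step_input_context(x, ctx)[:4])
print(step_gain_context(x, gain)[:4])
```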
arXiv Detail & Related papers (2024-08-02T15:12:01Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with complexity.
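A minimal reading of the ingredients named here, heterogeneity plus neuromodulatory signaling, might look like the leaky integrate-and-fire population below (illustrative constants, not the paper's model):

```python
import numpy as np

# Leaky integrate-and-fire population with heterogeneous membrane time
# constants and a scalar "neuromodulatory" gain on the input drive.
rng = np.random.default_rng(3)
n, dt, T = 50, 1e-3, 200
tau = rng.uniform(5e-3, 50e-3, n)     # heterogeneous time constants (s)
v = np.zeros(n)
mod_gain = 1.2                        # neuromodulatory scaling of input
v_thresh, v_reset = 1.0, 0.0
spikes = np.zeros((T, n), dtype=bool)

for t in range(T):
    I = mod_gain * rng.normal(1.0, 0.5, n)   # noisy input current
    v += dt * (-v + I) / tau                 # leaky integration
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset                       # reset after spike

print("mean firing rate (Hz):", spikes.mean() / dt)
```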
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
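One well-known member of such brain-inspired families is feedback alignment, which replaces backpropagation's transposed weights with a fixed random feedback matrix; choosing it as the example here is ours, not the survey's:

```python
import numpy as np

# Feedback alignment: the error is sent backwards through a fixed
# random matrix B instead of the transposed forward weights W2.T
# that exact backpropagation would use.
rng = np.random.default_rng(4)
X = rng.normal(0, 1, (256, 10))
y = (X.sum(axis=1, keepdims=True) > 0).astype(float)   # toy task

W1 = rng.normal(0, 0.3, (10, 20))
W2 = rng.normal(0, 0.3, (20, 1))
B = rng.normal(0, 0.3, (1, 20))      # fixed random feedback, never trained

for _ in range(500):
    h = np.tanh(X @ W1)
    p = 1 / (1 + np.exp(-h @ W2))
    err = (p - y) / len(X)           # cross-entropy gradient at the logits
    dh = (err @ B) * (1 - h**2)      # feedback alignment, not err @ W2.T
    W2 -= 1.0 * (h.T @ err)
    W1 -= 1.0 * (X.T @ dh)

p = 1 / (1 + np.exp(-np.tanh(X @ W1) @ W2))
print("task accuracy:", ((p > 0.5) == (y > 0.5)).mean())
```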
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
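In the same spirit, though not the paper's actual CSDP rule, a contrastive Hebbian-style update can be written with purely local terms: strengthen pre/post correlations under a real sample and weaken them under a corrupted one:

```python
import numpy as np

# A generic contrastive, Hebbian-style local update: a conceptual
# sketch only, not the CSDP rule for spiking networks. No global
# backpropagated error is needed.
rng = np.random.default_rng(5)
n_in, n_out, lr = 20, 8, 0.05
W = rng.normal(0, 0.1, (n_in, n_out))

def act(x):
    return np.tanh(x @ W)

for _ in range(100):
    x_pos = rng.normal(1.0, 0.3, n_in)       # "real" (positive) sample
    x_neg = rng.permutation(x_pos)           # corrupted (negative) sample
    h_pos, h_neg = act(x_pos), act(x_neg)
    # Purely local: each synapse sees only its own pre/post activity.
    W += lr * (np.outer(x_pos, h_pos) - np.outer(x_neg, h_neg))

print("weight norm after learning:", np.linalg.norm(W))
```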
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Improving Coherence and Consistency in Neural Sequence Models with Dual-System, Neuro-Symbolic Reasoning [49.6928533575956]
We use neural inference to mediate between the neural System 1 and the logical System 2.
Results in robust story generation and grounded instruction-following show that this approach can increase the coherence and accuracy of neurally-based generations.
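A schematic of the dual-system loop described here, with everything reduced to toy stand-ins (the paper's neural mediator is a learned model, not a simple filter), is:

```python
# Toy dual-system generation: a fast "System 1" proposes continuations
# and a logical "System 2" checks them. All code here is invented.

def system1_propose(state):
    # Stand-in for a neural generator: enumerate candidate next steps.
    return [state + [w] for w in ("red", "green", "blue")]

def system2_consistent(story):
    # Stand-in for logical constraints, e.g. "no colour repeats".
    return len(story) == len(set(story))

def generate(state, steps):
    for _ in range(steps):
        # Mediation (simplified to filter-and-pick) keeps only the
        # candidates that System 2 accepts.
        candidates = [c for c in system1_propose(state) if system2_consistent(c)]
        if not candidates:
            break
        state = candidates[0]
    return state

print(generate([], 3))   # -> ['red', 'green', 'blue']
```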
arXiv Detail & Related papers (2021-07-06T17:59:49Z)
- Explanatory models in neuroscience: Part 1 -- taking mechanistic abstraction seriously [8.477619837043214]
Critics worry that neural network models fail to illuminate brain function.
We argue that certain kinds of neural network models are actually good examples of mechanistic models.
arXiv Detail & Related papers (2021-04-03T22:17:40Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how they process information.
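Activation diffusion belongs to the family of spreading-activation models; a minimal version (invented graph and constants) propagates activation from a cued node through weighted connections:

```python
import numpy as np

# Spreading-activation dynamics: activation leaks from each node to its
# neighbours over weighted connections. Graph and constants are invented.
rng = np.random.default_rng(6)
n = 10
W = rng.uniform(0, 1, (n, n)) * (rng.uniform(0, 1, (n, n)) < 0.3)
np.fill_diagonal(W, 0)
W /= W.sum(axis=0, keepdims=True) + 1e-9     # normalize each node's outflow
a = np.zeros(n)
a[0] = 1.0                                   # cue one concept node

decay, spread = 0.7, 0.25
for _ in range(20):
    a = decay * a + spread * (W @ a)         # diffuse activation

print("most active nodes:", np.argsort(a)[::-1][:3])
```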
arXiv Detail & Related papers (2020-11-27T01:34:08Z)