Disentanglement with Biological Constraints: A Theory of Functional Cell
Types
- URL: http://arxiv.org/abs/2210.01768v2
- Date: Fri, 31 Mar 2023 18:41:15 GMT
- Title: Disentanglement with Biological Constraints: A Theory of Functional Cell
Types
- Authors: James C.R. Whittington, Will Dorrell, Surya Ganguli, Timothy E.J.
Behrens
- Abstract summary: This work provides a mathematical understanding of why single neurons in the brain often represent single human-interpretable factors.
It also steps towards an understanding of how task structure shapes the structure of brain representations.
- Score: 20.929056085868613
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neurons in the brain are often finely tuned for specific task variables.
Moreover, such disentangled representations are highly sought after in machine
learning. Here we mathematically prove that simple biological constraints on
neurons, namely nonnegativity and energy efficiency in both activity and
weights, promote such sought-after disentangled representations by forcing neurons to become selective for single factors of task variation. We demonstrate that these constraints lead to disentanglement in a variety of tasks and
architectures, including variational autoencoders. We also use this theory to
explain why the brain partitions its cells into distinct cell types such as
grid and object-vector cells, and also explain when the brain instead entangles
representations in response to entangled task factors. Overall, this work
provides a mathematical understanding of why single neurons in the brain often
represent single human-interpretable factors, and steps towards an understanding of how task structure shapes the structure of brain representations.
Related papers
- Don't Cut Corners: Exact Conditions for Modularity in Biologically Inspired Representations [52.48094670415497]
We develop a theory of when biologically inspired representations modularise with respect to source variables (sources).
We derive necessary and sufficient conditions on a sample of sources that determine whether the neurons in an optimal biologically-inspired linear autoencoder modularise.
Our theory applies to any dataset, extending far beyond the case of statistical independence studied in previous work.
arXiv Detail & Related papers (2024-10-08T17:41:37Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- A Goal-Driven Approach to Systems Neuroscience [2.6451153531057985]
Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat the dense sensory information they receive to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
arXiv Detail & Related papers (2023-11-05T16:37:53Z)
- Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers [0.0]
This thesis is focused on the central idea that single neurons in the brain should be regarded as highly complex, temporally precise spatio-temporal pattern recognizers.
In chapter 2, we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns.
In chapter 3, we use a differentiable deep network model of a realistic cortical neuron as a tool to study the implications of the neuron's output.
arXiv Detail & Related papers (2023-09-26T17:32:08Z)
- Neuronal Cell Type Classification using Deep Learning [3.3517146652431378]
Recent developments in machine learning have provided advanced methods for classifying neurons.
This paper aims to provide a robust and explainable deep-learning framework to classify neurons based on their electrophysiological activity.
arXiv Detail & Related papers (2023-06-01T10:28:49Z)
- Emergent Modularity in Pre-trained Transformers [127.08792763817496]
We consider two main characteristics of modularity: functional specialization of neurons and function-based neuron grouping.
We study how modularity emerges during pre-training, and find that the modular structure is stabilized at the early stage.
This suggests that Transformers first construct the modular structure and then learn fine-grained neuron functions.
arXiv Detail & Related papers (2023-05-28T11:02:32Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Logical Information Cells I [10.411800812671952]
In this study we explore the spontaneous emergence of visible, intelligible reasoning in simple artificial networks.
We start with the reproduction of a DNN model of natural neurons in monkeys.
We then study slightly more complex tasks that a priori involve predicate logic.
arXiv Detail & Related papers (2021-08-10T15:31:26Z)
- Explanatory models in neuroscience: Part 2 -- constraint-based intelligibility [8.477619837043214]
Computational modeling plays an increasingly important role in neuroscience, highlighting the philosophical question of how models explain.
In biological systems, many of these dependencies are naturally "top-down".
We show how the optimization techniques used to construct NN models capture some key aspects of these dependencies.
arXiv Detail & Related papers (2021-04-03T22:14:01Z)
- The distribution of inhibitory neurons in the C. elegans connectome facilitates self-optimization of coordinated neural activity [78.15296214629433]
The nervous system of the nematode Caenorhabditis elegans exhibits remarkable complexity despite the worm's small size.
A general challenge is to better understand the relationship between neural organization and neural activity at the system level.
We implemented an abstract simulation model of the C. elegans connectome that approximates the neurotransmitter identity of each neuron.
arXiv Detail & Related papers (2020-10-28T23:11:37Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
arXiv Detail & Related papers (2020-06-24T20:37:05Z)