A Goal-Driven Approach to Systems Neuroscience
- URL: http://arxiv.org/abs/2311.02704v1
- Date: Sun, 5 Nov 2023 16:37:53 GMT
- Title: A Goal-Driven Approach to Systems Neuroscience
- Authors: Aran Nayebi
- Abstract summary: Humans and animals exhibit a range of interesting behaviors in dynamic environments.
It is unclear how our brains actively reformat this dense sensory information to enable these behaviors.
We offer a new definition of interpretability that we show has promise in yielding unified structural and functional models of neural circuits.
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Humans and animals exhibit a range of interesting behaviors in dynamic
environments, and it is unclear how our brains actively reformat this dense
sensory information to enable these behaviors. Experimental neuroscience is
undergoing a revolution in its ability to record and manipulate hundreds to
thousands of neurons while an animal is performing a complex behavior. As these
paradigms enable unprecedented access to the brain, a natural question that
arises is how to distill these data into interpretable insights about how
neural circuits give rise to intelligent behaviors. The classical approach in
systems neuroscience has been to ascribe well-defined operations to individual
neurons and provide a description of how these operations combine to produce a
circuit-level theory of neural computations. While this approach has had some
success for small-scale recordings with simple stimuli designed to probe a
particular circuit computation, it often ultimately leads to disparate
descriptions of the same system across stimuli. Perhaps more strikingly, many
response profiles of neurons are difficult to succinctly describe in words,
suggesting that new approaches are needed in light of these experimental
observations. In this thesis, we offer a different definition of
interpretability that we show has promise in yielding unified structural and
functional models of neural circuits, and describes the evolutionary
constraints that give rise to the response properties of the neural population,
including those that have previously been difficult to describe individually.
We demonstrate the utility of this framework across multiple brain areas and
species to study the roles of recurrent processing in the primate ventral
visual pathway; mouse visual processing; heterogeneity in rodent medial
entorhinal cortex; and facilitating biological learning.
Related papers
- Artificial Kuramoto Oscillatory Neurons [65.16453738828672]
We introduce Artificial Kuramoto Oscillatory Neurons (AKOrN) as a dynamical alternative to threshold units.
We show that this idea provides performance improvements across a wide spectrum of tasks.
We believe that these empirical results show the importance of our assumptions at the most basic neuronal level of neural representation.
arXiv Detail & Related papers (2024-10-17T17:47:54Z)
- Learn to integrate parts for whole through correlated neural variability [8.173681663544757]
Sensory perception originates from the responses of sensory neurons, which react to a collection of sensory signals linked to physical attributes of a singular perceptual object.
Unraveling how the brain extracts perceptual information from these neuronal responses is a pivotal challenge in both computational neuroscience and machine learning.
We introduce a statistical mechanical theory, where perceptual information is first encoded in the correlated variability of sensory neurons and then reformatted into the firing rates of downstream neurons.
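As a toy illustration of the encoding scheme described above (not the paper's actual theory; all variables and parameters below are invented for the sketch), two sensory neurons can carry a stimulus purely in their noise correlations, and a multiplicative downstream neuron can reformat that correlation into a mean firing rate:

```python
import numpy as np

rng = np.random.default_rng(0)

def sensory_responses(corr, n_trials=500):
    """Two sensory neurons with identical mean rates; only their
    trial-to-trial covariability depends on the stimulus."""
    cov = np.array([[1.0, corr], [corr, 1.0]])
    return rng.multivariate_normal([5.0, 5.0], cov, size=n_trials)

# Stimulus A induces positive noise correlation, stimulus B negative;
# mean rates are identical, so rates alone carry no information.
resp_a = sensory_responses(+0.8)
resp_b = sensory_responses(-0.8)

def downstream_rate(resp):
    """A downstream neuron that multiplies its two (mean-subtracted)
    inputs converts the hidden correlation into a mean output rate."""
    centered = resp - resp.mean(axis=0)
    return (centered[:, 0] * centered[:, 1]).mean()

rate_a = downstream_rate(resp_a)  # clearly positive
rate_b = downstream_rate(resp_b)  # clearly negative
```

The point of the sketch is only that information invisible in single-neuron rates can become visible in a downstream rate once correlated variability is read out nonlinearly.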
arXiv Detail & Related papers (2024-01-01T13:05:29Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
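As a concrete example of one such family, feedback alignment replaces backpropagation's transposed forward weights with fixed random feedback. The sketch below is a generic illustration with invented dimensions and data, not an implementation from the survey:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy regression task: learn y = M x with a two-layer network.
n_in, n_hid, n_out = 4, 16, 2
M = rng.normal(size=(n_out, n_in))
X = rng.normal(size=(256, n_in))
Y = X @ M.T

W1 = rng.normal(scale=0.3, size=(n_hid, n_in))   # forward weights, layer 1
W2 = rng.normal(scale=0.3, size=(n_out, n_hid))  # forward weights, layer 2
B = rng.normal(scale=0.3, size=(n_hid, n_out))   # fixed random feedback

def loss():
    return float(((np.tanh(X @ W1.T) @ W2.T - Y) ** 2).mean())

lr, initial = 0.02, loss()
for _ in range(300):
    H = np.tanh(X @ W1.T)          # hidden activity
    E = H @ W2.T - Y               # output error
    dW2 = E.T @ H / len(X)
    # Feedback alignment: route the error through the fixed random
    # matrix B instead of the transpose of the forward weights W2.
    dH = (E @ B.T) * (1.0 - H ** 2)
    dW1 = dH.T @ X / len(X)
    W2 -= lr * dW2
    W1 -= lr * dW1
final = loss()  # well below `initial` despite the asymmetric feedback
```

This avoids the "weight transport" problem (the biologically implausible requirement that feedback connections exactly mirror feedforward ones) that makes exact backpropagation hard to map onto neural circuits.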
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Single Biological Neurons as Temporally Precise Spatio-Temporal Pattern Recognizers [0.0]
This thesis is focused on the central idea that single neurons in the brain should be regarded as temporally precise and highly complex spatio-temporal pattern recognizers.
In chapter 2 we demonstrate that single neurons can generate temporally precise output patterns in response to specific spatio-temporal input patterns.
In chapter 3, we use a differentiable deep network model of a realistic cortical neuron as a tool to approximate the implications of the neuron's output.
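The idea of a single neuron acting as a temporally precise pattern recognizer can be sketched with a hypothetical coincidence detector whose synaptic conduction delays mirror a target spike sequence (all parameters invented for illustration, unrelated to the thesis's biophysical models):

```python
import numpy as np

# A point neuron whose four synapses have different conduction delays
# (in ms). A spike sequence that mirrors the delays arrives at the
# soma in synchrony and crosses threshold; other orderings do not.
delays = np.array([6, 4, 2, 0])
threshold = 3.5

def somatic_peak(spike_times):
    """Count synaptic arrivals per 1-ms bin; return the peak drive."""
    arrivals = (spike_times + delays).astype(int)
    return np.bincount(arrivals, minlength=20).max()

pattern = np.array([0, 2, 4, 6])   # matches the delay structure
shuffled = np.array([6, 2, 0, 4])  # same spikes, wrong temporal order

recognized = somatic_peak(pattern) > threshold  # all arrivals coincide at t=6
ignored = somatic_peak(shuffled) > threshold    # arrivals dispersed in time
```

The same total input drives the neuron above threshold only when its precise temporal order matches the delay structure, which is the sense in which the neuron "recognizes" a temporal pattern.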
arXiv Detail & Related papers (2023-09-26T17:32:08Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2022-12-08T15:16:04Z) was misplaced here; correct entry follows.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
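The core cycle-consistency idea can be sketched with linear maps standing in for the GAN generators. This is a hedged toy, not the paper's pipeline: paired synthetic data and least squares stand in for the adversarial domain-matching objective, and all shapes are invented:

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for population activity: "post-learning" activity
# is an unknown linear transform of "pre-learning" activity.
A_true = rng.normal(size=(3, 3))
pre = rng.normal(size=(200, 3))
post = pre @ A_true

# "Generator" G: pre -> post. Least squares on paired data replaces
# CycleGAN's adversarial domain-matching term in this sketch.
G, *_ = np.linalg.lstsq(pre, post, rcond=None)

# "Generator" F: post -> pre, fit via the cycle-consistency requirement
# that the round trip pre -> post -> pre reconstructs the input.
F, *_ = np.linalg.lstsq(pre @ G, pre, rcond=None)

roundtrip_error = float(((pre @ G @ F - pre) ** 2).mean())  # ~0
```

In the unpaired setting CycleGAN actually addresses, the cycle-consistency term is what keeps the two adversarially trained mappings mutually invertible rather than collapsing to arbitrary domain translations.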
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- A Neural Dynamic Model based on Activation Diffusion and a Micro-Explanation for Cognitive Operations [4.416484585765028]
The neural mechanism of memory is closely related to the problem of representation in artificial intelligence.
A computational model was proposed to simulate the network of neurons in the brain and how it processes information.
arXiv Detail & Related papers (2020-11-27T01:34:08Z)
- Compositional Explanations of Neurons [52.71742655312625]
We describe a procedure for explaining neurons in deep representations by identifying compositional logical concepts.
We use this procedure to answer several questions on interpretability in models for vision and natural language processing.
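A minimal version of such a procedure scores candidate logical formulas over concept annotations by intersection-over-union with the neuron's activation mask. The concepts, data, and candidate set below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical binary concept annotations over 1000 inputs.
n = 1000
water = rng.random(n) < 0.3
blue = rng.random(n) < 0.4
neuron_fires = water & ~blue   # the neuron's binarized activation mask

def iou(a, b):
    """Intersection-over-union between two binary masks."""
    return (a & b).sum() / (a | b).sum()

# Score primitive concepts and simple logical compositions; the best
# scoring formula serves as the neuron's compositional explanation.
candidates = {
    "water": water,
    "blue": blue,
    "water AND blue": water & blue,
    "water OR blue": water | blue,
    "water AND NOT blue": water & ~blue,
}
explanation = max(candidates, key=lambda name: iou(neuron_fires, candidates[name]))
```

In practice the search is over compositions built incrementally from a large concept vocabulary rather than a small fixed candidate dictionary, but the IoU-based scoring is the same.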
arXiv Detail & Related papers (2020-06-24T20:37:05Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.