Information theoretic analysis of computational models as a tool to
understand the neural basis of behaviors
- URL: http://arxiv.org/abs/2106.05186v1
- Date: Wed, 2 Jun 2021 02:08:18 GMT
- Title: Information theoretic analysis of computational models as a tool to
understand the neural basis of behaviors
- Authors: Madhavun Candadai
- Abstract summary: One of the greatest research challenges of this century is to understand the neural basis for how behavior emerges in brain-body-environment systems.
Computational models provide an alternative framework within which one can study model systems.
I provide an introduction, review, and discussion to make the case that information theoretic analysis of computational models is a potent research methodology.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: One of the greatest research challenges of this century is to understand the
neural basis for how behavior emerges in brain-body-environment systems. To
this end, research has flourished along several directions but has
predominantly focused on the brain. While there is increasing acceptance
of, and focus on, including the body and environment in studying the neural
basis of behavior, animal researchers are often limited by technology or tools.
Computational models provide an alternative framework within which one can
study model systems where ground-truth can be measured and interfered with.
These models act as a hypothesis generation framework that would in turn guide
experimentation. Furthermore, the ability to intervene as we please allows us
to conduct in-depth analyses of these models in a way that cannot be performed
in natural systems. For this purpose, information theory is emerging as a
powerful tool that can provide insights into the operation of these
brain-body-environment models. In this work, I provide an introduction,
review, and discussion to make the case that information theoretic analysis of
computational models is a potent research methodology to help us better
understand the neural basis of behavior.
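To make the methodology concrete, here is a minimal sketch (not from the paper; the variable names and the histogram estimator are illustrative assumptions) of how one might estimate the mutual information between a simulated neuron's activity and a behavioral variable in a model agent, where both signals are fully measurable:

```python
# Minimal sketch: histogram-based mutual information (in bits) between a
# simulated neural variable and a behavioral variable. All names here are
# hypothetical; real analyses often use more careful estimators.
import numpy as np

def mutual_information(x, y, bins=8):
    """Estimate I(X; Y) in bits from two 1-D sample arrays via a 2-D histogram."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()                # joint distribution p(x, y)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y), shape (1, bins)
    nz = pxy > 0                             # skip zero cells to avoid log(0)
    return float((pxy[nz] * np.log2(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(0)
neuron = rng.normal(size=5000)                    # simulated neural activity
behavior = neuron + 0.1 * rng.normal(size=5000)   # behavior driven by the neuron
noise = rng.normal(size=5000)                     # an unrelated signal

# The behavior-coupled pair carries far more information than the unrelated pair.
assert mutual_information(neuron, behavior) > mutual_information(neuron, noise)
```

In a natural system, neither the ground-truth neural state nor such exhaustive sampling is available, which is exactly the advantage of applying this kind of analysis to computational models.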
Related papers
- Deep Latent Variable Modeling of Physiological Signals [0.8702432681310401]
We explore high-dimensional problems related to physiological monitoring using latent variable models.
First, we present a novel deep state-space model to generate electrical waveforms of the heart using optically obtained signals as inputs.
Second, we present a brain signal modeling scheme that combines the strengths of probabilistic graphical models and deep adversarial learning.
Third, we propose a framework for the joint modeling of physiological measures and behavior.
arXiv Detail & Related papers (2024-05-29T17:07:33Z)
- A Review of Neuroscience-Inspired Machine Learning [58.72729525961739]
Bio-plausible credit assignment is compatible with practically any learning condition and is energy-efficient.
In this paper, we survey several vital algorithms that model bio-plausible rules of credit assignment in artificial neural networks.
We conclude by discussing the future challenges that will need to be addressed in order to make such algorithms more useful in practical applications.
arXiv Detail & Related papers (2024-02-16T18:05:09Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Causal machine learning for single-cell genomics [94.28105176231739]
We discuss the application of machine learning techniques to single-cell genomics and their challenges.
We first present the model that underlies most of current causal approaches to single-cell biology.
We then identify open problems in the application of causal approaches to single-cell data.
arXiv Detail & Related papers (2023-10-23T13:35:24Z)
- A Neuro-mimetic Realization of the Common Model of Cognition via Hebbian Learning and Free Energy Minimization [55.11642177631929]
Large neural generative models are capable of synthesizing semantically rich passages of text or producing complex images.
We discuss the COGnitive Neural GENerative system, one such architecture that realizes the Common Model of Cognition.
arXiv Detail & Related papers (2023-10-14T23:28:48Z)
- Predictive Coding and Stochastic Resonance: Towards a Unified Theory of Auditory (Phantom) Perception [6.416574036611064]
To gain a mechanistic understanding of brain function, hypothesis driven experiments should be accompanied by biologically plausible computational models.
With a special focus on tinnitus, we review recent work at the intersection of artificial intelligence, psychology, and neuroscience.
We conclude that two fundamental processing principles - being ubiquitous in the brain - best fit a vast number of experimental results.
arXiv Detail & Related papers (2022-04-07T10:47:58Z)
- Spatiotemporal Patterns in Neurobiology: An Overview for Future Artificial Intelligence [0.0]
We argue that computational models are key tools for elucidating possible functionalities that emerge from network interactions.
Here we review several classes of models, including spiking neuron and integrate-and-fire neuron models.
We hope these studies will inform future developments in artificial intelligence algorithms as well as help validate our understanding of brain processes.
arXiv Detail & Related papers (2022-03-29T10:28:01Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- On the Evolution of Neuron Communities in a Deep Learning Architecture [0.7106986689736827]
This paper examines the neuron activation patterns of deep learning-based classification models.
We show that both the community quality (modularity) and entropy are closely related to the deep learning models' performances.
arXiv Detail & Related papers (2021-06-08T21:09:55Z)
- The Neural Coding Framework for Learning Generative Models [91.0357317238509]
We propose a novel neural generative model inspired by the theory of predictive processing in the brain.
In a similar way, artificial neurons in our generative model predict what neighboring neurons will do, and adjust their parameters based on how well the predictions matched reality.
arXiv Detail & Related papers (2020-12-07T01:20:38Z)
- Understanding Information Processing in Human Brain by Interpreting Machine Learning Models [1.14219428942199]
The thesis explores the role machine learning methods play in creating intuitive computational models of neural processing.
This perspective makes the case for the larger role that an exploratory, data-driven approach to computational neuroscience could play.
arXiv Detail & Related papers (2020-10-17T04:37:26Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.