Understanding Information Processing in Human Brain by Interpreting
Machine Learning Models
- URL: http://arxiv.org/abs/2010.08715v1
- Date: Sat, 17 Oct 2020 04:37:26 GMT
- Authors: Ilya Kuzovkin
- Abstract summary: The thesis explores the role machine learning methods play in creating intuitive computational models of neural processing.
This perspective makes the case for the larger role that an exploratory, data-driven approach to computational neuroscience could play.
- Score: 1.14219428942199
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The thesis explores the role machine learning methods play in creating
intuitive computational models of neural processing. Combined with
interpretability techniques, machine learning could replace the human modeler and
shift the focus of human effort to extracting knowledge from the ready-made
models and articulating that knowledge into intuitive descriptions of reality.
This perspective makes the case for the larger role that an exploratory,
data-driven approach to computational neuroscience could play while
coexisting alongside the traditional hypothesis-driven approach.
We exemplify the proposed approach in the context of the knowledge
representation taxonomy with three research projects that employ
interpretability techniques on top of machine learning methods at three
different levels of neural organization. The first study (Chapter 3) explores
feature importance analysis of a random forest decoder trained on intracerebral
recordings from 100 human subjects to identify spectrotemporal signatures that
characterize local neural activity during the task of visual categorization.
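The feature-importance analysis described for the first study can be sketched in a few lines. This is an illustrative stand-in only: synthetic data and scikit-learn's random forest substitute for the actual intracerebral recordings and decoder, and the band/window layout is invented for the example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-in for intracerebral recordings: each trial is a
# flattened spectrotemporal feature vector (frequency bands x time windows).
n_trials, n_bands, n_windows = 200, 8, 10
X = rng.normal(size=(n_trials, n_bands * n_windows))
y = rng.integers(0, 2, size=n_trials)  # two visual categories
X[y == 1, 12] += 2.0                   # inject one discriminative feature (band 1, window 2)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Impurity-based importances, reshaped onto the (band, window) grid
# to expose the spectrotemporal signature the decoder relies on.
importance_map = clf.feature_importances_.reshape(n_bands, n_windows)
band, window = np.unravel_index(importance_map.argmax(), importance_map.shape)
print(band, window)  # recovers the injected feature: 1 2
```

Reshaping the importance vector back onto the band-by-window grid is what turns a black-box decoder into a spectrotemporal map a neuroscientist can inspect.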
The second study (Chapter 4) employs representation similarity analysis to
compare the neural responses of the areas along the ventral stream with the
activations of the layers of a deep convolutional neural network. The third
study (Chapter 5) proposes a method that allows test subjects to visually
explore the state representation of their neural signal in real time. This is
achieved with a topology-preserving dimensionality reduction technique that
transforms the neural data from the multidimensional representation used by the
computer into a two-dimensional representation a human can grasp.
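The abstract does not name the reduction technique used in the third study; as an illustrative sketch, a neighborhood-preserving method such as t-SNE (via scikit-learn) can play that role on synthetic "neural" features.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-in for multichannel neural features: two latent brain states,
# each forming a cluster in a 64-dimensional feature space.
state_a = rng.normal(loc=0.0, size=(100, 64))
state_b = rng.normal(loc=3.0, size=(100, 64))
X = np.vstack([state_a, state_b])

# Neighborhood-preserving embedding into two dimensions that a subject
# could inspect on a screen; the two states land in separate regions.
embedding = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(X)
print(embedding.shape)  # (200, 2)
```

Note that t-SNE has no out-of-sample transform, so a genuinely real-time system as described in the thesis would need a parametric or incremental embedding; the sketch only shows the offline mapping step.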
The approach, the taxonomy, and the examples present a strong case for the
applicability of machine learning methods to automatic knowledge discovery in
neuroscience.
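The comparison in the second study (Chapter 4) follows the standard representational similarity analysis recipe: build a representational dissimilarity matrix (RDM) per system over the same stimuli, then correlate the RDMs. A minimal sketch with entirely synthetic responses (the dimensions, noise levels, and linear-mixing model are assumptions of the example, not of the thesis):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Hypothetical responses of one brain area and one network layer to the
# same 20 stimuli; the two systems may have different dimensionality.
stimuli = rng.normal(size=(20, 5))
brain = stimuli @ rng.normal(size=(5, 100)) + 0.1 * rng.normal(size=(20, 100))
layer = stimuli @ rng.normal(size=(5, 300)) + 0.1 * rng.normal(size=(20, 300))

# Representational dissimilarity matrices: pairwise correlation
# distances between stimulus responses within each system.
rdm_brain = pdist(brain, metric="correlation")
rdm_layer = pdist(layer, metric="correlation")

# RSA score: rank correlation between the two RDMs.
rho, _ = spearmanr(rdm_brain, rdm_layer)
print(round(rho, 2))  # strongly positive: both RDMs reflect the same stimulus geometry
```

Because RSA compares distance structure rather than raw activations, it sidesteps the mismatch in dimensionality between neural populations and network layers.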
Related papers
- Neural timescales from a computational perspective [5.390514665166601]
Timescales of neural activity are diverse across and within brain areas, and experimental observations suggest that neural timescales reflect information in dynamic environments.
Here, we take a complementary perspective and synthesize three directions where computational methods can distill the broad set of empirical observations into quantitative and testable theories.
arXiv Detail & Related papers (2024-09-04T13:16:20Z)
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
- Interpretable statistical representations of neural population dynamics and geometry [4.459704414303749]
We introduce a representation learning method, MARBLE, that decomposes on-manifold dynamics into local flow fields and maps them into a common latent space.
In simulated non-linear dynamical systems, recurrent neural networks, and experimental single-neuron recordings from primates and rodents, we discover emergent low-dimensional latent representations.
These representations are consistent across neural networks and animals, enabling the robust comparison of cognitive computations.
arXiv Detail & Related papers (2023-04-06T21:11:04Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Interpretability of Neural Network With Physiological Mechanisms [5.1971653175509145]
Deep learning remains a powerful state-of-the-art technique that has achieved extraordinary accuracy in various regression and classification tasks.
The original goal of proposing the neural network model was to improve the understanding of the complex human brain through a mathematical expression approach.
Recent deep learning techniques lose the interpretability of their functional processes by being treated mostly as black-box approximators.
arXiv Detail & Related papers (2022-03-24T21:40:04Z)
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
- Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z)
- Neuronal Learning Analysis using Cycle-Consistent Adversarial Networks [4.874780144224057]
We use a variant of deep generative models, CycleGAN, to learn the unknown mapping between pre- and post-learning neural activities.
We develop an end-to-end pipeline to preprocess, train and evaluate calcium fluorescence signals, and a procedure to interpret the resulting deep learning models.
arXiv Detail & Related papers (2021-11-25T13:24:19Z)
- Neural Fields in Visual Computing and Beyond [54.950885364735804]
Recent advances in machine learning have created increasing interest in solving visual computing problems using coordinate-based neural networks.
Neural fields have seen successful application in the synthesis of 3D shapes and images, the animation of human bodies, 3D reconstruction, and pose estimation.
This report provides context, mathematical grounding, and an extensive review of literature on neural fields.
arXiv Detail & Related papers (2021-11-22T18:57:51Z)
- Information theoretic analysis of computational models as a tool to understand the neural basis of behaviors [0.0]
One of the greatest research challenges of this century is to understand the neural basis for how behavior emerges in brain-body-environment systems.
Computational models provide an alternative framework within which one can study model systems.
I provide an introduction, a review, and a discussion to make the case that information theoretic analysis of computational models is a potent research methodology.
arXiv Detail & Related papers (2021-06-02T02:08:18Z)
- A Developmental Neuro-Robotics Approach for Boosting the Recognition of Handwritten Digits [91.3755431537592]
Recent evidence shows that simulating children's embodied strategies can improve machine intelligence as well.
This article explores the application of embodied strategies to convolutional neural network models in the context of developmental neuro-robotics.
arXiv Detail & Related papers (2020-03-23T14:55:00Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.