Probing Biological and Artificial Neural Networks with Task-dependent
Neural Manifolds
- URL: http://arxiv.org/abs/2312.14285v1
- Date: Thu, 21 Dec 2023 20:40:51 GMT
- Title: Probing Biological and Artificial Neural Networks with Task-dependent
Neural Manifolds
- Authors: Michael Kuoch, Chi-Ning Chou, Nikhil Parthasarathy, Joel Dapello,
James J. DiCarlo, Haim Sompolinsky, SueYeon Chung
- Abstract summary: We investigate the internal mechanisms of neural networks through the lens of neural population geometry.
We quantitatively characterize how different learning objectives lead to differences in the organizational strategies of these models.
These analyses present a strong direction for bridging mechanistic and normative theories in neural networks through neural population geometry.
- Score: 12.037840490243603
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Recently, growth in our understanding of the computations performed in both
biological and artificial neural networks has largely been driven by either
low-level mechanistic studies or global normative approaches. However, concrete
methodologies for bridging the gap between these levels of abstraction remain
elusive. In this work, we investigate the internal mechanisms of neural
networks through the lens of neural population geometry, aiming to provide
understanding at an intermediate level of abstraction, as a way to bridge that
gap. Utilizing manifold capacity theory (MCT) from statistical physics and
manifold alignment analysis (MAA) from high-dimensional statistics, we probe
the underlying organization of task-dependent manifolds in deep neural networks
and macaque neural recordings. Specifically, we quantitatively characterize how
different learning objectives lead to differences in the organizational
strategies of these models and demonstrate how these geometric analyses are
connected to the decodability of task-relevant information. These analyses
present a strong direction for bridging mechanistic and normative theories in
neural networks through neural population geometry, potentially opening up many
future research avenues in both machine learning and neuroscience.
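
As a hedged illustration of the manifold capacity idea: the sketch below estimates, by simulation, the fraction of random label assignments (dichotomies) over point-cloud "manifolds" that a perceptron can linearly separate; capacity is roughly the load P/N at which this fraction collapses. This is a toy numerical stand-in, not the replica-theoretic MCT estimator the paper uses, and the perceptron's finite update budget makes the check approximate.

```python
import numpy as np

def separable(manifolds, labels):
    """Check whether a perceptron can separate every point of every
    manifold according to its manifold's label (heuristic: finite budget)."""
    X = np.concatenate(manifolds)                       # (P*M, N) all points
    y = np.repeat(labels, [len(m) for m in manifolds])  # one label per point
    w = np.zeros(X.shape[1])
    for _ in range(2000):                               # perceptron updates
        bad = np.where(y * (X @ w) <= 0)[0]
        if len(bad) == 0:
            return True
        i = np.random.choice(bad)
        w += y[i] * X[i]
    return False

def fraction_separable(P, N, M=5, trials=20, seed=0):
    """Fraction of random dichotomies of P point-cloud manifolds
    (M points each, ambient dimension N) that are linearly separable."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(trials):
        centers = rng.standard_normal((P, N))
        manifolds = [c + 0.2 * rng.standard_normal((M, N)) for c in centers]
        labels = rng.choice([-1, 1], size=P)
        hits += separable(manifolds, labels)
    return hits / trials

# Capacity ~ the load P/N where separability drops from ~1 to ~0.
for P in [10, 20, 40, 80]:
    print(f"P/N = {P/20:.1f}: separable fraction = {fraction_separable(P, 20):.2f}")
```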
Related papers
- Statistical tuning of artificial neural network [0.0]
This study introduces methods to enhance the understanding of neural networks, focusing specifically on models with a single hidden layer.
We propose statistical tests to assess the significance of input neurons and introduce algorithms for dimensionality reduction.
This research advances the field of Explainable Artificial Intelligence by presenting robust statistical frameworks for interpreting neural networks.
arXiv Detail & Related papers (2024-09-24T19:47:03Z)
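
The entry above proposes statistical tests for input-neuron relevance; the paper's specific tests are not reproduced here. As a generic stand-in, the sketch below scores the inputs of a single-hidden-layer network by permutation importance: shuffle one input column and measure the accuracy drop. The network weights are hand-built for the demo.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy single-hidden-layer network (assumed architecture, for illustration).
def forward(X, W1, b1, W2, b2):
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2) > 0          # binary prediction

# Synthetic task: only the first 3 of 10 inputs carry signal.
X = rng.standard_normal((500, 10))
y = (X[:, 0] + X[:, 1] - X[:, 2]) > 0

# Stand-in for trained weights: hand-built so the model uses inputs 0-2.
W1 = np.zeros((10, 4)); W1[0, 0] = W1[1, 1] = 1.0; W1[2, 2] = -1.0
b1 = np.zeros(4)
W2 = np.array([1.0, 1.0, 1.0, 0.0]); b2 = 0.0

base_acc = (forward(X, W1, b1, W2, b2) == y).mean()
for j in range(10):
    drops = []
    for _ in range(50):                       # permutation replicates
        Xp = X.copy()
        Xp[:, j] = rng.permutation(Xp[:, j])  # break input j's link to y
        drops.append(base_acc - (forward(Xp, W1, b1, W2, b2) == y).mean())
    print(f"input {j}: mean accuracy drop = {np.mean(drops):+.3f}")
```

Inputs whose shuffling leaves accuracy unchanged (here, inputs 3 through 9) are candidates for pruning.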
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models with task-driven SNNs, balancing bioinspiration and computational complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
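
One ingredient the entry above highlights is neuronal heterogeneity. The sketch below is a generic illustration, not the paper's framework: a population of leaky integrate-and-fire (LIF) neurons whose membrane time constants are drawn per neuron, so an identical drive yields diverse firing rates.

```python
import numpy as np

rng = np.random.default_rng(2)

N, T, dt = 50, 500, 1.0          # neurons, timesteps, ms per step
tau = rng.uniform(5.0, 50.0, N)  # heterogeneous membrane time constants (ms)
v_th, v_reset = 1.0, 0.0
v = np.zeros(N)
spikes = np.zeros((T, N), dtype=bool)

for t in range(T):
    I = 1.2 + 0.3 * np.sin(2 * np.pi * t / 100)   # shared sinusoidal drive
    v += dt / tau * (-v + I)                      # leaky integration
    fired = v >= v_th
    spikes[t] = fired
    v[fired] = v_reset                            # reset after a spike

# Heterogeneous tau -> diverse firing rates from identical input.
rates = spikes.mean(axis=0) * 1000 / dt           # spikes/s
print(f"firing rates span {rates.min():.0f} to {rates.max():.0f} Hz")
```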
- Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z)
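
One concrete member of the brain-inspired families such a survey covers is feedback alignment, which replaces the transposed forward weights of backpropagation with fixed random feedback matrices. The sketch below shows it on a toy two-layer regression network; the architecture and hyperparameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

# Two-layer network trained on a toy regression task.
n_in, n_hid, n_out = 5, 20, 1
W1 = 0.1 * rng.standard_normal((n_in, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_out))
B = rng.standard_normal((n_out, n_hid))   # fixed random feedback (replaces W2.T)

X = rng.standard_normal((200, n_in))
y = np.sin(X[:, :1])                       # target depends on the first input

lr = 0.05
for epoch in range(200):
    h = np.tanh(X @ W1)
    out = h @ W2
    err = out - y                          # dLoss/dout for squared error
    # Feedback alignment: propagate the error through B, not W2.T.
    delta_h = (err @ B) * (1 - h**2)
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)

print(f"final MSE: {np.mean((np.tanh(X @ W1) @ W2 - y) ** 2):.4f}")
```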
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
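
Below is a simplified reading of the per-layer neural persistence construction the entry builds on: 0-dimensional persistent homology of a layer's bipartite weight graph under a superlevel filtration on normalized absolute weights. The paper above extends the filtration to whole networks; the union-find implementation and the dense-vs-sparse comparison here are illustrative assumptions.

```python
import numpy as np

def neural_persistence(W, p=2):
    """Simplified per-layer neural persistence: 0-dim persistence pairs
    arise when an edge (sorted by decreasing normalized |weight|) merges
    two components of the bipartite layer graph."""
    n_in, n_out = W.shape
    w = np.abs(W) / np.abs(W).max()            # normalize weights to [0, 1]
    edges = sorted(
        ((w[i, j], i, n_in + j) for i in range(n_in) for j in range(n_out)),
        reverse=True,
    )
    parent = list(range(n_in + n_out))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]      # path halving
            x = parent[x]
        return x
    pers = []
    for val, u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:                           # edge merges two components:
            parent[ru] = rv                    # one 0-dim class dies here,
            pers.append(1.0 - val)             # persistence = birth(1) - death
    return np.linalg.norm(pers, ord=p)

rng = np.random.default_rng(4)
dense = rng.standard_normal((16, 16))
sparse = dense * (rng.random((16, 16)) < 0.2)  # concentrated large weights
print(f"dense layer NP:  {neural_persistence(dense):.3f}")
print(f"sparse layer NP: {neural_persistence(sparse):.3f}")
```

The comparison illustrates the sensitivity noted above: concentrating weight mass on a few large entries changes the measure markedly.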
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
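
The paper's specific plasticity rule is not reproduced here. As a loose illustration of the family it belongs to, the sketch below applies a layer-local, contrastive, Hebbian-style update that pulls responses to two views of the same input together and pushes responses to an unrelated input apart, using only pre- and post-synaptic quantities (a rate-based stand-in for the spiking setting).

```python
import numpy as np

rng = np.random.default_rng(5)
n_in, n_out, lr = 20, 10, 0.05
W = 0.1 * rng.standard_normal((n_in, n_out))

def contrastive_gap(W, n=200):
    """Mean (positive-pair similarity - negative-pair similarity)."""
    r = np.random.default_rng(99)             # fixed evaluation set
    gap = 0.0
    for _ in range(n):
        x = r.standard_normal(n_in)
        h = np.tanh(x @ W)
        h_pos = np.tanh((x + 0.1 * r.standard_normal(n_in)) @ W)
        h_neg = np.tanh(r.standard_normal(n_in) @ W)
        gap += (h @ h_pos - h @ h_neg) / n
    return gap

print(f"gap before: {contrastive_gap(W):.3f}")
for _ in range(1000):
    x = rng.standard_normal(n_in)
    h = np.tanh(x @ W)
    h_pos = np.tanh((x + 0.1 * rng.standard_normal(n_in)) @ W)  # positive view
    h_neg = np.tanh(rng.standard_normal(n_in) @ W)              # negative
    # Local update: ascend h.(h_pos - h_neg); every factor is pre- or
    # post-synaptic activity available at the layer itself.
    W += lr * np.outer(x, (1 - h**2) * (h_pos - h_neg))
print(f"gap after:  {contrastive_gap(W):.3f}")
```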
- Spatiotemporal Patterns in Neurobiology: An Overview for Future Artificial Intelligence [0.0]
We argue that computational models are key tools for elucidating possible functionalities that emerge from network interactions.
Here we review several classes of models, including spiking and integrate-and-fire neurons.
We hope these studies will inform future developments in artificial intelligence algorithms as well as help validate our understanding of brain processes.
arXiv Detail & Related papers (2022-03-29T10:28:01Z)
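
As a minimal example of the spatiotemporal patterns such models produce (a generic toy, not a model from the review), the sketch below wires integrate-and-fire neurons into a feedforward chain so that a single kick launches a traveling wave of spikes.

```python
import numpy as np

# A chain of integrate-and-fire neurons where each neuron excites its
# right neighbor: a single kick produces a traveling spatiotemporal wave.
N, T, dt, tau, v_th = 30, 300, 1.0, 10.0, 1.0
v = np.zeros(N)
v[0] = 1.2                        # kick the first neuron past threshold
spike_times = {}

for t in range(T):
    fired = v >= v_th
    for i in np.where(fired)[0]:
        spike_times.setdefault(i, t)
        if i + 1 < N:
            v[i + 1] += 1.5       # suprathreshold drive to the neighbor
    v[fired] = 0.0                # reset
    v += dt / tau * (-v)          # leak

# The wavefront advances one neuron per timestep.
print([spike_times.get(i) for i in range(0, N, 5)])
```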
- Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
arXiv Detail & Related papers (2022-02-01T17:11:13Z)
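
A hedged diagnostic in the spirit of the entry above: localized, convolution-like first-layer filters concentrate their weight on a few input pixels, which the inverse participation ratio can quantify. The filters below are synthetic stand-ins, not weights learned as in the paper.

```python
import numpy as np

def effective_support(W):
    """Per-hidden-unit effective number of input pixels used, via the
    inverse participation ratio of normalized squared weights."""
    p = W**2 / (W**2).sum(axis=1, keepdims=True)    # weight 'mass' per pixel
    return 1.0 / (p**2).sum(axis=1)                 # in [1, n_inputs]

rng = np.random.default_rng(6)
n_pix, n_hid = 100, 8

dense = rng.standard_normal((n_hid, n_pix))         # unstructured filters
local = np.zeros((n_hid, n_pix))                    # localized filters
for k in range(n_hid):
    c = rng.integers(5, n_pix - 5)
    local[k, c - 3:c + 4] = rng.standard_normal(7)  # 7-pixel receptive field

print(f"dense filters use ~{effective_support(dense).mean():.0f} pixels")
print(f"local filters use ~{effective_support(local).mean():.0f} pixels")
```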
- Generalized Shape Metrics on Neural Representations [26.78835065137714]
We provide a family of metric spaces that quantify representational dissimilarity.
We modify existing representational similarity measures based on canonical correlation analysis to satisfy the triangle inequality.
We identify relationships between neural representations that are interpretable in terms of anatomical features and model performance.
arXiv Detail & Related papers (2021-10-27T19:48:55Z)
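
A minimal sketch of one metric in this spirit: the orthogonal Procrustes distance between centered, norm-1 representation matrices, which (unlike some raw similarity scores) satisfies the triangle inequality. This is a standard construction offered as illustration, not the paper's full family.

```python
import numpy as np

def procrustes_distance(X, Y):
    """Orthogonal Procrustes shape distance between two representations
    X, Y of shape (n_stimuli, n_neurons): the residual after the best
    orthogonal alignment of Y to X. This is a proper metric."""
    X = X - X.mean(axis=0); X /= np.linalg.norm(X)   # center, unit Frobenius norm
    Y = Y - Y.mean(axis=0); Y /= np.linalg.norm(Y)
    # Optimal alignment score = sum of singular values of X.T @ Y.
    s = np.linalg.svd(X.T @ Y, compute_uv=False)
    return np.sqrt(max(0.0, 2.0 - 2.0 * s.sum()))

rng = np.random.default_rng(7)
A = rng.standard_normal((100, 30))
Q, _ = np.linalg.qr(rng.standard_normal((30, 30)))   # random orthogonal map
print(f"d(A, A @ Q) = {procrustes_distance(A, A @ Q):.4f}")  # ~0: same shape
B = rng.standard_normal((100, 30))
print(f"d(A, B)     = {procrustes_distance(A, B):.4f}")      # positive
```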
- Neuron-level Interpretation of Deep NLP Models: A Survey [22.035813865470956]
A large body of research has been carried out to analyze and understand the components of deep neural network models.
Recent work has concentrated on interpretability at a more granular level, analyzing neurons and groups of neurons in large models.
arXiv Detail & Related papers (2021-08-30T11:54:21Z)
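
A toy version of a common neuron-level analysis the survey covers: rank hidden units by how strongly each one alone separates tokens with and without a linguistic property. The activations and the planted selective neuron below are synthetic stand-ins for real model activations.

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical hidden activations (n_tokens x n_neurons) and a binary
# token property (e.g., "is a past-tense verb"); synthetic stand-ins.
n_tok, n_neu = 1000, 64
acts = rng.standard_normal((n_tok, n_neu))
prop = rng.random(n_tok) < 0.3
acts[:, 17] += 2.0 * prop          # plant one property-selective neuron

# Score each neuron by the standardized mean activation difference
# between property-positive and property-negative tokens.
z = (acts - acts.mean(0)) / acts.std(0)
scores = np.abs(z[prop].mean(0) - z[~prop].mean(0))
top = np.argsort(scores)[::-1][:3]
print(f"most property-selective neurons: {top}")   # neuron 17 should lead
```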
- What can linearized neural networks actually say about generalization? [67.83999394554621]
In certain infinitely-wide neural networks, the neural tangent kernel (NTK) theory fully characterizes generalization.
We show that the linear approximations can indeed rank the learning complexity of certain tasks for neural networks.
Our work provides concrete examples of novel deep learning phenomena which can inspire future theoretical research.
arXiv Detail & Related papers (2021-06-12T13:05:11Z)
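
A small sketch of the central object here: the empirical neural tangent kernel of a tiny two-layer network, computed from parameter Jacobians, together with the linearized predictor f(x; theta0) + J(x)(theta - theta0), which closely tracks the true network for small parameter steps.

```python
import numpy as np

rng = np.random.default_rng(9)
d, h = 4, 64                        # input dim, hidden width
W = rng.standard_normal((h, d)) / np.sqrt(d)
a = rng.standard_normal(h) / np.sqrt(h)

def f(x, W, a):                     # scalar-output two-layer network
    return a @ np.tanh(W @ x)

def jacobian(x, W, a):
    """Gradient of f w.r.t. all parameters, flattened."""
    pre = W @ x
    dW = np.outer(a * (1 - np.tanh(pre) ** 2), x)   # df/dW
    da = np.tanh(pre)                               # df/da
    return np.concatenate([dW.ravel(), da])

# Empirical NTK Gram matrix over a few inputs: K(x, x') = J(x) . J(x').
X = rng.standard_normal((5, d))
J = np.stack([jacobian(x, W, a) for x in X])
K = J @ J.T
print("NTK Gram matrix:\n", np.round(K, 3))

# Linearized prediction after a small parameter step dtheta.
dtheta = 0.01 * rng.standard_normal(J.shape[1])
W2 = W + dtheta[: h * d].reshape(h, d)
a2 = a + dtheta[h * d:]
for x, j in zip(X[:2], J[:2]):
    f_lin = f(x, W, a) + j @ dtheta
    print(f"true f: {f(x, W2, a2):+.4f}   linearized: {f_lin:+.4f}")
```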
- Neural population geometry: An approach for understanding biological and artificial neural networks [3.4809730725241605]
We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks.
Neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks.
arXiv Detail & Related papers (2021-04-14T18:10:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.