Testing the Tools of Systems Neuroscience on Artificial Neural Networks
- URL: http://arxiv.org/abs/2202.07035v1
- Date: Mon, 14 Feb 2022 20:55:26 GMT
- Title: Testing the Tools of Systems Neuroscience on Artificial Neural Networks
- Authors: Grace W. Lindsay
- Abstract summary: I argue that these tools should be explicitly tested and that artificial neural networks (ANNs) are an appropriate testing ground for them.
The recent resurgence of the use of ANNs as models of everything from perception to memory to motor control stems from a rough similarity between artificial and biological neural networks.
I provide here both a roadmap for performing this testing and a list of tools that are suitable to be tested on ANNs.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Neuroscientists apply a range of common analysis tools to recorded neural
activity in order to glean insights into how neural circuits implement
computations. Despite the fact that these tools shape the progress of the field
as a whole, we have little empirical evidence that they are effective at
quickly identifying the phenomena of interest. Here I argue that these tools
should be explicitly tested and that artificial neural networks (ANNs) are an
appropriate testing ground for them. The recent resurgence of the use of ANNs
as models of everything from perception to memory to motor control stems from a
rough similarity between artificial and biological neural networks and the
ability to train these networks to perform complex high-dimensional tasks.
These properties, combined with the ability to perfectly observe and manipulate
these systems, make them well-suited for vetting the tools of systems and
cognitive neuroscience. I provide here both a roadmap for performing this
testing and a list of tools that are suitable to be tested on ANNs. Using ANNs
to reflect on the extent to which these tools provide a productive
understanding of neural systems -- and on exactly what understanding should
mean here -- has the potential to expedite progress in the study of the brain.
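To make the proposed testing loop concrete, here is a minimal sketch (my own illustration, not code from the paper) of vetting one common tool, single-unit ablation, on a toy network whose ground-truth wiring is known by construction; the architecture, weights, and ablation metric are all assumptions chosen for clarity.

```python
# A minimal sketch of the proposed testing loop, using a hand-built toy network
# (not from the paper) so the ground-truth circuit is known by construction.
# Tool under test: single-unit ablation (lesioning), a common neuroscience analysis.
import numpy as np

rng = np.random.default_rng(0)

# Toy "circuit": hidden units 0-1 drive output 0, hidden units 2-3 drive output 1.
W_in = rng.normal(size=(4, 8))            # input -> hidden
W_out = np.zeros((2, 4))
W_out[0, :2] = 1.0                        # ground truth: output 0 <- hidden 0, 1
W_out[1, 2:] = 1.0                        # ground truth: output 1 <- hidden 2, 3

def forward(x, ablate=None):
    h = np.maximum(W_in @ x, 0.0)         # ReLU hidden layer
    if ablate is not None:
        h[ablate] = 0.0                   # "lesion" one hidden unit
    return W_out @ h

# Apply the tool: ablate each hidden unit and measure the output deficit.
X = rng.normal(size=(8, 200))             # batch of random inputs
baseline = np.stack([forward(x) for x in X.T])
effect = np.zeros((4, 2))
for unit in range(4):
    lesioned = np.stack([forward(x, ablate=unit) for x in X.T])
    effect[unit] = np.abs(baseline - lesioned).mean(axis=0)

# Check: does the ablation analysis recover the known wiring?
recovered = effect.argmax(axis=1)         # which output each unit affects most
print(effect.round(2))
print("recovered assignment:", recovered, "(ground truth: [0 0 1 1])")
```

Because the output wiring is fixed by hand, the ablation analysis can be scored against ground truth, which is exactly the kind of check the paper argues is missing when the same tools are applied to recorded neural data.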
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
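As a loose illustration of what per-neuron heterogeneity provides (a generic sketch, not the paper's dual framework or compartmental models), the snippet below simulates leaky integrate-and-fire neurons whose membrane time constants are sampled per neuron, so a shared input current yields diverse spiking behavior; all constants are arbitrary.

```python
# A minimal sketch of neuronal heterogeneity in an SNN: a population of leaky
# integrate-and-fire neurons with per-neuron membrane time constants, so the
# same constant input produces diverse spiking behavior.
import numpy as np

rng = np.random.default_rng(1)
n, T, dt = 5, 200, 1.0                 # neurons, timesteps, ms per step
tau = rng.uniform(5.0, 50.0, size=n)   # heterogeneous membrane time constants (ms)
v_thresh, v_reset = 1.0, 0.0
I = 0.12                               # constant input current (arbitrary units)

v = np.zeros(n)
spikes = np.zeros((T, n), dtype=int)
for t in range(T):
    v += dt / tau * (-v + I * tau)     # leaky integration toward I * tau
    fired = v >= v_thresh
    spikes[t] = fired
    v[fired] = v_reset                 # reset after a spike

print("membrane time constants:", tau.round(1))
print("spike counts per neuron:", spikes.sum(axis=0))
```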
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
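For a concrete example of one family of biologically motivated credit assignment (my own sketch of feedback alignment, not the survey's code or taxonomy), the snippet below trains a two-layer network on a toy regression task while routing errors through fixed random feedback weights instead of the transposed forward weights, which avoids the weight-transport problem of backpropagation.

```python
# A minimal sketch of feedback alignment: the backward pass uses fixed random
# feedback weights B instead of W2.T to deliver the error to the hidden layer.
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(200, 3))
y = np.sin(X @ np.array([1.0, -2.0, 0.5]))[:, None]   # toy regression target

W1 = rng.normal(scale=0.5, size=(3, 16))
W2 = rng.normal(scale=0.5, size=(16, 1))
B = rng.normal(scale=0.5, size=(1, 16))               # fixed random feedback weights
lr = 0.01

for step in range(2000):
    h = np.tanh(X @ W1)                               # forward pass
    pred = h @ W2
    err = pred - y                                    # output error
    delta_h = (err @ B) * (1.0 - h ** 2)              # error routed through B, not W2.T
    W2 -= lr * h.T @ err / len(X)
    W1 -= lr * X.T @ delta_h / len(X)
    if step % 500 == 0:
        print(f"step {step:4d}  mse {float((err ** 2).mean()):.4f}")
```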
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - A Hybrid Neural Coding Approach for Pattern Recognition with Spiking Neural Networks [53.31941519245432]
Brain-inspired spiking neural networks (SNNs) have demonstrated promising capabilities in solving pattern recognition tasks.
These SNNs are grounded on homogeneous neurons that utilize a uniform neural coding for information representation.
In this study, we argue that SNN architectures should be holistically designed to incorporate heterogeneous coding schemes.
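To illustrate what mixing coding schemes can mean in practice (a generic sketch, not the paper's architecture), the snippet below encodes the same normalized inputs with two different spike codes, rate coding and latency coding; the window length and intensities are arbitrary.

```python
# A minimal sketch contrasting two spike-coding schemes that a hybrid SNN design
# could combine: rate coding (intensity -> firing probability per step) and
# latency coding (intensity -> time of a single spike).
import numpy as np

rng = np.random.default_rng(3)
T = 20                                    # timesteps in the coding window
intensities = np.array([0.1, 0.5, 0.9])   # normalized input values in [0, 1]

# Rate coding: at each step a neuron spikes with probability equal to its input.
rate_spikes = (rng.random((T, len(intensities))) < intensities).astype(int)

# Latency coding: each neuron emits one spike; stronger input -> earlier spike.
latency = np.round((1.0 - intensities) * (T - 1)).astype(int)
latency_spikes = np.zeros((T, len(intensities)), dtype=int)
latency_spikes[latency, np.arange(len(intensities))] = 1

print("rate-coded spike counts:", rate_spikes.sum(axis=0))
print("latency-coded spike times:", latency)
```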
arXiv Detail & Related papers (2023-05-26T02:52:12Z) - Brain-inspired learning in artificial neural networks: a review [5.064447369892274]
We review current brain-inspired learning representations in artificial neural networks.
We investigate the integration of more biologically plausible mechanisms, such as synaptic plasticity, to enhance these networks' capabilities.
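As one concrete instance of the plasticity mechanisms such reviews cover (my own sketch, not code from the review), the snippet below applies Oja's Hebbian rule, a local unsupervised update, to a single linear unit, driving its weight vector toward the first principal component of its inputs.

```python
# A minimal sketch of Oja's rule: a Hebbian update with a decay term that keeps
# the weights bounded and converges to the top principal component of the input.
import numpy as np

rng = np.random.default_rng(4)
# Correlated 2-D inputs whose dominant direction is roughly (1, 1) / sqrt(2).
X = rng.normal(size=(5000, 2)) @ np.array([[1.0, 0.9], [0.9, 1.0]])

w = rng.normal(size=2)
eta = 0.01
for x in X:
    y = w @ x                       # postsynaptic activity
    w += eta * y * (x - y * w)      # Oja's rule: Hebbian term with decay

# Compare the learned weight direction with the first principal component.
pc1 = np.linalg.eigh(np.cov(X.T))[1][:, -1]
cos = abs(w @ pc1) / np.linalg.norm(w)
print(f"|cosine| between learned weights and first principal component: {cos:.3f}")
```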
arXiv Detail & Related papers (2023-05-18T18:34:29Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Efficient, probabilistic analysis of combinatorial neural codes [0.0]
Neural networks encode inputs in the form of combinations of individual neurons' activities.
These neural codes present a computational challenge due to their high dimensionality and often large volumes of data.
We take methods previously applied only to small examples and apply them to large neural codes generated by experiments.
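For intuition about what a combinatorial neural code is and why it is high-dimensional (a toy sketch, not the paper's probabilistic method), the snippet below binarizes synthetic population activity into codewords and tallies them against the 2**n possibilities.

```python
# A minimal sketch of a combinatorial neural code: binarized population activity
# patterns ("codewords") and their observed frequencies. With n neurons there are
# 2**n possible codewords, which is the source of the computational challenge.
import numpy as np
from collections import Counter

rng = np.random.default_rng(5)
n_neurons, n_samples = 10, 2000
rates = rng.uniform(0.05, 0.4, size=n_neurons)          # per-neuron firing probability
activity = rng.random((n_samples, n_neurons)) < rates   # binarized population activity

codewords = Counter(map(tuple, activity.astype(int)))
print(f"{len(codewords)} distinct codewords observed "
      f"out of {2 ** n_neurons} possible")
print("five most common:", codewords.most_common(5))
```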
arXiv Detail & Related papers (2022-10-19T11:58:26Z) - Neuro-Symbolic Learning of Answer Set Programs from Raw Data [54.56905063752427]
Neuro-Symbolic AI aims to combine interpretability of symbolic techniques with the ability of deep learning to learn from raw data.
We introduce Neuro-Symbolic Inductive Learner (NSIL), an approach that trains a general neural network to extract latent concepts from raw data.
NSIL learns expressive knowledge, solves computationally complex problems, and achieves state-of-the-art performance in terms of accuracy and data efficiency.
arXiv Detail & Related papers (2022-05-25T12:41:59Z) - Probing artificial neural networks: insights from neuroscience [6.7832320606111125]
Neuroscience has paved the way for using such models through numerous studies conducted over recent decades.
We argue that specific research goals play a paramount role when designing a probe and encourage future probing studies to be explicit in stating these goals.
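As a minimal sketch of the kind of probe being discussed (the representations here are synthetic stand-ins, not activations from any real trained network), the snippet below fits a logistic-regression readout on frozen feature vectors to test whether a labeled property is linearly decodable from them.

```python
# A minimal sketch of a linear probe: a simple classifier trained on frozen
# "hidden representations" to test whether a property is linearly decodable.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(6)
n, d = 1000, 32
labels = rng.integers(0, 2, size=n)                     # property to decode
signal = rng.normal(size=d)                             # direction carrying the label
reps = rng.normal(size=(n, d)) + 0.5 * labels[:, None] * signal

# Train the probe on one split of the frozen representations, test on the rest.
probe = LogisticRegression(max_iter=1000).fit(reps[:800], labels[:800])
print(f"probe accuracy on held-out data: {probe.score(reps[800:], labels[800:]):.2f}")
```

High held-out accuracy indicates the property is decodable from the representation, though, as the paper stresses, what such a result means depends on the research goal the probe was designed to serve.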
arXiv Detail & Related papers (2021-04-16T16:13:23Z) - Neuromorphic Processing and Sensing: Evolutionary Progression of AI to Spiking [0.0]
Spiking Neural Network algorithms hold the promise of implementing advanced artificial intelligence using a fraction of the computation and power required by conventional approaches.
This paper explains the theoretical workings of spike-based neuromorphic technologies and overviews the state of the art in hardware processors, software platforms and neuromorphic sensing devices.
A progression path is laid out for current machine learning specialists to update their skill set, and to migrate classification or predictive models, from the current generation of deep neural networks to SNNs.
arXiv Detail & Related papers (2020-07-10T20:54:42Z) - Spiking Neural Networks Hardware Implementations and Challenges: a Survey [53.429871539789445]
Spiking Neural Networks are cognitive algorithms mimicking neuron and synapse operational principles.
We present the state of the art of hardware implementations of spiking neural networks.
We discuss the strategies employed to leverage the characteristics of these event-driven algorithms at the hardware level.
arXiv Detail & Related papers (2020-05-04T13:24:00Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.