Interpretability of Neural Network With Physiological Mechanisms
- URL: http://arxiv.org/abs/2203.13262v1
- Date: Thu, 24 Mar 2022 21:40:04 GMT
- Title: Interpretability of Neural Network With Physiological Mechanisms
- Authors: Anna Zou, Zhiyuan Li
- Abstract summary: Deep learning remains a powerful state-of-the-art technique that has achieved extraordinary accuracy in various regression and classification tasks.
The original goal of the neural network model was to improve our understanding of the complex human brain through mathematical expression.
Recent deep learning techniques, however, are mostly treated as black-box approximators, losing the interpretability of their functional processes.
- Score: 5.1971653175509145
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning remains a powerful state-of-the-art technique that has
achieved extraordinary accuracy in various regression and classification
tasks, including image, video, signal, and natural language data. The original
goal of the neural network model was to improve our understanding of the
complex human brain through mathematical expression. However, recent deep
learning techniques are mostly treated as black-box approximators, losing the
interpretability of their functional processes. To address this issue, such AI
models need to be biologically and physiologically realistic in order to
incorporate a better understanding of human-machine evolutionary intelligence.
In this study, we compare neural networks and biological circuits to discover
their similarities and differences from various perspectives. We further
discuss insights into how neural networks learn from data by investigating
human biological behaviors and understandable justifications.
Related papers
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically-informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration and complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z) - Hebbian Learning based Orthogonal Projection for Continual Learning of Spiking Neural Networks [74.3099028063756]
We develop a new method with neuronal operations based on lateral connections and Hebbian learning.
We show that Hebbian and anti-Hebbian learning on recurrent lateral connections can effectively extract the principal subspace of neural activities.
Our method consistently enables continual learning in spiking neural networks with nearly zero forgetting.
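The mechanism summarized above, Hebbian learning extracting the principal subspace of neural activity, can be illustrated with Oja's rule, a normalized Hebbian update that converges to the leading principal component. This is a minimal toy sketch with synthetic data, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(0)
# Synthetic "neural activity" with one dominant direction.
cov = np.array([[3.0, 1.0], [1.0, 1.0]])
x = rng.multivariate_normal([0.0, 0.0], cov, size=5000)

w = rng.normal(size=2)
w /= np.linalg.norm(w)
eta = 0.01
for xi in x:
    y = w @ xi                   # postsynaptic response
    w += eta * y * (xi - y * w)  # Oja's rule: Hebbian term plus decay

# w converges (up to sign) to the leading eigenvector of the covariance.
top = np.linalg.eigh(cov)[1][:, -1]
print(abs(w @ top))  # close to 1
```

The decay term `-eta * y**2 * w` keeps the weight vector at unit norm, which is what distinguishes Oja's rule from plain (divergent) Hebbian learning.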
arXiv Detail & Related papers (2024-02-19T09:29:37Z) - Brain-Inspired Machine Intelligence: A Survey of Neurobiologically-Plausible Credit Assignment [65.268245109828]
We examine algorithms for conducting credit assignment in artificial neural networks that are inspired or motivated by neurobiology.
We organize the ever-growing set of brain-inspired learning schemes into six general families and consider these in the context of backpropagation of errors.
The results of this review are meant to encourage future developments in neuro-mimetic systems and their constituent learning processes.
arXiv Detail & Related papers (2023-12-01T05:20:57Z) - Advanced Computing and Related Applications Leveraging Brain-inspired Spiking Neural Networks [0.0]
Spiking neural networks are one of the cores of artificial intelligence for realizing brain-like computing.
This paper summarizes the strengths, weaknesses and applicability of five neuronal models and analyzes the characteristics of five network topologies.
arXiv Detail & Related papers (2023-09-08T16:41:08Z) - Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z) - Deep Learning Models to Study Sentence Comprehension in the Human Brain [0.1503974529275767]
Recent artificial neural networks that process natural language achieve unprecedented performance in tasks requiring sentence-level understanding.
We review works that compare these artificial language models with human brain activity and we assess the extent to which this approach has improved our understanding of the neural processes involved in natural language comprehension.
arXiv Detail & Related papers (2023-01-16T10:31:25Z) - Brain-inspired Graph Spiking Neural Networks for Commonsense Knowledge Representation and Reasoning [11.048601659933249]
How neural networks in the human brain represent commonsense knowledge is an important research topic in neuroscience, cognitive science, psychology, and artificial intelligence.
This work investigates how population encoding and spike-timing-dependent plasticity (STDP) mechanisms can be integrated into the learning of spiking neural networks.
The neuron populations of different communities together constitute the entire commonsense knowledge graph, forming a giant graph spiking neural network.
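The STDP mechanism mentioned above can be sketched with the standard pairwise exponential window: a presynaptic spike that precedes a postsynaptic spike strengthens the synapse (LTP), while the reverse order weakens it (LTD). This is a hedged toy sketch; the constants and spike times are illustrative, not taken from the paper:

```python
import numpy as np

# Illustrative STDP constants (ms and dimensionless amplitudes).
A_plus, A_minus, tau = 0.05, 0.06, 20.0

def stdp_dw(delta_t):
    """Weight change for post-minus-pre spike time difference (ms)."""
    if delta_t > 0:                              # pre before post -> LTP
        return A_plus * np.exp(-delta_t / tau)
    return -A_minus * np.exp(delta_t / tau)      # post before pre -> LTD

w = 0.5
pre_spikes = [10.0, 50.0]
post_spikes = [15.0, 45.0]
for t_pre in pre_spikes:
    for t_post in post_spikes:
        # Accumulate pairwise updates, keeping the weight bounded.
        w = np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0)
print(w)
```

Population encoding would add a layer on top of this: each concept is carried by a group of neurons, and STDP shapes the synapses within and between those groups.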
arXiv Detail & Related papers (2022-07-11T05:22:38Z) - An Introductory Review of Spiking Neural Network and Artificial Neural Network: From Biological Intelligence to Artificial Intelligence [4.697611383288171]
Spiking neural networks, which offer biological interpretability, are gradually receiving wide attention.
This review hopes to attract different researchers and advance the development of brain-inspired intelligence and artificial intelligence.
arXiv Detail & Related papers (2022-04-09T09:34:34Z) - Data-driven emergence of convolutional structure in neural networks [83.4920717252233]
We show how fully-connected neural networks solving a discrimination task can learn a convolutional structure directly from their inputs.
By carefully designing data models, we show that the emergence of this pattern is triggered by the non-Gaussian, higher-order local structure of the inputs.
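The role of non-Gaussian, higher-order input statistics claimed above can be illustrated with excess kurtosis, a fourth-order statistic that vanishes for Gaussian data but is large for sparse, heavy-tailed inputs. The data model here is a hypothetical stand-in for natural-image-like statistics, not the paper's actual design:

```python
import numpy as np

rng = np.random.default_rng(1)

def excess_kurtosis(x):
    """Fourth-order statistic: 0 for Gaussian data, >0 for heavy tails."""
    x = x - x.mean()
    return np.mean(x**4) / np.mean(x**2) ** 2 - 3.0

gaussian = rng.normal(size=100_000)
# Sparse signal: mostly zeros with occasional large values.
sparse = rng.normal(size=100_000) * (rng.random(100_000) < 0.1)

print(excess_kurtosis(gaussian))  # near 0: no higher-order structure
print(excess_kurtosis(sparse))    # large positive: non-Gaussian
```

A fully-connected network trained on purely Gaussian inputs would see no such higher-order structure, which is the contrast the paper's data models exploit.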
arXiv Detail & Related papers (2022-02-01T17:11:13Z) - Overcoming the Domain Gap in Contrastive Learning of Neural Action Representations [60.47807856873544]
A fundamental goal in neuroscience is to understand the relationship between neural activity and behavior.
We generated a new multimodal dataset consisting of the spontaneous behaviors generated by fruit flies.
This dataset and our new set of augmentations promise to accelerate the application of self-supervised learning methods in neuroscience.
arXiv Detail & Related papers (2021-11-29T15:27:51Z) - Neural population geometry: An approach for understanding biological and artificial neural networks [3.4809730725241605]
We review examples of geometrical approaches providing insight into the function of biological and artificial neural networks.
Neural population geometry has the potential to unify our understanding of structure and function in biological and artificial neural networks.
arXiv Detail & Related papers (2021-04-14T18:10:34Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.