Mitigating Communication Costs in Neural Networks: The Role of Dendritic
Nonlinearity
- URL: http://arxiv.org/abs/2306.11950v1
- Date: Wed, 21 Jun 2023 00:28:20 GMT
- Title: Mitigating Communication Costs in Neural Networks: The Role of Dendritic
Nonlinearity
- Authors: Xundong Wu, Pengfei Zhao, Zilin Yu, Lei Ma, Ka-Wa Yip, Huajin Tang,
Gang Pan, Tiejun Huang
- Abstract summary: In this study, we scrutinized the importance of nonlinear dendrites within neural networks.
Our findings reveal that integrating dendritic structures can substantially enhance model capacity and performance.
- Score: 28.243134476634125
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Our comprehension of biological neuronal networks has profoundly influenced
the evolution of artificial neural networks (ANNs). However, the neurons
employed in ANNs exhibit remarkable deviations from their biological analogs,
mainly due to the absence of complex dendritic trees encompassing local
nonlinearity. Despite such disparities, previous investigations have
demonstrated that point neurons can functionally substitute for dendritic neurons
in executing computational tasks. In this study, we scrutinized the importance
of nonlinear dendrites within neural networks. By employing machine-learning
methodologies, we assessed the impact of dendritic structure nonlinearity on
neural network performance. Our findings reveal that integrating dendritic
structures can substantially enhance model capacity and performance while
keeping signal communication costs effectively restrained. This investigation
offers pivotal insights that hold considerable implications for the development
of future neural network accelerators.
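The contrast the abstract draws between point neurons and dendritic neurons can be made concrete with a short sketch. Below is a minimal PyTorch layer in which each output neuron pools several dendritic branches, and each branch applies a local nonlinearity before the somatic sum; the branch count, the tanh branch nonlinearity, and all names are illustrative assumptions, not the paper's actual model.

```python
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    """Sketch of a layer of dendritic neurons (assumed design, not the
    paper's): each neuron sums several dendritic branches, and each
    branch applies a local nonlinearity to its own view of the input
    before the somatic sum. More computation happens locally per
    neuron, so fewer inter-layer signals are needed for comparable
    capacity."""

    def __init__(self, d_in: int, n_neurons: int, n_branches: int = 8):
        super().__init__()
        # Weights shaped (neurons, branches, inputs).
        self.branch_w = nn.Parameter(
            torch.randn(n_neurons, n_branches, d_in) * d_in ** -0.5)
        self.branch_nl = nn.Tanh()  # local dendritic nonlinearity (assumed)
        self.soma_nl = nn.ReLU()    # somatic output nonlinearity

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (batch, d_in)
        pre = torch.einsum('bi,nki->bnk', x, self.branch_w)  # branch currents
        soma = self.branch_nl(pre).sum(dim=-1)               # pool branches
        return self.soma_nl(soma)                            # (batch, n_neurons)
```

Setting `n_branches=1` and replacing the tanh with the identity recovers an ordinary point-neuron layer, which makes comparing the two neuron models a one-line change in an experiment.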
Related papers
- Contrastive Learning in Memristor-based Neuromorphic Systems [55.11642177631929]
Spiking neural networks have become an important family of neuron-based models that sidestep many of the key limitations facing modern-day backpropagation-trained deep networks.
In this work, we design and investigate a proof-of-concept instantiation of contrastive-signal-dependent plasticity (CSDP), a neuromorphic form of forward-forward-based, backpropagation-free learning.
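For readers unfamiliar with the forward-forward family that CSDP builds on, the sketch below shows a generic goodness-based local layer update in PyTorch. It is the standard forward-forward recipe, not the CSDP rule itself, and the threshold, optimizer, and all names are illustrative assumptions.

```python
import torch
import torch.nn as nn

class FFLayer(nn.Module):
    """Generic forward-forward-style layer (not CSDP itself): train each
    layer locally so that its 'goodness' (sum of squared activations) is
    high for positive samples and low for negative samples."""

    def __init__(self, d_in: int, d_out: int, threshold: float = 2.0):
        super().__init__()
        self.fc = nn.Linear(d_in, d_out)
        self.act = nn.ReLU()
        self.threshold = threshold
        self.opt = torch.optim.Adam(self.parameters(), lr=1e-3)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Length-normalize the input so a layer cannot judge goodness
        # from the previous layer's activation magnitude alone.
        x = x / (x.norm(dim=1, keepdim=True) + 1e-8)
        return self.act(self.fc(x))

    def local_update(self, x_pos, x_neg):
        g_pos = self.forward(x_pos).pow(2).sum(dim=1)  # goodness, positive
        g_neg = self.forward(x_neg).pow(2).sum(dim=1)  # goodness, negative
        # Softplus loss pushes g_pos above and g_neg below the threshold.
        loss = torch.nn.functional.softplus(torch.cat(
            [self.threshold - g_pos, g_neg - self.threshold])).mean()
        self.opt.zero_grad(); loss.backward(); self.opt.step()
        # Detach outputs so no gradient crosses layers: learning stays local.
        return self.forward(x_pos).detach(), self.forward(x_neg).detach()
```

Layers trained this way can be stacked, with each layer receiving the detached outputs of the previous one, so no end-to-end backpropagation is required.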
arXiv Detail & Related papers (2024-09-17T04:48:45Z)
- Exploiting Heterogeneity in Timescales for Sparse Recurrent Spiking Neural Networks for Energy-Efficient Edge Computing [16.60622265961373]
Spiking Neural Networks (SNNs) represent the forefront of neuromorphic computing.
This paper brings together three studies that substantially improve SNN performance.
arXiv Detail & Related papers (2024-07-08T23:33:12Z)
- Enhancing learning in spiking neural networks through neuronal heterogeneity and neuromodulatory signaling [52.06722364186432]
We propose a biologically informed framework for enhancing artificial neural networks (ANNs).
Our proposed dual-framework approach highlights the potential of spiking neural networks (SNNs) for emulating diverse spiking behaviors.
We outline how the proposed approach integrates brain-inspired compartmental models and task-driven SNNs, balancing bioinspiration with complexity.
arXiv Detail & Related papers (2024-07-05T14:11:28Z)
- Addressing caveats of neural persistence with deep graph persistence [54.424983583720675]
We find that the variance of network weights and spatial concentration of large weights are the main factors that impact neural persistence.
We propose an extension of the filtration underlying neural persistence to the whole neural network instead of single layers.
This yields our deep graph persistence measure, which implicitly incorporates persistent paths through the network and alleviates variance-related issues.
arXiv Detail & Related papers (2023-07-20T13:34:11Z)
- Dive into the Power of Neuronal Heterogeneity [8.6837371869842]
We examine the challenges faced by backpropagation-based methods in optimizing Spiking Neural Networks (SNNs) and achieve more robust optimization of heterogeneous neurons in random networks using an Evolutionary Strategy (ES).
We find that membrane time constants play a crucial role in neural heterogeneity, and their distribution is similar to that observed in biological experiments.
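The role of heterogeneous membrane time constants can be illustrated with a minimal discrete-time leaky integrate-and-fire simulation. The log-normal spread and every numeric constant below are assumptions chosen for illustration, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, steps, dt = 100, 200, 1e-3                  # neurons, time steps, dt (s)
# Heterogeneous membrane time constants; a log-normal spread around 20 ms
# is assumed here purely for illustration.
tau = rng.lognormal(mean=np.log(20e-3), sigma=0.4, size=n)
v = np.zeros(n)                                # membrane potentials
v_th, v_reset = 1.0, 0.0                       # threshold and reset (arb. units)
spike_counts = np.zeros(n, dtype=int)

for _ in range(steps):
    i_in = rng.normal(1.2, 0.5, size=n)        # toy external drive
    v += (dt / tau) * (i_in - v)               # leaky integration, per-neuron tau
    fired = v >= v_th
    spike_counts += fired                      # accumulate spikes per neuron
    v[fired] = v_reset                         # fire-and-reset

# Faster (small-tau) neurons track the input more quickly and tend to fire
# more; the spread of tau thus induces a spread of firing behaviors.
```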
arXiv Detail & Related papers (2023-05-19T07:32:29Z)
- Connected Hidden Neurons (CHNNet): An Artificial Neural Network for Rapid Convergence [0.6218519716921521]
We propose a more robust model of artificial neural networks in which the hidden neurons residing in the same hidden layer are interconnected, leading to rapid convergence.
In an experimental study of the proposed model in deep networks, we demonstrate a noticeable increase in convergence rate compared to the conventional feed-forward neural network.
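A minimal sketch of the idea of intra-layer connections among hidden neurons is shown below: each unit receives one extra round of input from the other units in the same layer. The single intra-layer pass and all names are assumptions about the general idea, not CHNNet's exact formulation.

```python
import torch
import torch.nn as nn

class ConnectedHiddenLayer(nn.Module):
    """Hidden layer whose neurons, in addition to the feed-forward input,
    receive one round of input from the other neurons in the same layer
    (an assumed simplification of the intra-layer connectivity idea)."""

    def __init__(self, d_in: int, d_hidden: int):
        super().__init__()
        self.ff = nn.Linear(d_in, d_hidden)                      # feed-forward
        self.intra = nn.Linear(d_hidden, d_hidden, bias=False)   # intra-layer
        self.act = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        a = self.ff(x)                      # feed-forward pre-activation
        h = self.act(a)                     # provisional hidden activations
        return self.act(a + self.intra(h))  # add intra-layer contribution
```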
arXiv Detail & Related papers (2023-05-17T14:00:38Z)
- Contrastive-Signal-Dependent Plasticity: Self-Supervised Learning in Spiking Neural Circuits [61.94533459151743]
This work addresses the challenge of designing neurobiologically-motivated schemes for adjusting the synapses of spiking networks.
Our experimental simulations demonstrate a consistent advantage over other biologically-plausible approaches when training recurrent spiking networks.
arXiv Detail & Related papers (2023-03-30T02:40:28Z)
- Constraints on the design of neuromorphic circuits set by the properties of neural population codes [61.15277741147157]
In the brain, information is encoded, transmitted and used to inform behaviour.
Neuromorphic circuits need to encode information in a way compatible with that used by populations of neurons in the brain.
arXiv Detail & Related papers (2022-12-08T15:16:04Z)
- Spiking neural network for nonlinear regression [68.8204255655161]
Spiking neural networks carry the potential for a massive reduction in memory and energy consumption.
They introduce temporal and neuronal sparsity, which can be exploited by next-generation neuromorphic hardware.
A framework for regression using spiking neural networks is proposed.
arXiv Detail & Related papers (2022-10-06T13:04:45Z)
- Improving Spiking Neural Network Accuracy Using Time-based Neurons [0.24366811507669117]
Research on neuromorphic computing systems based on low-power spiking neural networks using analog neurons is in the spotlight.
As technology scales down, analog neurons are difficult to scale, and they suffer from reduced voltage headroom/dynamic range and circuit nonlinearities.
This paper first models the nonlinear behavior of existing current-mirror-based voltage-domain neurons designed in a 28 nm process, and shows that SNN inference accuracy can be severely degraded by the neurons' nonlinearity.
We propose a novel neuron, which processes incoming spikes in the time domain and greatly improves the linearity, thereby improving the inference accuracy compared to the
arXiv Detail & Related papers (2022-01-05T00:24:45Z)
- Under the Hood of Neural Networks: Characterizing Learned Representations by Functional Neuron Populations and Network Ablations [0.3441021278275805]
We shed light on the roles of single neurons and groups of neurons within the network fulfilling a learned task.
We find that neither a neuron's magnitude or selectivity of activation nor its impact on network performance is a sufficient stand-alone indicator.
arXiv Detail & Related papers (2020-04-02T20:45:01Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.