The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning
- URL: http://arxiv.org/abs/2409.12769v1
- Date: Thu, 19 Sep 2024 13:37:18 GMT
- Title: The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning
- Authors: Manh V. Nguyen, Liang Zhao, Bobin Deng, William Severa, Honghui Xu, Shaoen Wu
- Abstract summary: Spiking Neural Networks (SNNs) have recently gained significant interest in on-chip learning in embedded devices.
In this paper, we explore the inherent robustness of SNNs under noisy communication in Federated Learning.
We propose a novel Federated Learning with Top-K Sparsification (FLTS) algorithm to reduce the bandwidth usage for FL training.
- Score: 6.9569682335746235
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Spiking Neural Networks (SNNs) have recently gained significant interest in on-chip learning in embedded devices and emerged as an energy-efficient alternative to conventional Artificial Neural Networks (ANNs). However, when extending SNNs to a Federated Learning (FL) setting involving collaborative model training, the communication between the local devices and the remote server, which is often restricted and costly, remains the bottleneck. In this paper, we first explore the inherent robustness of SNNs under noisy communication in FL. Building upon this foundation, we propose a novel Federated Learning with Top-K Sparsification (FLTS) algorithm to reduce the bandwidth usage for FL training. We discover that the proposed scheme with SNNs allows more bandwidth savings compared to ANNs without impacting the model's accuracy. Additionally, the number of parameters to be communicated can be reduced to as low as 6 percent of the size of the original model. We further improve the communication efficiency by enabling dynamic parameter compression during model training. Extensive experiment results demonstrate that our proposed algorithms significantly outperform the baselines in terms of communication cost and model accuracy, and are promising for practical network-efficient FL with SNNs.
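The abstract gives no pseudocode, so the following is a minimal sketch of the top-k sparsification step as it is commonly implemented in communication-efficient FL; the function names (`topk_sparsify`, `densify`), the `k_ratio` parameter, and the NumPy formulation are our own illustration, not the authors' code.

```python
import numpy as np

def topk_sparsify(update: np.ndarray, k_ratio: float):
    """Keep only the k largest-magnitude entries of a flattened model update.

    A client transmits just (indices, values) instead of the dense update,
    so k_ratio directly controls the communication volume.
    """
    flat = update.ravel()
    k = max(1, int(k_ratio * flat.size))
    # argpartition places the k largest |values| in the last k positions.
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx]

def densify(idx: np.ndarray, vals: np.ndarray, shape: tuple) -> np.ndarray:
    """Server side: rebuild a dense update with zeros at the pruned entries."""
    flat = np.zeros(int(np.prod(shape)), dtype=vals.dtype)
    flat[idx] = vals
    return flat.reshape(shape)

# Example: communicate roughly 6 percent of a layer's update, mirroring the
# smallest model fraction reported in the abstract.
rng = np.random.default_rng(0)
delta = rng.normal(size=(128, 64))
idx, vals = topk_sparsify(delta, k_ratio=0.06)
recovered = densify(idx, vals, delta.shape)
assert np.count_nonzero(recovered) == int(0.06 * delta.size)
```

The abstract's dynamic parameter compression can be read as making `k_ratio` a schedule rather than a constant; how the paper actually adapts it during training is specified in the paper itself, not here.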
Related papers
- Training-free Conversion of Pretrained ANNs to SNNs for Low-Power and High-Performance Applications [23.502136316777058]
Spiking Neural Networks (SNNs) have emerged as a promising substitute for Artificial Neural Networks (ANNs).
Existing supervised learning algorithms for SNNs require significantly more memory and time than their ANN counterparts.
Our approach directly converts pre-trained ANN models into high-performance SNNs without additional training.
arXiv Detail & Related papers (2024-09-05T09:14:44Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
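As a toy illustration of time-to-first-spike (TTFS) coding in general, not of the LC-TTFS algorithm itself: a bounded activation can be mapped to a spike time so that larger activations fire earlier, and the activation is recoverable from that time. The window length `T` and the linear mapping below are illustrative assumptions.

```python
import numpy as np

T = 100.0  # length of the coding window (arbitrary time units)

def ttfs_encode(a: np.ndarray) -> np.ndarray:
    """Map activations in [0, 1] to spike times: larger value -> earlier spike."""
    return T * (1.0 - np.clip(a, 0.0, 1.0))

def ttfs_decode(t: np.ndarray) -> np.ndarray:
    """Invert the mapping to recover the activation from the spike time."""
    return 1.0 - t / T

acts = np.array([0.0, 0.25, 0.9, 1.0])
assert np.allclose(ttfs_decode(ttfs_encode(acts)), acts)  # lossless round trip
```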
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Skip Connections in Spiking Neural Networks: An Analysis of Their Effect on Network Training [0.8602553195689513]
Spiking neural networks (SNNs) have gained attention as a promising alternative to traditional artificial neural networks (ANNs).
In this paper, we study the impact of skip connections on SNNs and propose a hyperparameter optimization technique that adapts models from ANNs to SNNs.
We demonstrate that optimizing the position, type, and number of skip connections can significantly improve the accuracy and efficiency of SNNs.
arXiv Detail & Related papers (2023-03-23T07:57:32Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy-efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
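To make "implicit equations as layers" concrete, here is a minimal sketch of an implicit layer solved by fixed-point iteration; the shapes, the ReLU choice, and the spectral-norm rescaling are illustrative assumptions, not the paper's model.

```python
import numpy as np

def implicit_layer(x, W, U, b, tol=1e-8, max_iter=1000):
    """Solve z = relu(W @ z + U @ x + b) for z by fixed-point iteration.

    In an implicit neural network, the layer output z is defined by this
    equation rather than computed feedforward; convergence here relies on
    W being contractive (spectral norm < 1).
    """
    z = np.zeros(W.shape[0])
    for _ in range(max_iter):
        z_next = np.maximum(0.0, W @ z + U @ x + b)
        if np.linalg.norm(z_next - z) < tol:
            return z_next
        z = z_next
    return z

rng = np.random.default_rng(1)
W = rng.normal(size=(8, 8))
W *= 0.9 / np.linalg.norm(W, 2)  # rescale so the fixed point is well-posed
U = rng.normal(size=(8, 4))
b = rng.normal(size=8)
z = implicit_layer(rng.normal(size=4), W, U, b)
```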
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Joint Superposition Coding and Training for Federated Learning over Multi-Width Neural Networks [52.93232352968347]
This paper aims to integrate two synergetic technologies, federated learning (FL) and width-adjustable slimmable neural networks (SNNs).
FL preserves data privacy by exchanging the locally trained models of mobile devices. Training SNNs is non-trivial, however, particularly under wireless connections with time-varying channel conditions.
We propose a communication and energy-efficient SNN-based FL (named SlimFL) that jointly utilizes superposition coding (SC) for global model aggregation and superposition training (ST) for updating local models.
arXiv Detail & Related papers (2021-12-05T11:17:17Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Communication-Efficient Federated Learning with Binary Neural Networks [15.614120327271557]
Federated learning (FL) is a privacy-preserving machine learning setting.
FL involves frequent exchange of model parameters between all the clients and the server that coordinates the training.
In this paper, we consider training binary neural networks (BNNs) in the FL setting instead of the typical real-valued neural networks.
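As a generic illustration of why binary networks cut communication (using XNOR-Net-style sign binarization with a per-tensor scale, which is a stand-in here, not necessarily that paper's exact scheme): each weight costs one sign bit plus a single shared float per tensor instead of 32 bits.

```python
import numpy as np

def binarize(w: np.ndarray):
    """XNOR-Net-style binarization: w is approximated by alpha * sign(w).

    Transmitting the sign bits plus one float scale needs about 1 bit per
    weight instead of 32, which is the source of the bandwidth savings.
    """
    alpha = np.abs(w).mean()             # per-tensor scaling factor
    signs = np.where(w >= 0, 1.0, -1.0)  # 1 bit per weight on the wire
    return alpha, signs

w = np.random.default_rng(2).normal(size=(256, 128))
alpha, signs = binarize(w)
w_hat = alpha * signs  # dequantized weights reconstructed at the server
```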
arXiv Detail & Related papers (2021-10-05T15:59:49Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) in terms of low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Distilling Spikes: Knowledge Distillation in Spiking Neural Networks [22.331135708302586]
Spiking Neural Networks (SNNs) are energy-efficient computing architectures that process information by exchanging spikes.
We propose techniques for knowledge distillation in spiking neural networks for the task of image classification.
Our approach is expected to open up new avenues for deploying high performing large SNN models on resource-constrained hardware platforms.
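For context, a minimal sketch of the standard temperature-scaled distillation loss that knowledge-distillation methods build on; this is the generic formulation, and the SNN-specific ingredients of that paper are not reproduced here.

```python
import numpy as np

def softmax(x, T=1.0):
    z = x / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distillation_loss(student_logits, teacher_logits, T=4.0):
    """KL divergence between temperature-softened teacher and student outputs."""
    p = softmax(teacher_logits, T)  # soft targets from the teacher
    q = softmax(student_logits, T)
    kl = np.sum(p * (np.log(p + 1e-12) - np.log(q + 1e-12)), axis=-1)
    return (T * T) * kl.mean()      # T^2 keeps gradient scale comparable

teacher = np.array([[2.0, 0.5, -1.0]])
student = np.array([[1.5, 0.7, -0.8]])
loss = distillation_loss(student, teacher)
```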
arXiv Detail & Related papers (2020-05-01T09:36:32Z)