Federated Learning with Spiking Neural Networks
- URL: http://arxiv.org/abs/2106.06579v1
- Date: Fri, 11 Jun 2021 19:00:58 GMT
- Title: Federated Learning with Spiking Neural Networks
- Authors: Yeshwanth Venkatesha, Youngeun Kim, Leandros Tassiulas, Priyadarshini
Panda
- Abstract summary: Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
We propose a federated learning framework for decentralized and privacy-preserving training of SNNs.
We observe that SNNs outperform ANNs in terms of overall accuracy by over 15% when the data is distributed across a large number of clients in the federation.
- Score: 13.09613811272936
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As neural networks get widespread adoption in resource-constrained embedded
devices, there is a growing need for low-power neural systems. Spiking Neural
Networks (SNNs) are emerging as an energy-efficient alternative to the
traditional Artificial Neural Networks (ANNs), which are known to be
computationally intensive. From an application perspective, as federated
learning involves multiple energy-constrained devices, there is considerable
scope to leverage the energy efficiency of SNNs. Despite its importance, little
attention has been paid to training SNNs on a large-scale distributed system like
federated learning. In this paper, we bring SNNs to a more realistic federated
learning scenario. Specifically, we propose a federated learning framework for
decentralized and privacy-preserving training of SNNs. To validate the proposed
federated learning framework, we experimentally evaluate the advantages of SNNs
on various aspects of federated learning with CIFAR10 and CIFAR100 benchmarks.
We observe that SNNs outperform ANNs in terms of overall accuracy by over 15%
when the data is distributed across a large number of clients in the federation
while providing up to 5.3x energy efficiency. In addition to efficiency, we also
analyze the sensitivity of the proposed federated SNN framework to data
distribution among the clients, stragglers, and gradient noise and perform a
comprehensive comparison with ANNs.
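As a concrete illustration, a FedAvg-style round over SNN clients could look like the following minimal PyTorch sketch; the model, data loaders, and the surrogate-gradient local update are illustrative placeholders, not the authors' exact implementation.

    import copy
    import torch

    def local_update(model, loader, epochs=1, lr=0.1):
        # Each client trains its copy of the SNN locally; a surrogate
        # gradient (not shown) makes the spiking nonlinearity trainable.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for _ in range(epochs):
            for x, y in loader:
                opt.zero_grad()
                loss_fn(model(x), y).backward()
                opt.step()
        return model.state_dict()

    def fedavg(global_model, client_loaders, rounds=10):
        # Server loop: broadcast weights, train locally, average back.
        for _ in range(rounds):
            states = [local_update(copy.deepcopy(global_model), dl)
                      for dl in client_loaders]
            avg = {k: torch.stack([s[k].float() for s in states]).mean(0)
                   for k in states[0]}
            global_model.load_state_dict(avg)
        return global_model

Only model weights leave each client, which is what makes such a scheme privacy-preserving relative to sharing raw data.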
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs across a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Rethinking Residual Connection in Training Large-Scale Spiking Neural Networks [10.286425749417216]
The Spiking Neural Network (SNN) is among the best-known brain-inspired models.
Its non-differentiable spiking mechanism makes large-scale SNNs hard to train.
arXiv Detail & Related papers (2023-11-09T06:48:29Z)
- LC-TTFS: Towards Lossless Network Conversion for Spiking Neural Networks with TTFS Coding [55.64533786293656]
We show that our algorithm can achieve a near-perfect mapping between the activation values of an ANN and the spike times of an SNN on a number of challenging AI tasks.
The study paves the way for deploying ultra-low-power TTFS-based SNNs on power-constrained edge computing platforms.
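For intuition, TTFS coding represents a stronger activation with an earlier spike, so a single spike per neuron can carry the activation value. A toy encoder under that convention (the linear activation-to-time mapping is an illustrative assumption, not the paper's exact scheme):

    import torch

    def ttfs_encode(activations, t_max=100.0):
        # Map normalized activations in [0, 1] to first-spike times in
        # [0, t_max]: larger activation -> earlier (smaller) spike time.
        a = activations.clamp(0.0, 1.0)
        return (1.0 - a) * t_max

    spike_times = ttfs_encode(torch.tensor([0.9, 0.5, 0.1]))
    # tensor([10., 50., 90.]) -- earliest spike for the largest activation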
arXiv Detail & Related papers (2023-10-23T14:26:16Z)
- Joint A-SNN: Joint Training of Artificial and Spiking Neural Networks via Self-Distillation and Weight Factorization [12.1610509770913]
Spiking Neural Networks (SNNs) mimic the spiking nature of brain neurons.
We propose a joint training framework of ANN and SNN, in which the ANN can guide the SNN's optimization.
Our method consistently outperforms many other state-of-the-art training methods.
arXiv Detail & Related papers (2023-05-03T13:12:17Z)
- LaSNN: Layer-wise ANN-to-SNN Distillation for Effective and Efficient Training in Deep Spiking Neural Networks [7.0691139514420005]
Spiking Neural Networks (SNNs) are biologically realistic and practically promising for low-power applications because of their event-driven mechanism.
A conversion scheme is proposed to obtain competitive accuracy by mapping trained ANNs' parameters to SNNs with the same structures.
A novel SNN training framework is proposed, namely layer-wise ANN-to-SNN knowledge distillation (LaSNN).
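A layer-wise distillation objective of the kind the framework's name implies can be sketched as below; the feature pairing and MSE criterion are assumptions for illustration, not LaSNN's exact losses.

    import torch

    def layerwise_distill_loss(snn_feats, ann_feats, weights=None):
        # Pull each intermediate SNN feature toward the corresponding
        # feature of the frozen ANN teacher, layer by layer.
        weights = weights or [1.0] * len(snn_feats)
        return sum(w * torch.nn.functional.mse_loss(s, a.detach())
                   for w, s, a in zip(weights, snn_feats, ann_feats))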
arXiv Detail & Related papers (2023-04-17T03:49:35Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
- Knowledge Enhanced Neural Networks for relational domains [83.9217787335878]
We focus on a specific method, KENN, a Neural-Symbolic architecture that injects prior logical knowledge into a neural network.
In this paper, we propose an extension of KENN for relational data.
arXiv Detail & Related papers (2022-05-31T13:00:34Z)
- Comparative Analysis of Interval Reachability for Robust Implicit and Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
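"Implicit equations as layers" means the layer's output z is defined by a fixed-point condition z = phi(Wz + Ux + b) rather than a feedforward formula. A minimal fixed-point iteration illustrating the idea (the ReLU update and tolerance are illustrative choices, not this paper's setup):

    import torch

    def implicit_layer(x, W, U, b, iters=50, tol=1e-6):
        # Solve z = relu(W @ z + U @ x + b) by fixed-point iteration;
        # convergence requires W to be contractive (e.g., small norm).
        z = torch.zeros(W.shape[0])
        for _ in range(iters):
            z_next = torch.relu(W @ z + U @ x + b)
            if torch.norm(z_next - z) < tol:
                break
            z = z_next
        return z_next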
arXiv Detail & Related papers (2022-04-01T03:31:27Z)
- Spiking neural networks trained via proxy [0.696125353550498]
We propose a new learning algorithm to train spiking neural networks (SNNs) using conventional artificial neural networks (ANNs) as a proxy.
We couple an SNN and an ANN, made of integrate-and-fire (IF) and ReLU neurons respectively, with the same network architecture and shared synaptic weights.
By treating the rate-coded IF neuron as an approximation of ReLU, we backpropagate the error of the SNN through the proxy ANN to update the shared weights, simply by replacing the ANN's final output with that of the SNN.
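This output substitution can be implemented as a straight-through trick: the forward value comes from the SNN while the gradient flows through the differentiable ANN, updating the shared weights. A sketch under those assumptions (not the authors' code; ann and snn are assumed to share their weight tensors):

    import torch

    def proxy_training_step(ann, snn, x, y, opt):
        # The ANN (ReLU) is the differentiable proxy; the SNN
        # (rate-coded IF neurons) provides the actual output value.
        ann_out = ann(x)
        with torch.no_grad():
            snn_out = snn(x)  # spike rates over the simulation window
        # Value of `out` equals snn_out; the gradient path is ann_out's.
        out = ann_out + (snn_out - ann_out).detach()
        loss = torch.nn.functional.cross_entropy(out, y)
        opt.zero_grad()
        loss.backward()
        opt.step()
        return loss.item()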
arXiv Detail & Related papers (2021-09-27T17:29:51Z)
- Explore the Knowledge contained in Network Weights to Obtain Sparse Neural Networks [2.649890751459017]
This paper proposes a novel learning approach to obtain sparse fully connected layers in neural networks (NNs) automatically.
We design a switcher neural network (SNN) to optimize the structure of the task neural network (TNN).
arXiv Detail & Related papers (2021-03-26T11:29:40Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)