Spiking Neural Networks in Vertical Federated Learning: Performance Trade-offs
- URL: http://arxiv.org/abs/2407.17672v2
- Date: Tue, 13 Aug 2024 22:46:55 GMT
- Title: Spiking Neural Networks in Vertical Federated Learning: Performance Trade-offs
- Authors: Maryam Abbasihafshejani, Anindya Maiti, Murtuza Jadliwala
- Abstract summary: Federated machine learning enables model training across multiple clients.
Vertical Federated Learning (VFL) deals with instances where the clients have different feature sets of the same samples.
Spiking Neural Networks (SNNs) are being leveraged to enable fast and accurate processing at the edge.
- Score: 2.1756721838833797
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Federated machine learning enables model training across multiple clients while maintaining data privacy. Vertical Federated Learning (VFL) specifically deals with instances where the clients have different feature sets of the same samples. As federated learning models aim to improve efficiency and adaptability, innovative neural network architectures like Spiking Neural Networks (SNNs) are being leveraged to enable fast and accurate processing at the edge. SNNs, known for their efficiency over Artificial Neural Networks (ANNs), have not yet been analyzed for their applicability in VFL. In this paper, we investigate the benefits and trade-offs of using SNN models in a vertical federated learning setting. We implement two different federated learning architectures -- with model splitting and without model splitting -- that have different privacy and performance implications. We evaluate the setup using the CIFAR-10 and CIFAR-100 benchmark datasets along with SNN implementations of the VGG9 and ResNet classification models. Comparative evaluations demonstrate that the accuracy of SNN models is comparable to that of traditional ANNs for VFL applications, while being significantly more energy efficient.
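The split setup described in the abstract can be made concrete with a short sketch. Below is a minimal, hypothetical PyTorch rendering of VFL with model splitting: each client encodes only its own feature slice and exchanges activations and gradients with a server that holds the labels. Plain layers stand in for the paper's SNN extractors (a real run would use surrogate-gradient spiking layers, e.g. via snnTorch), and all names and sizes are illustrative.

```python
# Minimal sketch of VFL *with model splitting*: each client encodes its own
# vertical slice of the features and only exchanges activations/gradients
# with the server, never raw data. The client encoders stand in for the
# paper's SNN feature extractors; sizes are illustrative.
import torch
import torch.nn as nn

torch.manual_seed(0)
n_samples, n_classes = 32, 10
feature_splits = [12, 20]                       # disjoint feature sets per client

clients = [nn.Sequential(nn.Linear(d, 16), nn.ReLU()) for d in feature_splits]
server_head = nn.Linear(16 * len(clients), n_classes)

params = [p for m in clients for p in m.parameters()]
params += list(server_head.parameters())
opt = torch.optim.SGD(params, lr=0.1)

x = torch.randn(n_samples, sum(feature_splits))  # same samples, split by feature
y = torch.randint(0, n_classes, (n_samples,))
x_parts = torch.split(x, feature_splits, dim=1)

# One VFL round: clients send embeddings, the server computes the label loss,
# and gradients flow back through the embeddings to each client's encoder.
embeddings = [enc(part) for enc, part in zip(clients, x_parts)]
logits = server_head(torch.cat(embeddings, dim=1))
loss = nn.functional.cross_entropy(logits, y)
opt.zero_grad(); loss.backward(); opt.step()
print(f"round loss: {loss.item():.3f}")
```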
Related papers
- The Robustness of Spiking Neural Networks in Communication and its Application towards Network Efficiency in Federated Learning [6.9569682335746235]
Spiking Neural Networks (SNNs) have recently gained significant interest in on-chip learning in embedded devices.
In this paper, we explore the inherent robustness of SNNs under noisy communication in Federated Learning.
We propose a novel Federated Learning with TopK Sparsification algorithm to reduce the bandwidth usage for FL training.
arXiv Detail & Related papers (2024-09-19T13:37:18Z)
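The top-k sparsification named in the entry above is easy to state in a few lines: keep only the k largest-magnitude entries of a client update before upload, cutting bandwidth roughly by 1 - k/n. The sketch below is illustrative only; the function names are made up here rather than taken from the paper.

```python
# Sketch of top-k sparsification of a client update before upload: only the
# k largest-magnitude entries are transmitted as index/value pairs.
import torch

def topk_sparsify(update: torch.Tensor, k: int):
    flat = update.flatten()
    _, idx = torch.topk(flat.abs(), k)
    return idx, flat[idx]                       # what actually goes over the wire

def densify(idx, vals, shape):
    out = torch.zeros(shape).flatten()
    out[idx] = vals
    return out.reshape(shape)

update = torch.randn(4, 4)
idx, vals = topk_sparsify(update, k=4)
restored = densify(idx, vals, update.shape)     # server-side reconstruction
```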
- On Learnable Parameters of Optimal and Suboptimal Deep Learning Models [2.889799048595314]
We study the structural and operational aspects of deep learning models.
Our research focuses on the nuances of learnable parameter (weight) statistics, distributions, node interactions, and visualization.
arXiv Detail & Related papers (2024-08-21T15:50:37Z)
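The kind of per-layer weight statistics such a study inspects can be sketched quickly. The model below is a toy stand-in, and the chosen statistics (mean, standard deviation, L2 norm) are an assumption about what "weight statistics" covers.

```python
# Sketch: per-layer weight statistics of a (stand-in) trained network.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
for name, p in model.named_parameters():
    if p.dim() > 1:                             # weight matrices only
        w = p.detach()
        print(f"{name}: mean={w.mean().item():.4f} "
              f"std={w.std().item():.4f} l2={w.norm().item():.4f}")
```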
- Heterogeneous Federated Learning with Convolutional and Spiking Neural Networks [17.210940028586588]
Federated learning (FL) has emerged as a promising paradigm for training models on decentralized data.
This work benchmarks FL systems containing both convolutional neural networks (CNNs) and biologically more plausible spiking neural networks (SNNs).
Experimental results demonstrate that the CNN-SNN fusion framework exhibits the best performance.
arXiv Detail & Related papers (2024-06-14T03:05:05Z)
- Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z)
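The tandem idea above can be sketched as layer-local imitation of an ANN teacher: each student layer is fitted to reproduce the matching teacher layer's output, so no end-to-end backpropagation through spiking dynamics is needed. The student below is a plain layer standing in for a spiking layer with rate-coded outputs; the actual LTL rule is specified in the cited paper.

```python
# Hedged sketch of layer-local tandem learning: the student layer is trained
# with a purely local loss against the frozen ANN teacher layer's output.
import torch
import torch.nn as nn

teacher = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
student = nn.Sequential(nn.Linear(8, 16), nn.ReLU())
opt = torch.optim.SGD(student.parameters(), lr=0.1)

x = torch.randn(64, 8)
with torch.no_grad():
    target = teacher(x)                         # local target from the teacher

local_loss = nn.functional.mse_loss(student(x), target)
opt.zero_grad(); local_loss.backward(); opt.step()
```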
- Batch-Ensemble Stochastic Neural Networks for Out-of-Distribution Detection [55.028065567756066]
Out-of-distribution (OOD) detection has recently received much attention from the machine learning community due to its importance in deploying machine learning models in real-world applications.
In this paper, we propose an uncertainty quantification approach by modelling the distribution of features.
We incorporate an efficient ensemble mechanism, namely batch-ensemble, to construct the batch-ensemble neural networks (BE-SNNs) and overcome the feature collapse problem.
We show that BE-SNNs yield superior performance on several OOD benchmarks, such as the Two-Moons dataset and the FashionMNIST vs MNIST dataset.
arXiv Detail & Related papers (2022-06-26T16:00:22Z)
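The batch-ensemble mechanism mentioned above pairs one shared weight matrix with cheap rank-1 factors per ensemble member, so E members cost far less than E independent networks. A minimal sketch of such a layer, with illustrative dimensions, follows; the full BE-SNN model in the paper adds feature-distribution modelling on top.

```python
# Sketch of a batch-ensemble linear layer: shared slow weights plus
# per-member rank-1 "fast" factors (r_m, s_m).
import torch
import torch.nn as nn

class BatchEnsembleLinear(nn.Module):
    def __init__(self, d_in, d_out, n_members):
        super().__init__()
        self.shared = nn.Linear(d_in, d_out, bias=False)    # shared weights
        self.r = nn.Parameter(torch.ones(n_members, d_in))  # input factors
        self.s = nn.Parameter(torch.ones(n_members, d_out)) # output factors

    def forward(self, x, member):
        # y = ((x * r_m) W^T) * s_m : a rank-1 modulation of the shared layer
        return self.shared(x * self.r[member]) * self.s[member]

layer = BatchEnsembleLinear(8, 4, n_members=3)
x = torch.randn(5, 8)
outs = torch.stack([layer(x, m) for m in range(3)])  # per-member predictions
print(outs.var(dim=0).mean())  # disagreement across members ~ OOD signal
```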
- Training High-Performance Low-Latency Spiking Neural Networks by Differentiation on Spike Representation [70.75043144299168]
Spiking Neural Network (SNN) is a promising energy-efficient AI model when implemented on neuromorphic hardware.
It is a challenge to efficiently train SNNs due to their non-differentiability.
We propose the Differentiation on Spike Representation (DSR) method, which could achieve high performance.
arXiv Detail & Related papers (2022-05-01T12:44:49Z)
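DSR differentiates through a spike representation rather than through individual spikes; as a generic illustration of the obstacle it addresses (the Heaviside spike function has zero gradient almost everywhere), here is the standard surrogate-gradient workaround, a related but distinct technique, not DSR itself.

```python
# Generic surrogate-gradient spike function: a hard threshold in the forward
# pass, a smooth sigmoid-derivative stand-in in the backward pass.
import torch

class SurrogateSpike(torch.autograd.Function):
    @staticmethod
    def forward(ctx, membrane_potential):
        ctx.save_for_backward(membrane_potential)
        return (membrane_potential > 0).float()  # non-differentiable spike

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(4.0 * v)             # d/dv sigmoid(4v)
        return grad_out * 4.0 * sig * (1 - sig)

v = torch.randn(10, requires_grad=True)
spikes = SurrogateSpike.apply(v)
spikes.sum().backward()                          # gradients now flow through
```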
- Federated Learning with Spiking Neural Networks [13.09613811272936]
Spiking Neural Networks (SNNs) are emerging as an energy-efficient alternative to traditional Artificial Neural Networks (ANNs).
We propose a federated learning framework for decentralized and privacy-preserving training of SNNs.
We observe that SNNs outperform ANNs in terms of overall accuracy by over 15% when the data is distributed across a large number of clients in the federation.
arXiv Detail & Related papers (2021-06-11T19:00:58Z)
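Because SNN weights are ordinary tensors, the server-side step of such a framework can look exactly like FedAvg: a sample-count-weighted average of client states. A minimal sketch with toy states follows; the cited framework's SNN-specific handling is described in the paper.

```python
# Sketch of the FedAvg server step: average client state dicts, weighted by
# each client's local sample count.
import torch

def fedavg(client_states, client_sizes):
    total = sum(client_sizes)
    return {k: sum(s[k] * (n / total)
                   for s, n in zip(client_states, client_sizes))
            for k in client_states[0].keys()}

# Toy states from two clients with different amounts of local data.
c1 = {"w": torch.ones(2, 2), "b": torch.zeros(2)}
c2 = {"w": torch.zeros(2, 2), "b": torch.ones(2)}
global_state = fedavg([c1, c2], client_sizes=[30, 10])
print(global_state["w"])   # 0.75 everywhere: weighted toward client 1
```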
- ANNETTE: Accurate Neural Network Execution Time Estimation with Stacked Models [56.21470608621633]
We propose a time estimation framework to decouple the architectural search from the target hardware.
The proposed methodology extracts a set of models from micro-kernel and multi-layer benchmarks and generates a stacked model for mapping and network execution time estimation.
For evaluation, we compare the estimation accuracy and fidelity of the generated mixed models and statistical models against the roofline model and a refined roofline model.
arXiv Detail & Related papers (2021-05-07T11:39:05Z)
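The roofline model referenced above gives a one-line lower bound on layer execution time: a layer is either compute-bound or memory-bound, so t >= max(FLOPs / peak FLOP rate, bytes moved / peak bandwidth). A tiny sketch with made-up hardware numbers:

```python
# Roofline-style execution time estimate; hardware numbers are hypothetical.
PEAK_FLOPS = 2e12        # 2 TFLOP/s (illustrative accelerator)
PEAK_BW = 100e9          # 100 GB/s memory bandwidth (illustrative)

def roofline_time(flops: float, bytes_moved: float) -> float:
    # A layer cannot finish faster than its compute or its memory traffic allows.
    return max(flops / PEAK_FLOPS, bytes_moved / PEAK_BW)

# Example layer: 50 MFLOPs of work, 20 MB of tensor traffic.
print(f"{roofline_time(50e6, 20e6) * 1e6:.1f} us")   # memory-bound here
```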
- Neural Architecture Search For LF-MMI Trained Time Delay Neural Networks [61.76338096980383]
A range of neural architecture search (NAS) techniques are used to automatically learn two types of hyper-parameters of state-of-the-art factored time delay neural networks (TDNNs).
These include the DARTS method integrating architecture selection with lattice-free MMI (LF-MMI) TDNN training.
Experiments conducted on a 300-hour Switchboard corpus suggest the auto-configured systems consistently outperform the baseline LF-MMI TDNN systems.
arXiv Detail & Related papers (2020-07-17T08:32:11Z)
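The DARTS relaxation mentioned above makes architecture choices differentiable: each candidate operation on an edge gets a learnable architecture weight, and the edge computes a softmax-weighted mixture of all operations. The operations below are toy stand-ins for TDNN configuration choices, and for brevity the sketch trains architecture and model weights together rather than with DARTS's bilevel scheme.

```python
# Sketch of a DARTS-style mixed operation with learnable architecture weights.
import torch
import torch.nn as nn

ops = nn.ModuleList([nn.Linear(8, 8), nn.Identity(),
                     nn.Sequential(nn.Linear(8, 8), nn.ReLU())])
alpha = nn.Parameter(torch.zeros(len(ops)))   # architecture parameters

def mixed_op(x):
    w = torch.softmax(alpha, dim=0)           # continuous relaxation
    return sum(wi * op(x) for wi, op in zip(w, ops))

x = torch.randn(4, 8)
loss = mixed_op(x).pow(2).mean()
loss.backward()                               # gradients reach alpha too
print(alpha.grad)
```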
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
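The rate-coding intuition behind ANN-to-SNN conversion is that an integrate-and-fire neuron driven by a constant input fires at a rate approximating ReLU of that input (clipped at 1), which is why trained ANN weights can be reused by spiking neurons. A small sketch of that correspondence, not of the paper's full conversion-plus-fine-tuning pipeline:

```python
# Integrate-and-fire firing rate ~ clipped ReLU of the constant input drive.
import torch

def if_firing_rate(drive: torch.Tensor, steps: int = 200, thresh: float = 1.0):
    v = torch.zeros_like(drive)
    spikes = torch.zeros_like(drive)
    for _ in range(steps):
        v = v + drive                 # integrate constant input current
        fired = (v >= thresh).float()
        spikes += fired
        v = v - fired * thresh        # soft reset by subtraction
    return spikes / steps             # firing rate over the window

drive = torch.linspace(-0.5, 1.0, 7)
print(if_firing_rate(drive))          # ~ReLU(drive), saturating at 1
print(torch.relu(drive))
```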
- Distilling Spikes: Knowledge Distillation in Spiking Neural Networks [22.331135708302586]
Spiking Neural Networks (SNNs) are energy-efficient computing architectures that process information by exchanging spikes.
We propose techniques for knowledge distillation in spiking neural networks for the task of image classification.
Our approach is expected to open up new avenues for deploying high performing large SNN models on resource-constrained hardware platforms.
arXiv Detail & Related papers (2020-05-01T09:36:32Z)
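The standard distillation loss such a scheme builds on combines a temperature-softened match to the teacher's outputs with the usual hard-label loss. A sketch follows; how the spiking teacher/student pair and spike-compatible targets are realized is specific to the cited paper.

```python
# Standard knowledge-distillation loss: softened KL term plus hard-label term.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels,
                      T: float = 4.0, alpha: float = 0.7):
    soft = F.kl_div(F.log_softmax(student_logits / T, dim=1),
                    F.softmax(teacher_logits / T, dim=1),
                    reduction="batchmean") * (T * T)   # gradient-scale fix
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

s = torch.randn(8, 10, requires_grad=True)   # student logits (toy)
t = torch.randn(8, 10)                       # teacher logits (toy)
y = torch.randint(0, 10, (8,))
print(distillation_loss(s, t, y))
```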