A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural
Networks
- URL: http://arxiv.org/abs/2308.05636v2
- Date: Thu, 12 Oct 2023 11:49:58 GMT
- Title: A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural
Networks
- Authors: Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina
and Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) mimic the behavior of the human brain to improve efficiency and reduce energy consumption.
Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting it.
This research compares traditional deep neural networks (DNNs) and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme.
- Score: 5.274804664403783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) is widely used today, especially through deep neural
networks (DNNs); however, increasing computational load and resource
requirements have led to cloud-based solutions. To address this problem, a new
generation of networks called Spiking Neural Networks (SNNs) has emerged, which
mimic the behavior of the human brain to improve efficiency and reduce energy
consumption. These networks often process large amounts of sensitive
information, such as confidential data, and thus privacy issues arise.
Homomorphic encryption (HE) offers a solution, allowing calculations to be
performed on encrypted data without decrypting it. This research compares
traditional DNNs and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption
scheme. Both the DNN and SNN models are based on LeNet-5, a widely used
convolutional architecture, and the networks are trained and compared on the
FashionMNIST dataset. The results show that
SNNs using HE achieve up to 40% higher accuracy than DNNs for low values of the
plaintext modulus t, although their execution time is longer due to their
time-coding nature with multiple time-steps.
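The key property the abstract relies on is that a homomorphic scheme lets arithmetic on ciphertexts translate into arithmetic on the underlying plaintexts. The paper uses the BFV scheme; as a much simpler, self-contained illustration of the same principle, the sketch below implements a toy Paillier cryptosystem (a different, additively homomorphic scheme, with tiny, insecure parameters chosen purely for readability): multiplying two ciphertexts yields an encryption of the sum of the plaintexts.

```python
from math import gcd

def lcm(a: int, b: int) -> int:
    return a * b // gcd(a, b)

# Toy Paillier keypair. These primes are far too small for real security;
# they only make the homomorphic property easy to check by hand.
p, q = 293, 433
n = p * q
n2 = n * n
g = n + 1                      # standard choice of generator
lam = lcm(p - 1, q - 1)        # private key
mu = pow(lam, -1, n)           # valid because g = n + 1

def encrypt(m: int, r: int) -> int:
    """Encrypt plaintext m with randomness r (r must be coprime to n)."""
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c: int) -> int:
    x = pow(c, lam, n2)
    return (((x - 1) // n) * mu) % n

c1 = encrypt(5, 17)
c2 = encrypt(7, 23)
c_sum = (c1 * c2) % n2   # homomorphic addition: multiply the ciphertexts
assert decrypt(c_sum) == 5 + 7
```

BFV additionally supports homomorphic multiplication over polynomial rings and exposes the plaintext modulus t discussed in the abstract, but the accuracy/latency trade-off the paper measures rests on this same idea of evaluating the network directly on encrypted inputs.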
Related papers
- MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via
Automating Deep Neural Network Porting for Mobile Deployment [54.77943671991863]
MatchNAS is a novel scheme for porting Deep Neural Networks to mobile devices.
We optimise a large network family using both labelled and unlabelled data.
We then automatically search for tailored networks for different hardware platforms.
arXiv Detail & Related papers (2024-02-21T04:43:12Z)
- Low Latency Conversion of Artificial Neural Network Models to Rate-encoded Spiking Neural Networks [11.300257721586432]
Spiking neural networks (SNNs) are well suited for resource-constrained applications.
In a typical rate-encoded SNN, a series of binary spikes within a globally fixed time window is used to fire the neurons.
The aim of this paper is to reduce this time window while maintaining accuracy when converting ANNs to their equivalent SNNs.
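The rate encoding described above can be sketched as a Bernoulli sampler: each input intensity in [0, 1] becomes the per-step probability of emitting a binary spike within the fixed time window. A minimal sketch (the function name and shapes are illustrative assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def rate_encode(intensities, n_steps: int) -> np.ndarray:
    """Bernoulli rate coding: intensity in [0, 1] -> spike probability per step.

    Returns a binary array of shape (n_steps, *intensities.shape).
    """
    intensities = np.asarray(intensities, dtype=float)
    return (rng.random((n_steps,) + intensities.shape) < intensities).astype(np.uint8)

pixels = np.array([0.0, 0.25, 0.9])
spikes = rate_encode(pixels, n_steps=1000)
rates = spikes.mean(axis=0)   # empirical firing rates approximate the intensities
```

The longer the time window (n_steps), the better the firing rates approximate the original intensities, which is exactly the latency/accuracy trade-off this related paper targets.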
arXiv Detail & Related papers (2022-10-27T08:13:20Z)
- DPSNN: A Differentially Private Spiking Neural Network with Temporal Enhanced Pooling [6.63071861272879]
Spiking neural network (SNN), the new generation of artificial neural networks, plays a crucial role in many fields.
This paper combines differential privacy (DP) with SNNs and proposes a differentially private spiking neural network (DPSNN).
The SNN transmits information through discrete spike sequences, which, combined with the gradient noise introduced by DP, gives the network strong privacy protection.
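The gradient-noise mechanism mentioned in the DPSNN summary follows the general DP-SGD recipe: clip each per-example gradient, average, and add calibrated Gaussian noise. A minimal sketch under those assumptions (illustrative names; not the paper's actual implementation):

```python
import numpy as np

rng = np.random.default_rng(42)

def dp_sgd_step(per_example_grads, clip_norm: float, noise_multiplier: float) -> np.ndarray:
    """One DP-SGD-style aggregation: clip each gradient to clip_norm in L2,
    average, then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        clipped.append(g * min(1.0, clip_norm / max(norm, 1e-12)))
    avg = np.mean(clipped, axis=0)
    sigma = noise_multiplier * clip_norm / len(per_example_grads)
    return avg + rng.normal(0.0, sigma, size=avg.shape)

grads = [np.array([3.0, 4.0]), np.array([0.3, 0.4])]
noisy_avg = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=0.1)
```

Clipping bounds each example's influence on the update, which is what makes the added noise sufficient for a differential-privacy guarantee.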
arXiv Detail & Related papers (2022-05-24T05:27:53Z)
- Deep Binary Reinforcement Learning for Scalable Verification [44.44006029119672]
We present an RL algorithm tailored specifically for binarized neural networks (BNNs).
After training BNNs for the Atari environments, we verify robustness properties.
arXiv Detail & Related papers (2022-03-11T01:20:23Z)
- PrivateSNN: Fully Privacy-Preserving Spiking Neural Networks [6.336941090564427]
PrivateSNN aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in a dataset.
We tackle two types of leakage problems, one of which is data leakage, caused when the networks access real training data during the ANN-SNN conversion process.
In order to address the data leakage issue, we generate synthetic images from the pre-trained ANNs and convert ANNs to SNNs using generated images.
We observe that the encrypted PrivateSNN can be implemented not only without a huge performance drop but also with significant energy efficiency.
arXiv Detail & Related papers (2021-04-07T22:14:02Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for Network Intrusion Detection [6.980076213134383]
Spiking neural network (SNN) is interesting due to its strong bio-plausibility and high energy efficiency.
However, its performance falls far behind that of conventional deep neural networks (DNNs).
arXiv Detail & Related papers (2020-10-15T14:46:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of TTFS-encoded neuromorphic systems.
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
By our approach, either the data or internal layers of the DNN are coded with error correcting codes, and successful computation under noise is guaranteed.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
This list is automatically generated from the titles and abstracts of the papers in this site.