A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural
Networks
- URL: http://arxiv.org/abs/2308.05636v2
- Date: Thu, 12 Oct 2023 11:49:58 GMT
- Title: A Homomorphic Encryption Framework for Privacy-Preserving Spiking Neural
Networks
- Authors: Farzad Nikfam, Raffaele Casaburi, Alberto Marchisio, Maurizio Martina
and Muhammad Shafique
- Abstract summary: Spiking Neural Networks (SNNs) mimic the behavior of the human brain to improve efficiency and reduce energy consumption.
Homomorphic encryption (HE) offers a solution, allowing calculations to be performed on encrypted data without decrypting it.
This research compares traditional deep neural networks (DNNs) and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption scheme.
- Score: 5.274804664403783
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Machine learning (ML) is widely used today, especially through deep neural
networks (DNNs); however, increasing computational load and resource
requirements have led to cloud-based solutions. To address this problem, a new
generation of networks called Spiking Neural Networks (SNNs) has emerged, which
mimic the behavior of the human brain to improve efficiency and reduce energy
consumption. These networks often process large amounts of sensitive
information, such as confidential data, and thus privacy issues arise.
Homomorphic encryption (HE) offers a solution, allowing calculations to be
performed on encrypted data without decrypting it. This research compares
traditional DNNs and SNNs using the Brakerski/Fan-Vercauteren (BFV) encryption
scheme. The LeNet-5 model, a widely used convolutional architecture, serves as
the basis for both the DNN and SNN models, and the networks are trained and
compared on the FashionMNIST dataset. The results show that
SNNs using HE achieve up to 40% higher accuracy than DNNs for low values of the
plaintext modulus t, although their execution time is longer because their
time-coded processing unfolds over multiple time-steps.
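To make the encrypted-computation setting concrete, below is a minimal sketch of BFV arithmetic on ciphertexts, written against the open-source TenSEAL library as a stand-in (the paper's own toolchain and parameter values are not assumed here). The plain_modulus argument plays the role of the plaintext modulus t from the abstract: it bounds the representable integer range, which is why accuracy depends on it.

```python
# Minimal BFV sketch using TenSEAL (illustrative parameters only; this is
# not the paper's code or configuration).
import tenseal as ts

# BFV context; plain_modulus is the plaintext modulus t discussed above.
ctx = ts.context(ts.SCHEME_TYPE.BFV,
                 poly_modulus_degree=4096,
                 plain_modulus=1032193)

pixels = [60, 66, 73, 81, 90]        # e.g., quantized image pixels (client side)
enc = ts.bfv_vector(ctx, pixels)     # encrypt before sending to the cloud

# The server computes directly on ciphertexts, never seeing the data:
scaled = enc * [2, 2, 2, 2, 2]       # elementwise ciphertext-plaintext multiply
shifted = scaled + [1, 1, 1, 1, 1]   # elementwise ciphertext-plaintext add

# Only the client, holding the secret key, can decrypt the result.
print(shifted.decrypt())             # -> [121, 133, 147, 163, 181]
```

All arithmetic happens modulo t, so a small t keeps ciphertext parameters manageable but overflows sooner; the low-t regime compared in the abstract stresses exactly this trade-off.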
Related papers
- NAS-BNN: Neural Architecture Search for Binary Neural Networks [55.058512316210056]
We propose a novel neural architecture search scheme for binary neural networks, named NAS-BNN.
Our discovered binary model family outperforms previous BNNs over a wide range of operation counts (OPs), from 20M to 200M.
In addition, we validate the transferability of these searched BNNs on the object detection task, and our binary detectors with the searched BNNs achieve a new state-of-the-art result, e.g., 31.6% mAP with 370M OPs, on the MS COCO dataset.
arXiv Detail & Related papers (2024-08-28T02:17:58Z)
- Adaptive Spiking Neural Networks with Hybrid Coding [0.0]
Spiking Neural Networks (SNNs) are more energy-efficient and effective than Artificial Neural Networks (ANNs).
Traditional SNNs utilize the same neurons when processing input data across different time steps, limiting their ability to integrate and utilize temporal information effectively.
This paper introduces a hybrid encoding approach that not only reduces the required time steps for training but also continues to improve the overall network performance.
arXiv Detail & Related papers (2024-08-22T13:58:35Z)
- RSC-SNN: Exploring the Trade-off Between Adversarial Robustness and Accuracy in Spiking Neural Networks via Randomized Smoothing Coding [17.342181435229573]
Spiking Neural Networks (SNNs) have received widespread attention due to their unique neuronal dynamics and low-power nature.
Previous research empirically shows that SNNs with Poisson coding are more robust than Artificial Neural Networks (ANNs) on small-scale datasets.
This work theoretically demonstrates that SNNs' inherent adversarial robustness stems from their Poisson coding (see the spike-coding sketch after this list).
arXiv Detail & Related papers (2024-07-29T15:26:15Z)
- MatchNAS: Optimizing Edge AI in Sparse-Label Data Contexts via Automating Deep Neural Network Porting for Mobile Deployment [54.77943671991863]
MatchNAS is a novel scheme for porting Deep Neural Networks to mobile devices.
We optimise a large network family using both labelled and unlabelled data.
We then automatically search for tailored networks for different hardware platforms.
arXiv Detail & Related papers (2024-02-21T04:43:12Z)
- PrivateSNN: Fully Privacy-Preserving Spiking Neural Networks [6.336941090564427]
PrivateSNN aims to build low-power Spiking Neural Networks (SNNs) from a pre-trained ANN model without leaking sensitive information contained in a dataset.
We tackle leakage problems such as data leakage, which arises when the networks access real training data during the ANN-SNN conversion process.
In order to address the data leakage issue, we generate synthetic images from the pre-trained ANNs and convert ANNs to SNNs using generated images.
We observe that the encrypted PrivateSNN can be implemented not only without a huge performance drop but also with significant energy-efficiency gains.
arXiv Detail & Related papers (2021-04-07T22:14:02Z)
- Deep Time Delay Neural Network for Speech Enhancement with Full Data Learning [60.20150317299749]
This paper proposes a deep time delay neural network (TDNN) for speech enhancement with full data learning.
To make full use of the training data, we propose a full data learning method for speech enhancement.
arXiv Detail & Related papers (2020-11-11T06:32:37Z)
- Spiking Neural Networks with Single-Spike Temporal-Coded Neurons for Network Intrusion Detection [6.980076213134383]
Spiking neural networks (SNNs) are interesting due to their strong bio-plausibility and high energy efficiency.
However, their performance still falls far behind that of conventional deep neural networks (DNNs).
arXiv Detail & Related papers (2020-10-15T14:46:18Z)
- Progressive Tandem Learning for Pattern Recognition with Deep Spiking Neural Networks [80.15411508088522]
Spiking neural networks (SNNs) have shown advantages over traditional artificial neural networks (ANNs) for low latency and high computational efficiency.
We propose a novel ANN-to-SNN conversion and layer-wise learning framework for rapid and efficient pattern recognition.
arXiv Detail & Related papers (2020-07-02T15:38:44Z)
- Boosting Deep Neural Networks with Geometrical Prior Knowledge: A Survey [77.99182201815763]
Deep Neural Networks (DNNs) achieve state-of-the-art results in many different problem settings.
DNNs are often treated as black box systems, which complicates their evaluation and validation.
One promising field, inspired by the success of convolutional neural networks (CNNs) in computer vision tasks, is to incorporate knowledge about symmetric geometrical transformations.
arXiv Detail & Related papers (2020-06-30T14:56:05Z)
- You Only Spike Once: Improving Energy-Efficient Neuromorphic Inference to ANN-Level Accuracy [51.861168222799186]
Spiking Neural Networks (SNNs) are a type of neuromorphic, or brain-inspired network.
SNNs are sparse, accessing very few weights, and typically only use addition operations instead of the more power-intensive multiply-and-accumulate operations.
In this work, we aim to overcome the limitations of time-to-first-spike (TTFS)-encoded neuromorphic systems (see the spike-coding sketch after this list).
arXiv Detail & Related papers (2020-06-03T15:55:53Z)
- CodNN -- Robust Neural Networks From Coded Classification [27.38642191854458]
Deep Neural Networks (DNNs) are a revolutionary force in the ongoing information revolution.
DNNs are highly sensitive to noise, whether adversarial or random.
This poses a fundamental challenge for hardware implementations of DNNs, and for their deployment in critical applications such as autonomous driving.
In our approach, either the data or the internal layers of the DNN are coded with error-correcting codes, and successful computation under noise is guaranteed.
arXiv Detail & Related papers (2020-04-22T17:07:15Z)
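Several entries above turn on how inputs are converted into spikes across time steps: Poisson (rate) coding in RSC-SNN, hybrid coding in the adaptive-SNN paper, and time-to-first-spike (TTFS) coding in You Only Spike Once; multiple time-steps are also why the HE-encrypted SNN in the main abstract runs longer. The sketch below, with hypothetical helper names and not taken from any listed paper, shows the two basic encodings:

```python
# Poisson (rate) coding vs. time-to-first-spike (TTFS) coding, sketched
# with hypothetical helpers; not code from any paper in this list.
import numpy as np

rng = np.random.default_rng(0)

def poisson_encode(x, t_steps):
    """x: intensities in [0, 1] -> binary spike train (t_steps, len(x)).
    Each time step spikes with probability equal to the intensity."""
    return (rng.random((t_steps, x.size)) < x).astype(np.uint8)

def ttfs_encode(x, t_steps):
    """Exactly one spike per input: stronger intensity -> earlier spike."""
    times = np.round((1.0 - x) * (t_steps - 1)).astype(int)
    train = np.zeros((t_steps, x.size), dtype=np.uint8)
    train[times, np.arange(x.size)] = 1
    return train

pixels = np.array([0.1, 0.5, 0.9])   # e.g., normalized FashionMNIST pixels
print(poisson_encode(pixels, 8))     # many stochastic spikes over 8 steps
print(ttfs_encode(pixels, 8))        # one deterministic spike per pixel
```

Rate coding gains accuracy with more time steps at the cost of latency and energy, while TTFS keeps spike counts (and thus energy) minimal; this is the trade-off the TTFS and hybrid-coding papers above try to relax.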
This list is automatically generated from the titles and abstracts of the papers on this site.