Efficient Privacy-Preserving Convolutional Spiking Neural Networks with
FHE
- URL: http://arxiv.org/abs/2309.09025v1
- Date: Sat, 16 Sep 2023 15:37:18 GMT
- Title: Efficient Privacy-Preserving Convolutional Spiking Neural Networks with
FHE
- Authors: Pengbo Li, Huifang Huang, Ting Gao, Jin Guo, Jinqiao Duan
- Abstract summary: Fully Homomorphic Encryption (FHE) is a key technology for privacy-preserving computation.
FHE has limitations in processing continuous non-polynomial functions.
We present a framework called FHE-DiCSNN for homomorphic SNNs.
FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%.
- Score: 1.437446768735628
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the rapid development of AI technology, we have witnessed numerous
innovations and conveniences. However, along with these advancements come
privacy threats and risks. Fully Homomorphic Encryption (FHE) emerges as a key
technology for privacy-preserving computation, enabling computations while
maintaining data privacy. Nevertheless, FHE has limitations in processing
continuous non-polynomial functions as it is restricted to discrete integers
and supports only addition and multiplication. Spiking Neural Networks (SNNs)
operate on discrete spike signals, naturally aligning with the properties of
FHE. In this paper, we present a framework called FHE-DiCSNN. This framework is
based on the efficient TFHE scheme and leverages the discrete properties of
SNNs to achieve high prediction performance on ciphertexts. Firstly, by
employing bootstrapping techniques, we successfully implement computations of
the Leaky Integrate-and-Fire neuron model on ciphertexts. Through
bootstrapping, we can facilitate computations for SNNs of arbitrary depth. This
framework can be extended to other spiking neuron models, providing a novel
framework for the homomorphic evaluation of SNNs. Secondly, inspired by CNNs,
we adopt convolutional methods to replace Poisson encoding. This not only
enhances accuracy but also mitigates the issue of prolonged simulation time
caused by random encoding. Furthermore, we employ engineering techniques to
parallelize the computation of bootstrapping, resulting in a significant
improvement in computational efficiency. Finally, we evaluate our model on the
MNIST dataset. Experimental results demonstrate that, with the optimal
parameter configuration, FHE-DiCSNN achieves an accuracy of 97.94% on
ciphertexts, with a loss of only 0.53% compared to the original network's
accuracy of 98.47%. Moreover, each prediction requires only 0.75 seconds of
computation time.
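The homomorphic evaluation described above hinges on the discrete dynamics of the Leaky Integrate-and-Fire neuron: each time step is a leaky accumulation followed by a threshold test, which is what bootstrapping evaluates on ciphertexts. A minimal plaintext sketch of a discrete LIF step follows; the parameter names (`beta`, `v_threshold`) and the hard-reset rule are generic illustrative choices, not the paper's exact formulation.

```python
# Plaintext sketch of a discrete Leaky Integrate-and-Fire (LIF) neuron step.
# In FHE-DiCSNN the threshold comparison and reset would be performed on
# ciphertexts via TFHE bootstrapping; here everything runs in the clear.

def lif_step(v, x, beta=0.9, v_threshold=1.0):
    """One LIF time step: leak, integrate the input, fire on threshold."""
    v = beta * v + x                     # leaky integration of input current
    spike = 1 if v >= v_threshold else 0
    if spike:
        v = 0.0                          # hard reset after firing
    return v, spike

# Drive the neuron with a constant input and collect the spike train.
v, spikes = 0.0, []
for _ in range(10):
    v, s = lif_step(v, 0.5)
    spikes.append(s)
print(spikes)
```

Because the state is discrete spikes rather than a continuous activation, no polynomial approximation of a non-linearity is needed, which is the alignment with FHE the abstract points to.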
Related papers
- Toward Practical Privacy-Preserving Convolutional Neural Networks Exploiting Fully Homomorphic Encryption [11.706881389387242]
Fully homomorphic encryption (FHE) is a viable approach for achieving private inference (PI).
FHE implementation of a CNN faces significant hurdles, primarily due to FHE's substantial computational and memory overhead.
We propose a set of optimizations, which includes GPU/ASIC acceleration, an efficient activation function, and an optimized packing scheme.
arXiv Detail & Related papers (2023-10-25T10:24:35Z) - Timing-Based Backpropagation in Spiking Neural Networks Without
Single-Spike Restrictions [2.8360662552057323]
We propose a novel backpropagation algorithm for training spiking neural networks (SNNs).
It encodes information in the relative multiple spike timing of individual neurons without single-spike restrictions.
arXiv Detail & Related papers (2022-11-29T11:38:33Z) - Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
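The RFAD summary rests on random feature approximation of a kernel. As a generic illustration of that idea (using classic random Fourier features for an RBF kernel rather than the paper's NNGP kernel, whose feature map is not reproduced here), an inner product of random features approximates a kernel evaluation:

```python
import numpy as np

# Random Fourier features: phi(x)^T phi(y) approximates the RBF kernel
# exp(-||x - y||^2 / 2). This is a generic sketch of random feature
# approximation, not the NNGP-kernel construction used by RFAD.

rng = np.random.default_rng(0)
d, D = 5, 20000                      # input dim, number of random features

W = rng.standard_normal((D, d))      # frequencies drawn from N(0, I)
b = rng.uniform(0, 2 * np.pi, D)     # random phases

def phi(x):
    """Random feature map for the unit-bandwidth RBF kernel."""
    return np.sqrt(2.0 / D) * np.cos(W @ x + b)

x, y = rng.standard_normal(d), rng.standard_normal(d)
exact = np.exp(-np.linalg.norm(x - y) ** 2 / 2)
approx = phi(x) @ phi(y)
print(abs(exact - approx))           # shrinks as D grows
```

Replacing exact kernel evaluations with such finite feature maps is what makes kernel-based distillation tractable on a single GPU.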
arXiv Detail & Related papers (2022-10-21T15:56:13Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
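Interval methods of the kind compared in that paper push elementwise bounds through each layer. A minimal sketch of interval bound propagation through one affine layer followed by a ReLU is below; it illustrates the generic mechanism, not the INN-specific reachability analysis.

```python
import numpy as np

# Interval bound propagation through y = W x + b, then ReLU.
# For an input box [l, u], the tightest affine image is computed from the
# box center and radius; ReLU is monotone, so it maps bounds to bounds.

def affine_bounds(l, u, W, b):
    """Propagate elementwise bounds [l, u] through y = W x + b."""
    center, radius = (l + u) / 2, (u - l) / 2
    c = W @ center + b
    r = np.abs(W) @ radius           # worst-case spread of the interval
    return c - r, c + r

W = np.array([[1.0, -2.0], [0.5, 1.0]])
b = np.array([0.0, -1.0])
l, u = np.array([-0.1, -0.1]), np.array([0.1, 0.1])

lo, hi = affine_bounds(l, u, W, b)
lo, hi = np.maximum(lo, 0), np.maximum(hi, 0)   # ReLU preserves ordering
print(lo, hi)
```

Any output guaranteed to stay inside `[lo, hi]` for all inputs in the box yields a certified robustness statement for that region.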
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - An Adaptive Device-Edge Co-Inference Framework Based on Soft
Actor-Critic [72.35307086274912]
High-dimensional parameter models and large-scale mathematical calculations restrict execution efficiency, especially for Internet of Things (IoT) devices.
We propose a new Deep Reinforcement Learning (DRL) method, Soft Actor-Critic for discrete settings (SAC-d), which generates the exit point and the compressing bits by soft policy iterations.
Based on the latency- and accuracy-aware reward design, such a computation can adapt well to complex environments like dynamic wireless channels and arbitrary processing, and is capable of supporting 5G URLLC.
arXiv Detail & Related papers (2022-01-09T09:31:50Z) - Spike time displacement based error backpropagation in convolutional
spiking neural networks [0.6193838300896449]
In this paper, we extend the STiDi-BP algorithm to employ it in deeper and convolutional architectures.
The evaluation results on the image classification task based on two popular benchmarks, MNIST and Fashion-MNIST, confirm that this algorithm has been applicable in deep SNNs.
We consider a convolutional SNN with two sets of weights: real-valued weights that are updated in the backward pass and their signs, binary weights, that are employed in the feedforward process.
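The two-weight-set scheme in that summary keeps real-valued weights for the update while the forward pass uses only their signs. A toy sketch follows; the gradient below is a stand-in placeholder, since STiDi-BP's actual temporal error rule is not reproduced here.

```python
import numpy as np

# Sketch of the dual weight sets: binary {-1, +1} sign weights in the
# forward pass, real-valued weights as the target of the update.

rng = np.random.default_rng(1)
W_real = rng.standard_normal((3, 4)) * 0.1   # real-valued master weights

def forward(x, W_real):
    W_bin = np.sign(W_real)          # binary weights used for inference
    return W_bin @ x

x = rng.standard_normal(4)
y = forward(x, W_real)

grad = rng.standard_normal(W_real.shape)     # placeholder for a true gradient
W_real -= 0.01 * grad                        # update hits real weights only
print(forward(x, W_real))                    # signs may flip after updates
```

Accumulating small updates in the real weights lets signs flip gradually, which is what makes training with binary forward weights stable.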
arXiv Detail & Related papers (2021-08-31T05:18:59Z) - Quantized Neural Networks via {-1, +1} Encoding Decomposition and
Acceleration [83.84684675841167]
We propose a novel encoding scheme using -1, +1 to decompose quantized neural networks (QNNs) into multi-branch binary networks.
We validate the effectiveness of our method on large-scale image classification, object detection, and semantic segmentation tasks.
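The decomposition idea can be sketched on a 2-bit example: every quantized weight in {0, 1, 2, 3} can be written as a fixed scale/offset combination of {-1, +1} digits, so a quantized matmul splits into binary-branch matmuls. The exact scale/offset form below is a generic reconstruction of the encoding idea, not necessarily the paper's formulation.

```python
import numpy as np

# Decompose 2-bit quantized weights W (values in {0,1,2,3}) into two
# {-1,+1} binary branches via w = (2*b1 + b0 + 3) / 2, b_i in {-1,+1}.

rng = np.random.default_rng(2)
W = rng.integers(0, 4, size=(3, 4))            # quantized weight matrix

bits = (W[..., None] >> np.arange(2)) & 1      # binary digits in {0, 1}
B = 2 * bits - 1                               # remap digits to {-1, +1}
B0, B1 = B[..., 0], B[..., 1]                  # LSB branch, MSB branch

x = rng.standard_normal(4)
# Each branch is a pure {-1,+1} matmul; recombine with fixed scales/offset.
y_branches = (2 * (B1 @ x) + (B0 @ x) + 3 * x.sum()) / 2
y_direct = W @ x
print(np.allclose(y_branches, y_direct))
```

Because each branch multiplies by a {-1, +1} matrix, it can be executed with additions and sign flips only, which is the source of the claimed acceleration.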
arXiv Detail & Related papers (2021-06-18T03:11:15Z) - Towards Scalable and Privacy-Preserving Deep Neural Network via
Algorithmic-Cryptographic Co-design [28.789702559193675]
We propose SPNN - a Scalable and Privacy-preserving deep Neural Network learning framework.
From cryptographic perspective, we propose using two types of cryptographic techniques, i.e., secret sharing and homomorphic encryption.
Experimental results conducted on real-world datasets demonstrate the superiority of SPNN.
arXiv Detail & Related papers (2020-12-17T02:26:16Z) - Self-Directed Online Machine Learning for Topology Optimization [58.920693413667216]
Self-directed Online Learning Optimization integrates Deep Neural Network (DNN) with Finite Element Method (FEM) calculations.
Our algorithm was tested by four types of problems including compliance minimization, fluid-structure optimization, heat transfer enhancement and truss optimization.
It reduced the computational time by 2 to 5 orders of magnitude compared with directly using heuristic methods, and outperformed all state-of-the-art algorithms tested in our experiments.
arXiv Detail & Related papers (2020-02-04T20:00:28Z) - CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided (including all of it) and is not responsible for any consequences.