S++: A Fast and Deployable Secure-Computation Framework for
Privacy-Preserving Neural Network Training
- URL: http://arxiv.org/abs/2101.12078v1
- Date: Thu, 28 Jan 2021 15:48:54 GMT
- Title: S++: A Fast and Deployable Secure-Computation Framework for
Privacy-Preserving Neural Network Training
- Authors: Prashanthi Ramachandran, Shivam Agarwal, Arup Mondal, Aastha Shah,
Debayan Gupta
- Abstract summary: We introduce S++, a simple, robust, and deployable framework for training a neural network (NN) using private data from multiple sources.
For the first time, we provide fast and verifiable protocols for all common activation functions and optimize them for running in a secret-shared manner.
- Score: 0.4893345190925178
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We introduce S++, a simple, robust, and deployable framework for training a
neural network (NN) using private data from multiple sources, using
secret-shared secure function evaluation. In short, consider a virtual third
party to whom every data-holder sends their inputs, and which computes the
neural network: in our case, this virtual third party is actually a set of
servers which individually learn nothing, even with a malicious (but
non-colluding) adversary.
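To make the virtual-third-party abstraction concrete, the sketch below shows plain additive secret sharing over a prime field: each data-holder splits its input into random shares, one per server, so that no single server learns anything, yet linear operations can be carried out share-wise. This is a minimal, generic illustration and not the S++ protocol itself (which additionally provides verifiability and security against a malicious but non-colluding adversary); the modulus, number of servers, and function names are assumptions made for the example.

```python
# Minimal sketch of additive secret sharing over a prime field.
# Illustrative only; NOT the actual S++ protocol.
import random

P = 2**61 - 1          # assumed prime modulus (illustrative choice)
N_SERVERS = 3          # assumed number of non-colluding servers

def share(secret: int, n: int = N_SERVERS) -> list[int]:
    """Split a secret into n additive shares that sum to it mod P."""
    shares = [random.randrange(P) for _ in range(n - 1)]
    shares.append((secret - sum(shares)) % P)
    return shares

def reconstruct(shares: list[int]) -> int:
    """Recombine shares; all of them are needed, so no single server learns the secret."""
    return sum(shares) % P

# Two data-holders send shares of their private inputs to the servers.
x_shares = share(42)
y_shares = share(100)

# Each server adds its own shares locally -- a linear layer works the same way.
z_shares = [(xs + ys) % P for xs, ys in zip(x_shares, y_shares)]

assert reconstruct(z_shares) == 142
```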
Previous work in this area has been limited to just one specific activation
function: ReLU, rendering the approach impractical for many use-cases. For the
first time, we provide fast and verifiable protocols for all common activation
functions and optimize them for running in a secret-shared manner. The ability
to quickly, verifiably, and robustly compute exponentiation, softmax, sigmoid,
etc., allows us to use previously written NNs without modification, vastly
reducing developer effort and complexity of code. In recent times, ReLU has
been found to converge much faster and be more computationally efficient than
saturating non-linearities such as sigmoid or tanh. However, we argue that it
would be remiss not to extend the mechanism to non-linear functions such as
the logistic sigmoid, tanh, and softmax, which are fundamental due to their
ability to express outputs as probabilities and their universal approximation
property. Their role in RNNs and several recent advances also make them more
relevant.
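The emphasis on exponentiation is what lets the remaining activations follow almost for free: sigmoid, softmax, and tanh can all be written in terms of exp, as the short plaintext check below illustrates. The secret-shared versions in S++ require dedicated protocols; this sketch only verifies the underlying algebra and makes no claim about how the paper evaluates it on shares.

```python
# Plaintext check that the common smooth activations reduce to exponentiation.
# S++'s contribution is evaluating such identities on secret shares, which this
# sketch does not attempt.
import math

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs: list[float]) -> list[float]:
    m = max(xs)                      # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

x = 0.7
# tanh is a rescaled sigmoid, and sigmoid is a two-class softmax.
assert abs(math.tanh(x) - (2.0 * sigmoid(2.0 * x) - 1.0)) < 1e-12
assert abs(sigmoid(x) - softmax([x, 0.0])[0]) < 1e-12
```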
Related papers
- Enhancing MOTION2NX for Efficient, Scalable and Secure Image Inference using Convolutional Neural Networks [4.407841002228536]
We use the ABY2.0 SMPC protocol, implemented on the C++-based MOTION2NX framework, for a secure convolutional neural network (CNN) inference application with semi-honest security.
We also present a novel splitting algorithm that divides the computations at each CNN layer into multiple chunks.
arXiv Detail & Related papers (2024-08-29T09:50:21Z) - LinSATNet: The Positive Linear Satisfiability Neural Networks [116.65291739666303]
This paper studies how to introduce the popular positive linear satisfiability to neural networks.
We propose the first differentiable satisfiability layer based on an extension of the classic Sinkhorn algorithm for jointly encoding multiple sets of marginal distributions.
arXiv Detail & Related papers (2024-07-18T22:05:21Z) - OPAF: Optimized Secure Two-Party Computation Protocols for Nonlinear Activation Functions in Recurrent Neural Network [8.825150825838769]
This paper pays special attention to the implementation of non-linear functions in the semi-honest model with a two-party setting.
We propose a novel and efficient protocol for exponential function by using a divide-and-conquer strategy.
Next, we take advantage of the symmetry of sigmoid and Tanh, and fine-tune the inputs to reduce the 2PC building blocks.
arXiv Detail & Related papers (2024-03-01T02:49:40Z) - Efficient Privacy-Preserving Convolutional Spiking Neural Networks with
FHE [1.437446768735628]
Fully Homomorphic Encryption (FHE) is a key technology for privacy-preserving computation.
FHE has limitations in processing continuous non-polynomial functions.
We present a framework called FHE-DiCSNN for homomorphic SNNs.
FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%.
arXiv Detail & Related papers (2023-09-16T15:37:18Z) - Versatile Neural Processes for Learning Implicit Neural Representations [57.090658265140384]
We propose Versatile Neural Processes (VNP), which largely increases the capability of approximating functions.
Specifically, we introduce a bottleneck encoder that produces fewer but more informative context tokens, relieving the high computational cost.
We demonstrate the effectiveness of the proposed VNP on a variety of tasks involving 1D, 2D and 3D signals.
arXiv Detail & Related papers (2023-01-21T04:08:46Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Comparative Analysis of Interval Reachability for Robust Implicit and
Feedforward Neural Networks [64.23331120621118]
We use interval reachability analysis to obtain robustness guarantees for implicit neural networks (INNs).
INNs are a class of implicit learning models that use implicit equations as layers.
We show that our approach performs at least as well as, and generally better than, applying state-of-the-art interval bound propagation methods to INNs.
arXiv Detail & Related papers (2022-04-01T03:31:27Z) - Training Feedback Spiking Neural Networks by Implicit Differentiation on
the Equilibrium State [66.2457134675891]
Spiking neural networks (SNNs) are brain-inspired models that enable energy-efficient implementation on neuromorphic hardware.
Most existing methods imitate the backpropagation framework and feedforward architectures for artificial neural networks.
We propose a novel training method that does not rely on the exact reverse of the forward computation.
arXiv Detail & Related papers (2021-09-29T07:46:54Z) - Convolutional Sparse Coding Fast Approximation with Application to
Seismic Reflectivity Estimation [9.005280130480308]
We propose a sped-up version of the classic iterative thresholding algorithm that produces a good approximation of the convolutional sparse code within 2-5 iterations.
The performance of the proposed solution is demonstrated via the seismic inversion problem in both synthetic and real data scenarios.
arXiv Detail & Related papers (2021-06-29T12:19:07Z) - Efficient Integer-Arithmetic-Only Convolutional Neural Networks [87.01739569518513]
We replace the conventional ReLU with a bounded ReLU and find that the performance decline is due to activation quantization.
Our integer networks achieve performance equivalent to the corresponding FPN networks, but have only 1/4 the memory cost and run 2x faster on modern GPUs.
arXiv Detail & Related papers (2020-06-21T08:23:03Z) - ARIANN: Low-Interaction Privacy-Preserving Deep Learning via Function
Secret Sharing [2.6228228854413356]
AriaNN is a low-interaction privacy-preserving framework for private neural network training and inference on sensitive data.
We design primitives for the building blocks of neural networks such as ReLU, MaxPool and BatchNorm.
We implement our framework as an extension to support n-party private federated learning.
arXiv Detail & Related papers (2020-06-08T13:40:27Z)