Sphynx: ReLU-Efficient Network Design for Private Inference
- URL: http://arxiv.org/abs/2106.11755v1
- Date: Thu, 17 Jun 2021 18:11:10 GMT
- Title: Sphynx: ReLU-Efficient Network Design for Private Inference
- Authors: Minsu Cho, Zahra Ghodsi, Brandon Reagen, Siddharth Garg, Chinmay Hegde
- Abstract summary: We focus on private inference (PI), where the goal is to perform inference on a user's data sample using a service provider's model.
Existing PI methods for deep networks enable cryptographically secure inference with little drop in functionality.
This paper presents Sphynx, a ReLU-efficient network design method based on micro-search strategies for convolutional cell design.
- Score: 49.73927340643812
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: The emergence of deep learning has been accompanied by privacy concerns
surrounding users' data and service providers' models. We focus on private
inference (PI), where the goal is to perform inference on a user's data sample
using a service provider's model. Existing PI methods for deep networks enable
cryptographically secure inference with little drop in functionality; however,
they incur severe latency costs, primarily caused by non-linear network
operations (such as ReLUs). This paper presents Sphynx, a ReLU-efficient
network design method based on micro-search strategies for convolutional cell
design. Sphynx achieves Pareto dominance over all existing private inference
methods on CIFAR-100. We also design large-scale networks that support
cryptographically private inference on Tiny-ImageNet and ImageNet.
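The latency claim is worth making concrete: in cryptographic PI protocols, each ReLU evaluation requires an expensive sub-protocol (e.g., garbled circuits), so a network's total ReLU count is a direct proxy for its online latency. The sketch below (PyTorch; the toy network and input shape are illustrative assumptions, not the Sphynx architecture) counts per-sample ReLU evaluations with forward hooks:

```python
import torch
import torch.nn as nn

def count_relus(model: nn.Module, input_shape=(1, 3, 32, 32)) -> int:
    """Count ReLU activations evaluated in one forward pass.

    In private inference, every ReLU needs a cryptographic
    sub-protocol, so this count is a proxy for online latency.
    """
    total = 0

    def hook(module, inputs, output):
        nonlocal total
        total += output.numel()  # one ReLU evaluation per element

    handles = [m.register_forward_hook(hook)
               for m in model.modules() if isinstance(m, nn.ReLU)]
    with torch.no_grad():
        model(torch.zeros(input_shape))
    for h in handles:
        h.remove()
    return total

# Illustrative CNN (not the Sphynx cell; dimensions are assumptions).
net = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
)
print(count_relus(net))  # 32*32*32 + 64*32*32 = 98304 ReLUs
```

Reducing this count, rather than FLOPs alone, is the budget that a ReLU-efficient cell search optimizes.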
Related papers
- GCON: Differentially Private Graph Convolutional Network via Objective Perturbation [27.279817693305183]
Graph Convolutional Networks (GCNs) are a popular machine learning model with a wide range of applications in graph analytics.
When the underlying graph data contains sensitive information such as interpersonal relationships, a GCN trained without privacy-protection measures could be exploited to extract private data.
We propose GCON, a novel and effective solution for training GCNs with edge differential privacy.
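For context, objective perturbation makes the trained model private by adding a random linear term to the training objective once, before optimization, rather than noising gradients at every step. A minimal sketch of the idea, not GCON's calibrated mechanism (the noise scale below is a placeholder; real calibration depends on the privacy budget and the objective's sensitivity):

```python
import numpy as np

def perturbed_objective(loss_fn, theta, b):
    """Objective perturbation: minimize loss(theta) + <b, theta>.

    The random linear term b, drawn once with scale tied to the
    objective's sensitivity, makes the minimizer differentially
    private without per-step gradient noise."""
    return loss_fn(theta) + b @ theta

rng = np.random.default_rng(0)
d = 10
b = rng.normal(scale=1.0, size=d)  # placeholder noise scale
ridge = lambda t: 0.5 * t @ t      # stand-in convex objective
print(perturbed_objective(ridge, np.ones(d), b))
```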
arXiv Detail & Related papers (2024-07-06T09:59:56Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- DeepReShape: Redesigning Neural Networks for Efficient Private Inference [3.7802450241986945]
Recent work has shown that FLOPs for PI can no longer be ignored and incur high latency penalties.
We develop DeepReShape, a technique that optimizes neural network architectures under PI's constraints.
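As a rough illustration of why FLOPs matter alongside ReLUs (the formula is the standard conv-layer estimate; the layer dimensions are made up):

```python
def conv2d_flops(c_in: int, c_out: int, k: int, h_out: int, w_out: int) -> int:
    """Standard FLOP estimate for a conv layer: 2 FLOPs per
    multiply-accumulate, one MAC per (input channel x kernel
    element) per output position."""
    return 2 * c_in * c_out * k * k * h_out * w_out

# A single 3x3 conv (32 -> 64 channels) on a 32x32 feature map
# already costs ~38M FLOPs while contributing 64*32*32 = 65536 ReLUs:
print(conv2d_flops(32, 64, 3, 32, 32))  # 37748736
```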
arXiv Detail & Related papers (2023-04-20T18:27:02Z)
- Iterative Soft Shrinkage Learning for Efficient Image Super-Resolution [91.3781512926942]
Image super-resolution (SR) has witnessed extensive neural network designs from CNN to transformer architectures.
This work investigates the potential of network pruning for super-resolution to take advantage of off-the-shelf network designs and reduce the underlying computational overhead.
We propose a novel Iterative Soft Shrinkage-Percentage (ISS-P) method that optimizes the sparse structure of a randomly initialized network at each iteration and tweaks unimportant weights on-the-fly by a small amount proportional to their magnitude.
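A minimal sketch of that soft-shrinkage step (NumPy; the sparsity target and shrink rate are assumed hyperparameters, not the paper's schedule):

```python
import numpy as np

def soft_shrink(weights: np.ndarray, sparsity: float, rate: float) -> np.ndarray:
    """Shrink the smallest-magnitude weights by a fraction of their
    own magnitude instead of hard-pruning them to zero.
    `sparsity` and `rate` are illustrative hyperparameters."""
    flat = np.abs(weights).ravel()
    k = int(sparsity * flat.size)
    if k == 0:
        return weights
    threshold = np.partition(flat, k - 1)[k - 1]
    mask = np.abs(weights) <= threshold  # the "unimportant" weights
    out = weights.copy()
    out[mask] *= (1.0 - rate)            # shrink proportionally to magnitude
    return out

w = np.random.default_rng(0).normal(size=(4, 4))
w = soft_shrink(w, sparsity=0.5, rate=0.1)
```

Because shrunken weights stay nonzero, they can regrow in later iterations, which is the optimization dynamic that distinguishes soft shrinkage from hard pruning.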
arXiv Detail & Related papers (2023-03-16T21:06:13Z)
- Bayesian Interpolation with Deep Linear Networks [92.1721532941863]
Characterizing how neural network depth, width, and dataset size jointly impact model quality is a central problem in deep learning theory.
We show that linear networks make provably optimal predictions at infinite depth.
We also show that with data-agnostic priors, Bayesian model evidence in wide linear networks is maximized at infinite depth.
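For readers unfamiliar with the setting: a deep linear network composes weight matrices with no nonlinearities, so it computes a single linear map regardless of depth, and depth matters only through the induced prior over that product matrix. A quick check (NumPy; the depth and width are assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
depth, width = 5, 16
layers = [rng.normal(size=(width, width)) / np.sqrt(width) for _ in range(depth)]

x = rng.normal(size=width)
# Apply the layers sequentially: h = W_L ... W_1 x.
h = x
for W in layers:
    h = W @ h
# The same function is one matrix product, so the function class
# is just linear maps; the Bayesian analysis studies how depth
# shapes the prior/posterior over this product.
prod = np.linalg.multi_dot(layers[::-1])
assert np.allclose(h, prod @ x)
```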
arXiv Detail & Related papers (2022-12-29T20:57:46Z)
- Large-Scale Privacy-Preserving Network Embedding against Private Link Inference Attacks [12.434976161956401]
We address a novel problem of privacy-preserving network embedding against private link inference attacks.
We propose to perturb the original network by adding or removing links, so that embeddings generated on the perturbed network leak little information about private links while retaining high utility for various downstream tasks.
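A toy version of the perturbation step (NumPy; uniform random flipping is only an illustration, whereas the paper optimizes which links to add or remove for the privacy-utility trade-off):

```python
import numpy as np

def perturb_links(adj: np.ndarray, flip_prob: float, rng) -> np.ndarray:
    """Randomly add/remove edges before computing embeddings, so the
    released embedding reveals less about private links."""
    n = adj.shape[0]
    flips = rng.random((n, n)) < flip_prob
    flips = np.triu(flips, k=1)   # flip each undirected pair at most once
    flips = flips | flips.T
    out = adj.copy()
    out[flips] = 1 - out[flips]
    np.fill_diagonal(out, 0)
    return out

rng = np.random.default_rng(0)
adj = (rng.random((6, 6)) < 0.3).astype(int)
adj = np.triu(adj, k=1); adj = adj | adj.T   # toy undirected graph
print(perturb_links(adj, flip_prob=0.1, rng=rng))
```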
arXiv Detail & Related papers (2022-05-28T13:59:39Z)
- NeuralDP Differentially private neural networks by design [61.675604648670095]
We propose NeuralDP, a technique for privatising activations of some layer within a neural network.
We experimentally demonstrate on two datasets that our method offers substantially improved privacy-utility trade-offs compared to DP-SGD.
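The core idea, privatising a layer's activations rather than the gradients, can be sketched as a wrapper module (PyTorch; the clipping bound and noise scale below are placeholders, not the paper's calibrated mechanism):

```python
import torch
import torch.nn as nn

class NoisyLayer(nn.Module):
    """Clips and noises a layer's activations. Illustrative only:
    real DP guarantees require calibrating sigma to the clipping
    bound and the target (epsilon, delta)."""
    def __init__(self, layer: nn.Module, clip: float = 1.0, sigma: float = 0.5):
        super().__init__()
        self.layer, self.clip, self.sigma = layer, clip, sigma

    def forward(self, x):
        h = self.layer(x)
        # Bound sensitivity by clipping each sample's activation norm...
        norms = h.flatten(1).norm(dim=1, keepdim=True).clamp(min=1e-12)
        scale = (self.clip / norms).clamp(max=1.0)
        h = h * scale.view(-1, *([1] * (h.dim() - 1)))
        # ...then add Gaussian noise scaled to that bound.
        return h + self.sigma * self.clip * torch.randn_like(h)

layer = NoisyLayer(nn.Linear(8, 4))
print(layer(torch.randn(2, 8)).shape)  # torch.Size([2, 4])
```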
arXiv Detail & Related papers (2021-07-30T12:40:19Z)
- Privacy Amplification by Decentralization [0.0]
We introduce a novel relaxation of local differential privacy (LDP) that naturally arises in fully decentralized protocols.
We study a decentralized model of computation where a token performs a walk on the network graph and is updated sequentially by the party who receives it.
We prove that the privacy-utility trade-offs of our algorithms significantly improve upon LDP, and in some cases even match what can be achieved with methods based on trusted/secure aggregation and shuffling.
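A minimal sketch of the walk-based computation the paper analyzes (NumPy; the aggregation task, noise scale, and step count are illustrative assumptions):

```python
import numpy as np

def token_walk(neighbors, values, steps, sigma, rng):
    """A token carrying a running sum walks the graph; each party
    adds its locally noised value when it holds the token. Parties
    never observe others' intermediate contributions, which is the
    source of the privacy amplification the paper quantifies."""
    node = rng.integers(len(values))
    token = 0.0
    for _ in range(steps):
        token += values[node] + rng.normal(scale=sigma)
        node = rng.choice(neighbors[node])  # hand the token to a neighbor
    return token

rng = np.random.default_rng(0)
neighbors = {0: [1, 2], 1: [0, 2], 2: [0, 1]}  # toy triangle graph
print(token_walk(neighbors, [1.0, 2.0, 3.0], steps=10, sigma=0.1, rng=rng))
```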
arXiv Detail & Related papers (2020-12-09T21:33:33Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
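Why SPNs suit secure computation: inference is a single bottom-up pass of weighted sums and products, with no nonlinearities beyond what the leaves encode. A toy evaluation (structure and parameters invented for illustration):

```python
# Toy sum-product network over two binary variables. Inference is
# one pass of weighted sums and products, the structure CryptoSPN
# evaluates under secure computation.
def leaf(var, p):
    return lambda x: p if x[var] else 1.0 - p

def product(children):
    return lambda x: children[0](x) * children[1](x)

def mixture(w, a, b):
    return lambda x: w * a(x) + (1.0 - w) * b(x)

# P(x0, x1) as a mixture of two fully factorized components.
spn = mixture(0.3,
              product([leaf(0, 0.9), leaf(1, 0.2)]),
              product([leaf(0, 0.1), leaf(1, 0.7)]))
print(spn({0: True, 1: False}))  # 0.3*0.9*0.8 + 0.7*0.1*0.3 = 0.237
```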
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality or accuracy of the information presented and is not responsible for any consequences of its use.