PolyDNN: Polynomial Representation of NN for Communication-less SMPC Inference
- URL: http://arxiv.org/abs/2104.00863v1
- Date: Fri, 2 Apr 2021 02:59:37 GMT
- Title: PolyDNN: Polynomial Representation of NN for Communication-less SMPC Inference
- Authors: Philip Derbeko and Shlomi Dolev
- Abstract summary: We show a way to translate complete networks into a single polynomial and how to calculate the polynomial with an efficient and information-secure MPC algorithm.
The calculation is done without intermediate communication between the participating parties, which is beneficial in several cases.
- Score: 5.1779474453796865
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The structure and weights of Deep Neural Networks (DNN) typically encode and
contain very valuable information about the dataset that was used to train the
network.
One way to protect this information when the DNN is published is to perform
inference of the network using secure multi-party computations (MPC).
In this paper, we suggest a translation of deep neural networks to
polynomials, which are easier to calculate efficiently with MPC techniques.
We show a way to translate complete networks into a single polynomial and how
to calculate the polynomial with an efficient and information-secure MPC
algorithm.
The calculation is done without intermediate communication between the
participating parties, which is beneficial in several cases, as explained in
the paper.
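As a concrete illustration of the translation, here is a minimal sketch (not the paper's construction; the 2-2-1 architecture, square activation, and all weights are assumptions made for illustration) that collapses a tiny network into a single polynomial in its inputs:
```python
# A minimal sketch of the translation idea, not the paper's construction:
# collapse a tiny 2-2-1 network with a square activation (a common
# MPC-friendly stand-in) into one explicit polynomial in the inputs.
# All weight values below are made up for illustration.
import sympy as sp

x1, x2 = sp.symbols("x1 x2")

W1 = [[0.5, -1.0], [2.0, 0.25]]   # hypothetical hidden-layer weights
b1 = [0.1, -0.2]
W2 = [1.5, -0.5]                  # hypothetical output-layer weights
b2 = 0.3

# Hidden pre-activations followed by the square activation.
h = [sp.expand((W1[i][0] * x1 + W1[i][1] * x2 + b1[i]) ** 2) for i in range(2)]

# The whole network as a single polynomial in (x1, x2).
network_poly = sp.expand(W2[0] * h[0] + W2[1] * h[1] + b2)
print(network_poly)
```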
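The communication-less property can be pictured with plain Shamir secret sharing: each party can evaluate a public polynomial on its own share with no interaction, at the price of raising the degree of the sharing, so reconstruction needs enough parties. The following toy sketch uses standard Shamir arithmetic, not the paper's protocol; the field modulus and parameters are illustrative:
```python
# Toy communication-less evaluation: evaluating a public degree-d polynomial
# on degree-t Shamir shares yields a degree d*t sharing of p(secret), which
# reconstructs as long as n > d*t. Standard Shamir arithmetic, not the
# paper's protocol; modulus and parameters are illustrative.
import random

PRIME = 2**61 - 1  # Mersenne prime used as the field modulus

def share(secret, t, n):
    """Shamir-share `secret` with threshold t among n parties."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(t)]
    return [(i, sum(c * pow(i, k, PRIME) for k, c in enumerate(coeffs)) % PRIME)
            for i in range(1, n + 1)]

def reconstruct(points):
    """Lagrange interpolation at x = 0 over the field."""
    secret = 0
    for i, (xi, yi) in enumerate(points):
        num, den = 1, 1
        for j, (xj, _) in enumerate(points):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

# Public polynomial p(x) = 3x^2 + 2x + 5 evaluated on a secret input x = 7.
p = lambda x: (3 * x * x + 2 * x + 5) % PRIME
t, n = 1, 5
shares = share(7, t, n)
local = [(i, p(s)) for i, s in shares]   # each party computes alone
assert reconstruct(local[: 2 * t + 1]) == p(7)
print("reconstructed:", reconstruct(local[: 2 * t + 1]))
```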
Related papers
- Learning the Optimal Path and DNN Partition for Collaborative Edge Inference [4.368333109035076]
Deep Neural Networks (DNNs) have catalyzed the development of numerous intelligent mobile applications and services.
To address the resulting computation burden on resource-limited mobile devices, collaborative edge inference has been proposed.
This method involves partitioning a DNN inference task into several subtasks and distributing these across multiple network nodes.
We introduce a new bandit algorithm, B-EXPUCB, which combines elements of the classical blocked EXP3 and LinUCB algorithms, and demonstrate its sublinear regret.
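For background, here is a minimal sketch of classical EXP3, one of the two ingredients the summary says B-EXPUCB combines; it is not B-EXPUCB itself, and the reward model (noisy quality of K candidate partition points) is made up for illustration:
```python
# Minimal classical EXP3, one ingredient of B-EXPUCB per the summary above;
# NOT the paper's B-EXPUCB. Reward model is made up for illustration.
import math
import random

def exp3(K, T, reward, gamma=0.1):
    """Run EXP3 for T rounds over K arms; reward(t, arm) must lie in [0, 1]."""
    w = [1.0] * K
    for t in range(T):
        total = sum(w)
        probs = [(1 - gamma) * wi / total + gamma / K for wi in w]
        arm = random.choices(range(K), weights=probs)[0]
        r = reward(t, arm)
        # Importance-weighted update for the arm that was actually played.
        w[arm] *= math.exp(gamma * (r / probs[arm]) / K)
    return max(range(K), key=lambda i: w[i])

# Toy usage: partition point 2 of 4 yields the best (noisy) reward,
# e.g. one minus a normalized end-to-end inference latency.
best = exp3(K=4, T=2000,
            reward=lambda t, a: max(0.0, min(1.0, 0.8 - 0.15 * abs(a - 2)
                                             + random.gauss(0, 0.05))))
print("best partition point:", best)
```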
arXiv Detail & Related papers (2024-10-02T01:12:16Z)
- nn2poly: An R Package for Converting Neural Networks into Interpretable Polynomials [1.86413150130483]
The nn2poly package provides an R implementation of the NN2Poly method to explain and interpret neural networks.
The package provides integration with the main deep learning framework packages in R.
Other neural network packages can also be used by providing their weights in list format.
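A step that both PolyDNN and nn2poly-style conversions rely on is replacing a non-polynomial activation with a polynomial approximation on a bounded interval; a minimal numpy sketch, where the interval, degree, and tanh target are illustrative assumptions:
```python
# Fit a Chebyshev polynomial to tanh on a bounded interval and check the
# approximation error. Interval, degree, and target are illustrative.
import numpy as np

xs = np.linspace(-3, 3, 1000)
cheb = np.polynomial.Chebyshev.fit(xs, np.tanh(xs), deg=7)

print("max |tanh - poly| on [-3, 3]:", np.max(np.abs(np.tanh(xs) - cheb(xs))))
```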
arXiv Detail & Related papers (2024-06-03T17:59:30Z)
- Defining Neural Network Architecture through Polytope Structures of Dataset [53.512432492636236]
This paper defines upper and lower bounds for neural network widths, which are informed by the polytope structure of the dataset in question.
We develop an algorithm to investigate a converse situation where the polytope structure of a dataset can be inferred from its corresponding trained neural networks.
It is established that popular datasets such as MNIST, Fashion-MNIST, and CIFAR10 can be efficiently encapsulated using no more than two polytopes with a small number of faces.
arXiv Detail & Related papers (2024-02-04T08:57:42Z)
- Dynamic Semantic Compression for CNN Inference in Multi-access Edge Computing: A Graph Reinforcement Learning-based Autoencoder [82.8833476520429]
We propose a novel semantic compression method, autoencoder-based CNN architecture (AECNN) for effective semantic extraction and compression in partial offloading.
In the semantic encoder, we introduce a feature compression module based on the channel attention mechanism in CNNs, to compress intermediate data by selecting the most informative features.
In the semantic decoder, we design a lightweight decoder that learns to reconstruct the intermediate data from the received compressed data, which improves accuracy.
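A hedged sketch of this kind of channel-attention compression (a generic squeeze-and-excitation gate with top-k channel selection in PyTorch, not the paper's exact AECNN module; all sizes are made up):
```python
# Generic channel-attention feature compression: score channels with a
# squeeze-and-excitation-style gate, then keep the k most informative ones
# before offloading. NOT the paper's exact AECNN module.
import torch
import torch.nn as nn

class ChannelSelect(nn.Module):
    def __init__(self, channels, k):
        super().__init__()
        self.k = k
        self.score = nn.Sequential(          # squeeze-and-excitation-style gate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(channels, channels // 4), nn.ReLU(),
            nn.Linear(channels // 4, channels), nn.Sigmoid())

    def forward(self, x):                    # x: (B, C, H, W)
        s = self.score(x)                    # (B, C) per-channel importance
        idx = s.topk(self.k, dim=1).indices  # keep the k most informative
        gather_idx = idx[:, :, None, None].expand(-1, -1, *x.shape[2:])
        return torch.gather(x, 1, gather_idx), idx

feat = torch.randn(2, 64, 8, 8)              # intermediate CNN feature map
compressed, kept = ChannelSelect(64, k=16)(feat)
print(compressed.shape)                      # torch.Size([2, 16, 8, 8])
```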
arXiv Detail & Related papers (2024-01-19T15:19:47Z)
- Learning with Multigraph Convolutional Filters [153.20329791008095]
We introduce multigraph convolutional neural networks (MGNNs) as stacked and layered structures where information is processed according to a multigraph signal processing (MSP) model.
We also develop a procedure for tractable computation of filter coefficients in the MGNNs and a low cost method to reduce the dimensionality of the information transferred between layers.
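The MSP model can be pictured as a filter that is polynomial in several graph shift operators at once. A toy numpy sketch under that assumed form, not the paper's MGNN implementation:
```python
# Toy multigraph filter of the assumed form
#     y = sum_g sum_k h[g, k] * S_g^k x,
# i.e. polynomial in several graph shift operators at once. Random graphs
# and taps for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n = 6
shifts = [(rng.random((n, n)) < 0.4).astype(float) for _ in range(2)]
x = rng.standard_normal(n)        # graph signal
h = rng.standard_normal((2, 3))   # filter taps: 2 graphs, 3 taps each

y = sum(h[g, k] * np.linalg.matrix_power(shifts[g], k) @ x
        for g in range(2) for k in range(3))
print(y)
```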
arXiv Detail & Related papers (2022-10-28T17:00:50Z)
- Over-the-Air Split Machine Learning in Wireless MIMO Networks [56.27831295707334]
In split machine learning (ML), different partitions of a neural network (NN) are executed by different computing nodes.
To ease the communication burden, over-the-air computation (OAC) can efficiently implement all or part of the computation at the same time as the communication.
arXiv Detail & Related papers (2022-10-07T15:39:11Z)
- Receptive Field-based Segmentation for Distributed CNN Inference Acceleration in Collaborative Edge Computing [93.67044879636093]
We study inference acceleration using distributed convolutional neural networks (CNNs) in a collaborative edge computing network.
We propose a novel collaborative edge computing scheme that uses fused-layer parallelization to partition a CNN model into multiple blocks of convolutional layers.
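Such fused-layer partitioning hinges on receptive-field arithmetic: it determines which slice of the input each node's block of layers needs, and hence the overlap neighbouring partitions must exchange up front. A minimal sketch of the standard recursion (not the paper's segmentation algorithm):
```python
# Standard receptive-field recursion for a stack of conv layers; tells a node
# which input region its block needs. Not the paper's segmentation algorithm.
def receptive_field(layers):
    """layers: list of (kernel_size, stride). Returns (rf, jump)."""
    rf, jump = 1, 1
    for k, s in layers:
        rf += (k - 1) * jump
        jump *= s
    return rf, jump

# Two fused 3x3 stride-1 conv layers: each output pixel depends on a 5x5
# input patch, so adjacent partitions must overlap by rf - 1 = 4 rows.
print(receptive_field([(3, 1), (3, 1)]))   # -> (5, 1)
```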
arXiv Detail & Related papers (2022-07-22T18:38:11Z)
- HD-cos Networks: Efficient Neural Architectures for Secure Multi-Party Computation [26.67099154998755]
Multi-party computation (MPC) is a branch of cryptography where multiple non-colluding parties execute a protocol to securely compute a function.
We study training and inference of neural networks under the MPC setup.
We show that both ingredients (a Hadamard-Diagonal linear transformation and a cosine activation function) enjoy strong theoretical motivation and efficient computation under the MPC setup.
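Reading the title literally, a cleartext toy of such a layer (this is our reading of the construction, with made-up weights and no MPC protocol):
```python
# Toy HD-cos-style layer as we read the title: a fixed Hadamard matrix times
# a learnable diagonal replaces a dense layer, and cos(.) is the activation.
# Cleartext only; weights are made up and there is no MPC protocol here.
import numpy as np
from scipy.linalg import hadamard

d = 8
rng = np.random.default_rng(2)
D = np.diag(rng.standard_normal(d))   # learnable diagonal weights
H = hadamard(d) / np.sqrt(d)          # fixed, structured linear transform

hd_cos_layer = lambda x: np.cos(H @ (D @ x))
print(hd_cos_layer(np.ones(d)))
```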
arXiv Detail & Related papers (2021-10-28T21:15:11Z)
- Efficient Representations for Privacy-Preserving Inference [3.330229314824913]
We construct and evaluate private CNNs on the MNIST and CIFAR-10 datasets.
We achieve over a two-fold reduction in the number of operations used for inference with the CryptoNets architecture.
arXiv Detail & Related papers (2021-10-15T19:03:35Z)
- Learning Autonomy in Management of Wireless Random Networks [102.02142856863563]
This paper presents a machine learning strategy that tackles a distributed optimization task in a wireless network with an arbitrary number of randomly interconnected nodes.
We develop a flexible deep neural network formalism termed distributed message-passing neural network (DMPNN) with forward and backward computations independent of the network topology.
arXiv Detail & Related papers (2021-06-15T09:03:28Z)
- Computational Separation Between Convolutional and Fully-Connected Networks [35.39956227364153]
We show how convolutional networks can leverage locality in the data, and thus achieve a computational advantage over fully-connected networks.
Specifically, we show a class of problems that can be efficiently solved using convolutional networks trained with gradient-descent.
arXiv Detail & Related papers (2020-10-03T14:24:59Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.