LinGCN: Structural Linearized Graph Convolutional Network for
Homomorphically Encrypted Inference
- URL: http://arxiv.org/abs/2309.14331v3
- Date: Wed, 4 Oct 2023 22:58:55 GMT
- Title: LinGCN: Structural Linearized Graph Convolutional Network for
Homomorphically Encrypted Inference
- Authors: Hongwu Peng and Ran Ran and Yukui Luo and Jiahui Zhao and Shaoyi Huang
and Kiran Thorat and Tong Geng and Chenghong Wang and Xiaolin Xu and Wujie
Wen and Caiwen Ding
- Abstract summary: We present LinGCN, a framework designed to reduce multiplication depth and optimize the performance of HE-based GCN inference.
Remarkably, LinGCN achieves a 14.2x latency speedup relative to CryptoGCN, while preserving an inference accuracy of 75% and notably reducing multiplication depth.
- Score: 19.5669231249754
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The growth of Graph Convolution Network (GCN) model sizes has revolutionized
numerous applications, surpassing human performance in areas such as personal
healthcare and financial systems. The deployment of GCNs in the cloud raises
privacy concerns due to potential adversarial attacks on client data. To
address these concerns, Privacy-Preserving Machine Learning (PPML) using
Homomorphic Encryption (HE) secures sensitive client data, but it introduces
substantial computational overhead in practical applications. To tackle these
challenges, we present LinGCN, a framework designed to reduce multiplication
depth and optimize the performance of HE-based GCN inference.
LinGCN is structured around three key elements: (1) A differentiable structural
linearization algorithm, complemented by a parameterized discrete indicator
function that is co-trained with the model weights to meet the optimization
goal; this strategy enables fine-grained, node-level selection of non-linear
activation locations, yielding a model with minimized multiplication depth.
(2) A compact node-wise polynomial replacement policy with a second-order
trainable activation function, steered toward superior convergence by a
two-level distillation approach from an all-ReLU teacher model. (3) An enhanced
HE solution that enables finer-grained operator fusion for node-wise activation
functions, further reducing multiplication-level consumption in HE-based
inference. Our experiments on the NTU-XVIEW skeleton joint dataset show that
LinGCN excels in latency, accuracy, and scalability for homomorphically
encrypted inference, outperforming solutions such as CryptoGCN. Remarkably,
LinGCN achieves a 14.2x latency speedup relative to CryptoGCN while preserving
an inference accuracy of 75% and notably reducing multiplication depth.
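
To make elements (1) and (2) concrete, here is a minimal PyTorch-style sketch of
the core idea: a per-node discrete indicator, trained with a straight-through
relaxation, that decides whether a node keeps a trainable second-order
(HE-friendly) activation or is linearized to the identity. The class name,
coefficient initialization, and depth penalty below are illustrative
assumptions, not the authors' implementation.

import torch
import torch.nn as nn

class NodewiseActivation(nn.Module):
    # Hypothetical sketch: each of num_nodes nodes either applies a trainable
    # quadratic a*x^2 + b*x + c (one multiplicative level under HE) or passes
    # features through unchanged (zero levels), selected by a learned indicator.
    def __init__(self, num_nodes: int):
        super().__init__()
        self.logits = nn.Parameter(torch.zeros(num_nodes))      # indicator params
        self.a = nn.Parameter(torch.full((num_nodes,), 1e-2))   # quadratic coeff
        self.b = nn.Parameter(torch.ones(num_nodes))            # linear coeff
        self.c = nn.Parameter(torch.zeros(num_nodes))           # constant term

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (num_nodes, feature_dim); per-node parameters broadcast over features
        hard = (self.logits > 0).float()         # discrete indicator at inference
        soft = torch.sigmoid(self.logits)        # differentiable surrogate
        gate = ((hard - soft).detach() + soft).unsqueeze(-1)  # straight-through
        a, b, c = (p.unsqueeze(-1) for p in (self.a, self.b, self.c))
        poly = a * x * x + b * x + c             # second-order trainable activation
        return gate * poly + (1.0 - gate) * x    # keep non-linearity or linearize

    def depth_penalty(self) -> torch.Tensor:
        # Expected number of active non-linear nodes; each costs one HE level,
        # so adding lambda * depth_penalty() to the task loss steers training
        # toward a model with minimized multiplication depth.
        return torch.sigmoid(self.logits).sum()

In a full pipeline one would co-train these indicator parameters with the GCN
weights, add the two-level distillation losses from an all-ReLU teacher as in
element (2), and, per element (3), fuse the surviving node-wise quadratic
activations with adjacent linear operators so that they consume fewer
multiplication levels during encrypted evaluation.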
Related papers
- Learning to Control the Smoothness of Graph Convolutional Network Features [9.949988676706418]
We propose a new strategy to let graph convolutional network (GCN) learn node features with a desired smoothness.
Our approach proceeds in key steps: we establish a geometric relationship between the input and output of ReLU or leaky ReLU, and, building on this geometric insight, we augment the message-passing process of graph convolutional layers with a learnable term that modulates the smoothness of node features.
arXiv Detail & Related papers (2024-10-18T16:57:27Z)
- Scalable Graph Compressed Convolutions [68.85227170390864]
We propose a differentiable method that applies permutations to calibrate input graphs for Euclidean convolution.
Based on the graph calibration, we propose the Compressed Convolution Network (CoCN) for hierarchical graph representation learning.
arXiv Detail & Related papers (2024-07-26T03:14:13Z)
- Verifying message-passing neural networks via topology-based bounds tightening [3.3267518043390205]
We develop a computationally effective approach towards providing robust certificates for message-passing neural networks (MPNNs).
Because our work builds on mixed-integer optimization, it encodes a wide variety of subproblems.
We test on both node and graph classification problems and consider topological attacks that both add and remove edges.
arXiv Detail & Related papers (2024-02-21T17:05:27Z)
- Fixing the NTK: From Neural Network Linearizations to Exact Convex Programs [63.768739279562105]
We show that for a particular choice of mask weights that do not depend on the learning targets, this kernel is equivalent to the NTK of the gated ReLU network on the training data.
A consequence of this lack of dependence on the targets is that the NTK cannot perform better than the optimal MKL kernel on the training set.
arXiv Detail & Related papers (2023-09-26T17:42:52Z)
- Learning k-Level Structured Sparse Neural Networks Using Group Envelope Regularization [4.0554893636822]
We introduce a novel approach to deploying large-scale Deep Neural Networks on constrained resources.
The method speeds up inference time and aims to reduce memory demand and power consumption.
arXiv Detail & Related papers (2022-12-25T15:40:05Z)
- Binary Graph Convolutional Network with Capacity Exploration [58.99478502486377]
We propose a Binary Graph Convolutional Network (Bi-GCN), which binarizes both the network parameters and input node attributes.
Our Bi-GCN can reduce the memory consumption by an average of 31x for both the network parameters and input data, and accelerate the inference speed by an average of 51x.
arXiv Detail & Related papers (2022-10-24T12:05:17Z)
- CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph Convolutional Network Inference [12.03953896181613]
Cloud-based graph convolutional network (GCN) has demonstrated great success and potential in many privacy-sensitive applications.
Despite its high inference accuracy and performance in the cloud, maintaining data privacy in GCN inference remains largely unexplored.
In this paper, we make an initial attempt towards this and develop CryptoGCN, a homomorphic encryption (HE) based GCN inference framework.
arXiv Detail & Related papers (2022-09-24T02:20:54Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike Lottery Ticket Hypothesis (LTH)-based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- Edge Rewiring Goes Neural: Boosting Network Resilience via Policy Gradient [62.660451283548724]
ResiNet is a reinforcement learning framework for discovering network topologies that remain resilient under various disasters and attacks.
We show that ResiNet achieves a near-optimal resilience gain on multiple graphs while balancing the utility, with a large margin compared to existing approaches.
arXiv Detail & Related papers (2021-10-18T06:14:28Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- An Uncoupled Training Architecture for Large Graph Learning [20.784230322205232]
We present Node2Grids, a flexible uncoupled training framework for embedding graph data into grid-like data.
By ranking each node's influence by degree, Node2Grids selects the most influential first- and second-order neighbors and fuses their information with that of the central node.
To further improve the efficiency of downstream tasks, a simple CNN-based neural network is employed to capture the significant information from the mapped grid-like data.
arXiv Detail & Related papers (2020-03-21T11:49:16Z)