CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph
Convolutional Network Inference
- URL: http://arxiv.org/abs/2209.11904v1
- Date: Sat, 24 Sep 2022 02:20:54 GMT
- Title: CryptoGCN: Fast and Scalable Homomorphically Encrypted Graph
Convolutional Network Inference
- Authors: Ran Ran, Nuo Xu, Wei Wang, Gang Quan, Jieming Yin, Wujie Wen
- Abstract summary: Cloud-based graph convolutional network (GCN) has demonstrated great success and potential in many privacy-sensitive applications.
Despite its high inference accuracy and performance in the cloud, maintaining data privacy in GCN inference remains largely unexplored.
In this paper, we make an initial attempt towards this and develop $\textit{CryptoGCN}$, a homomorphic encryption (HE) based GCN inference framework.
- Score: 12.03953896181613
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently cloud-based graph convolutional network (GCN) has demonstrated great
success and potential in many privacy-sensitive applications such as personal
healthcare and financial systems. Despite its high inference accuracy and
performance in the cloud, maintaining data privacy in GCN inference, which is of
paramount importance to these practical applications, remains largely
unexplored. In this paper, we make an initial attempt towards this and develop
$\textit{CryptoGCN}$, a homomorphic encryption (HE) based GCN inference
framework. A key to the success of our approach is to reduce the tremendous
computational overhead for HE operations, which can be orders of magnitude
higher than their counterparts in the plaintext space. To this end, we develop an
approach that can effectively take advantage of the sparsity of matrix
operations in GCN inference to significantly reduce the computational overhead.
Specifically, we propose a novel AMA data formatting method and associated
spatial convolution methods, which can exploit the complex graph structure and
perform efficient matrix-matrix multiplication in HE computation and thus
greatly reduce the number of HE operations. We also develop a co-optimization
framework that can explore the trade-offs among accuracy, security level, and
computational overhead through judicious pruning and polynomial approximation of
the activation modules in GCNs. Based on the NTU-XVIEW skeleton joint dataset,
i.e., the largest dataset evaluated homomorphically that we are aware of, our
experimental results demonstrate that $\textit{CryptoGCN}$ outperforms
state-of-the-art solutions in terms of latency and the number of homomorphic
operations, achieving up to a 3.10$\times$ latency speedup and reducing the
total homomorphic operation count by 77.4\% with a small accuracy loss of
1-1.5\%.
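For readers unfamiliar with why HE-based GCN inference is so costly, here is a minimal plaintext NumPy sketch (not taken from the paper) of the two constraints the abstract alludes to: under leveled HE schemes such as CKKS only additions and multiplications are available, so the activation must be replaced by a low-degree polynomial, and because the graph adjacency matrix is fixed and sparse, every multiplication skipped through structure-aware formatting (the role of the paper's AMA format, which is not reproduced here) directly lowers the homomorphic operation count. The toy graph and all names below are illustrative assumptions.
```python
import numpy as np

# Plaintext toy model of one HE-friendly GCN layer: act(A @ X @ W).
# In a real CKKS pipeline X would be encrypted, A and W would stay in
# plaintext, and only additions/multiplications would be available.
rng = np.random.default_rng(0)

n, f_in, f_out = 8, 4, 3
A = np.kron(np.eye(2), np.ones((4, 4)))   # toy block-diagonal (sparse) adjacency
A /= A.sum(axis=1, keepdims=True)         # row-normalised propagation matrix
X = rng.standard_normal((n, f_in))        # node features (ciphertext side in HE)
W = rng.standard_normal((f_in, f_out))    # model weights (plaintext side in HE)

# Degree-2 least-squares fit to ReLU on [-3, 3]: an HE-evaluable activation
# needing one ciphertext-ciphertext multiplication (for x*x) plus scalar ops.
t = np.linspace(-3.0, 3.0, 601)
c2, c1, c0 = np.polyfit(t, np.maximum(t, 0.0), 2)

def poly_act(x):
    return c2 * x * x + c1 * x + c0

def gcn_layer(A, X, W):
    # A is fixed and mostly zero, so an HE implementation would only schedule
    # the nonzero blocks of A @ (X @ W); skipped blocks mean skipped HE ops.
    return poly_act(A @ (X @ W))

print(gcn_layer(A, X, W).shape)           # -> (8, 3)
```
The sketch only shows why polynomial activations and sparsity-aware scheduling matter; the paper's contribution is carrying this out efficiently inside the packed-ciphertext layout of CKKS.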
Related papers
- LinGCN: Structural Linearized Graph Convolutional Network for
Homomorphically Encrypted Inference [19.5669231249754]
We present LinGCN, a framework designed to reduce multiplication depth and optimize the performance of HE-based GCN inference.
Remarkably, LinGCN achieves a 14.2x latency speedup relative to CryptoGCN, while preserving an inference accuracy of 75% and notably reducing multiplication depth.
arXiv Detail & Related papers (2023-09-25T17:56:54Z)
- Efficient Privacy-Preserving Convolutional Spiking Neural Networks with
FHE [1.437446768735628]
Fully Homomorphic Encryption (FHE) is a key technology for privacy-preserving computation.
FHE has limitations in processing continuous non-polynomial functions.
We present a framework called FHE-DiCSNN for homomorphic spiking neural networks (SNNs).
FHE-DiCSNN achieves an accuracy of 97.94% on ciphertexts, with a loss of only 0.53% compared to the original network's accuracy of 98.47%.
arXiv Detail & Related papers (2023-09-16T15:37:18Z)
- Efficient Dataset Distillation Using Random Feature Approximation [109.07737733329019]
We propose a novel algorithm that uses a random feature approximation (RFA) of the Neural Network Gaussian Process (NNGP) kernel; a generic illustration of the random-feature idea appears after this list.
Our algorithm provides at least a 100-fold speedup over KIP and can run on a single GPU.
Our new method, termed an RFA Distillation (RFAD), performs competitively with KIP and other dataset condensation algorithms in accuracy over a range of large-scale datasets.
arXiv Detail & Related papers (2022-10-21T15:56:13Z)
- A Comprehensive Study on Large-Scale Graph Training: Benchmarking and
Rethinking [124.21408098724551]
Large-scale graph training is a notoriously challenging problem for graph neural networks (GNNs).
We present a new ensembling training approach, named EnGCN, to address the existing issues.
Our proposed method has achieved new state-of-the-art (SOTA) performance on large-scale datasets.
arXiv Detail & Related papers (2022-10-14T03:43:05Z)
- Comprehensive Graph Gradual Pruning for Sparse Training in Graph Neural
Networks [52.566735716983956]
We propose a graph gradual pruning framework termed CGP to dynamically prune GNNs.
Unlike Lottery Ticket Hypothesis (LTH) based methods, the proposed CGP approach requires no re-training, which significantly reduces the computation costs.
Our proposed strategy greatly improves both training and inference efficiency while matching or even exceeding the accuracy of existing methods.
arXiv Detail & Related papers (2022-07-18T14:23:31Z)
- SCGC: Self-Supervised Contrastive Graph Clustering [1.1470070927586016]
Graph clustering discovers groups or communities within networks.
Deep learning methods such as autoencoders cannot incorporate rich structural information.
We propose Self-Supervised Contrastive Graph Clustering (SCGC).
arXiv Detail & Related papers (2022-04-27T01:38:46Z)
- Towards Efficient Graph Convolutional Networks for Point Cloud Handling [181.59146413326056]
We aim at improving the computational efficiency of graph convolutional networks (GCNs) for learning on point clouds.
A series of experiments show that optimized networks have reduced computational complexity, decreased memory consumption, and accelerated inference speed.
arXiv Detail & Related papers (2021-04-12T17:59:16Z)
- Temporal Attention-Augmented Graph Convolutional Network for Efficient
Skeleton-Based Human Action Recognition [97.14064057840089]
Graph convolutional networks (GCNs) have been very successful in modeling non-Euclidean data structures.
Most GCN-based action recognition methods use deep feed-forward networks with high computational complexity to process all skeletons in an action.
We propose a temporal attention module (TAM) for increasing the efficiency in skeleton-based action recognition.
arXiv Detail & Related papers (2020-10-23T08:01:55Z)
- Faster Secure Data Mining via Distributed Homomorphic Encryption [108.77460689459247]
Homomorphic Encryption (HE) has recently been receiving more and more attention for its capability to perform computations over encrypted data.
We propose a novel general distributed HE-based data mining framework as a step towards solving the scaling problem.
We verify the efficiency and effectiveness of our new framework by testing it over various data mining algorithms and benchmark datasets.
arXiv Detail & Related papers (2020-06-17T18:14:30Z)
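The "Efficient Dataset Distillation Using Random Feature Approximation" entry above mentions approximating the NNGP kernel with random features. The sketch below is only a generic illustration of that random-feature idea, not the RFAD algorithm: random ReLU features give a Monte Carlo approximation of the closed-form degree-1 arc-cosine kernel, the basic building block of ReLU NNGP kernels. Function names and constants are illustrative assumptions.
```python
import numpy as np

rng = np.random.default_rng(0)

def arccos1_kernel(x, y):
    # Closed-form degree-1 arc-cosine kernel (Cho & Saul), the building block
    # of the NNGP kernel of a ReLU network.
    nx, ny = np.linalg.norm(x), np.linalg.norm(y)
    cos_t = np.clip(x @ y / (nx * ny), -1.0, 1.0)
    theta = np.arccos(cos_t)
    return (nx * ny / np.pi) * (np.sin(theta) + (np.pi - theta) * np.cos(theta))

def random_relu_features(X, n_features):
    # Random ReLU projections; the sqrt(2/D) scaling makes inner products of
    # these features converge to the arc-cosine kernel as D grows.
    W = rng.standard_normal((X.shape[1], n_features))
    return np.sqrt(2.0 / n_features) * np.maximum(X @ W, 0.0)

d, D = 16, 50000
x, y = rng.standard_normal(d), rng.standard_normal(d)
phi = random_relu_features(np.stack([x, y]), D)
print(phi[0] @ phi[1])       # Monte Carlo estimate via random features
print(arccos1_kernel(x, y))  # exact closed form; the two should be close
```
The appeal of such approximations is that the D-dimensional features can be computed and reused cheaply, avoiding exact kernel evaluations over large datasets.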