LPGNet: Link Private Graph Networks for Node Classification
- URL: http://arxiv.org/abs/2205.03105v1
- Date: Fri, 6 May 2022 09:38:35 GMT
- Title: LPGNet: Link Private Graph Networks for Node Classification
- Authors: Aashish Kolluri, Teodora Baluta, Bryan Hooi, Prateek Saxena
- Abstract summary: We present a new neural network architecture called LPGNet for training on graphs with privacy-sensitive edges.
LPGNet provides differential privacy guarantees for edges using a novel design for how graph edge structure is used during training.
- Score: 37.26462186216589
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Classification tasks on labeled graph-structured data have many important
applications ranging from social recommendation to financial modeling. Deep
neural networks are increasingly being used for node classification on graphs,
wherein nodes with similar features have to be given the same label. Graph
convolutional networks (GCNs) are one such widely studied neural network
architecture that performs well on this task. However, powerful link-stealing
attacks on GCNs have recently shown that even with black-box access to the
trained model, inferring which links (or edges) are present in the training
graph is practical. In this paper, we present a new neural network architecture
called LPGNet for training on graphs with privacy-sensitive edges. LPGNet
provides differential privacy (DP) guarantees for edges using a novel design
for how graph edge structure is used during training. We empirically show that
LPGNet models often lie in the sweet spot between providing privacy and
utility: They can offer better utility than "trivially" private architectures
which use no edge information (e.g., vanilla MLPs) and better resilience
against existing link-stealing attacks than vanilla GCNs which use the full
edge structure. LPGNet also offers consistently better privacy-utility
tradeoffs than DPGCN, which is the state-of-the-art mechanism for retrofitting
differential privacy into conventional GCNs, in most of our evaluated datasets.
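
For intuition on the edge-level DP guarantee being compared here: a standard way to retrofit edge privacy into a GCN (the approach behind DPGCN-style baselines, as I understand it) is to perturb the adjacency matrix itself with randomized response before training. The sketch below illustrates that generic mechanism only; it is not LPGNet's architecture, and the function name is mine.

```python
import numpy as np

def randomized_response_adjacency(A: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Perturb a symmetric 0/1 adjacency matrix with randomized response.

    Each potential edge is reported truthfully with probability
    p = e^eps / (1 + e^eps) and flipped otherwise, which yields an
    epsilon-edge-level DP guarantee for the released matrix.
    """
    rng = rng or np.random.default_rng()
    n = A.shape[0]
    p_keep = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    # Decide independently for each entry whether to report truthfully.
    keep = rng.random((n, n)) < p_keep
    flipped = np.where(keep, A, 1 - A)
    # Re-symmetrize and drop self-loops so the output is still a simple graph.
    upper = np.triu(flipped, k=1)
    return upper + upper.T

# Example: privatize a small graph before training a GCN on it.
A = np.array([[0, 1, 0],
              [1, 0, 1],
              [0, 1, 0]])
A_priv = randomized_response_adjacency(A, epsilon=1.0)
```

Note that as epsilon shrinks the flip probability approaches 1/2, so a sparse graph turns into near-uniform noise; this is one intuition for the utility loss that the abstract attributes to retrofitting DP into conventional GCNs.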
Related papers
- Self-Attention Empowered Graph Convolutional Network for Structure Learning and Node Embedding [5.164875580197953]
In representation learning on graph-structured data, many popular graph neural networks (GNNs) fail to capture long-range dependencies.
This paper proposes a novel graph learning framework called the graph convolutional network with self-attention (GCN-SA).
The proposed scheme exhibits an exceptional generalization capability in node-level representation learning.
arXiv Detail & Related papers (2024-03-06T05:00:31Z)
- Privacy-preserving design of graph neural networks with applications to vertical federated learning [56.74455367682945]
We present an end-to-end graph representation learning framework called VESPER.
VESPER is capable of training high-performance GNN models over both sparse and dense graphs under reasonable privacy budgets.
arXiv Detail & Related papers (2023-10-31T15:34:59Z)
- FedHGN: A Federated Framework for Heterogeneous Graph Neural Networks [45.94642721490744]
Heterogeneous graph neural networks (HGNNs) can learn from typed and relational graph data more effectively than conventional GNNs.
With larger parameter spaces, HGNNs may require more training data, which is often scarce in real-world applications due to privacy regulations.
We propose FedHGN, a novel and general federated graph learning (FGL) framework for HGNNs.
arXiv Detail & Related papers (2023-05-16T18:01:49Z)
- ProGAP: Progressive Graph Neural Networks with Differential Privacy Guarantees [8.79398901328539]
Graph Neural Networks (GNNs) have become a popular tool for learning on graphs, but their widespread use raises privacy concerns.
We propose a new differentially private GNN called ProGAP that uses a progressive training scheme to improve the accuracy-privacy trade-off.
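The summary does not spell out the mechanism, but this line of work (GAP, which ProGAP extends, to my understanding) rests on aggregation perturbation: clip each node embedding so that one edge changes any neighborhood sum by a bounded amount, then add Gaussian noise to the aggregate. A minimal sketch under those assumptions, with all names mine:

```python
import numpy as np

def private_aggregate(X: np.ndarray, A: np.ndarray,
                      clip: float = 1.0, sigma: float = 1.0,
                      rng=None) -> np.ndarray:
    """Noisy neighborhood aggregation (sketch of aggregation perturbation).

    Per-row L2 clipping bounds each node's contribution to any neighborhood
    sum by `clip`, so adding or removing one edge changes the output by at
    most `clip`; Gaussian noise of scale sigma * clip then gives an
    edge-level (epsilon, delta)-DP guarantee via the Gaussian mechanism.
    """
    rng = rng or np.random.default_rng()
    norms = np.maximum(np.linalg.norm(X, axis=1, keepdims=True), 1e-12)
    X_clipped = X * np.minimum(1.0, clip / norms)   # per-row L2 clipping
    agg = A @ X_clipped                             # sum over neighbors
    return agg + rng.normal(0.0, sigma * clip, size=agg.shape)
```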
arXiv Detail & Related papers (2023-04-18T12:08:41Z)
- A Robust Stacking Framework for Training Deep Graph Models with Multifaceted Node Features [61.92791503017341]
Graph Neural Networks (GNNs) with numerical node features and graph structure as inputs have demonstrated superior performance on various supervised learning tasks with graph data.
However, the best models for standard supervised learning on IID (non-graph) data are not easily incorporated into a GNN.
Here we propose a robust stacking framework that fuses graph-aware propagation with arbitrary models intended for IID data.
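As a concrete reading of "fusing graph-aware propagation with models intended for IID data": fit any off-the-shelf classifier on node features alone, then smooth its predicted probabilities over the normalized adjacency. This is my own minimal illustration of the general recipe, not the paper's exact stacking procedure:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def smooth_predictions(A, probs, alpha=0.8, iters=10):
    """Propagate base-model probabilities over the graph so each node's
    prediction is pulled toward its neighbors' predictions."""
    deg = np.maximum(A.sum(axis=1, keepdims=True), 1)
    A_norm = A / deg                       # row-normalized adjacency
    out = probs.copy()
    for _ in range(iters):
        out = alpha * (A_norm @ out) + (1 - alpha) * probs
    return out

# Any IID model can supply the base predictions; logistic regression here.
# X: node features, y_train: labels for train_idx, A: adjacency matrix.
def stacked_predict(X, y_train, train_idx, A):
    base = LogisticRegression(max_iter=1000).fit(X[train_idx], y_train)
    return smooth_predictions(A, base.predict_proba(X))
```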
arXiv Detail & Related papers (2022-06-16T22:46:33Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
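The projected gradient idea can be sketched generically: relax the binary adjacency to values in [0, 1], take a gradient step on whatever attack loss is used, and project back into the feasible set. A minimal PyTorch sketch of that projection step (not GraphMI's exact update; the loss is assumed given):

```python
import torch

def projected_gradient_step(A_relaxed: torch.Tensor, loss: torch.Tensor,
                            lr: float = 0.1) -> torch.Tensor:
    """One projected-gradient step on a relaxed adjacency matrix.

    A_relaxed must have requires_grad=True and the loss must depend on it.
    The binary edge variables are relaxed to [0, 1]; after the gradient
    step we clamp back into the box, re-symmetrize, and zero the diagonal.
    """
    grad, = torch.autograd.grad(loss, A_relaxed)
    A_new = (A_relaxed - lr * grad).clamp(0.0, 1.0)   # box projection
    A_new = 0.5 * (A_new + A_new.T)                   # keep it symmetric
    return A_new.fill_diagonal_(0.0)                  # no self-loops
```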
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- Self-Constructing Graph Convolutional Networks for Semantic Labeling [23.623276007011373]
We propose a novel architecture called the Self-Constructing Graph (SCG), which makes use of learnable latent variables to generate embeddings.
SCG can automatically obtain optimized non-local context graphs from complex-shaped objects in aerial imagery.
We demonstrate the effectiveness and flexibility of the proposed SCG on the publicly available ISPRS Vaihingen dataset.
arXiv Detail & Related papers (2020-03-15T21:55:24Z)
- Graphs, Convolutions, and Neural Networks: From Graph Filters to Graph Neural Networks [183.97265247061847]
We leverage graph signal processing to characterize the representation space of graph neural networks (GNNs).
We discuss the role of graph convolutional filters in GNNs and show that any architecture built with such filters has the fundamental properties of permutation equivariance and stability to changes in the topology.
We also study the use of GNNs in recommender systems and learning decentralized controllers for robot swarms.
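The permutation-equivariance property above can be checked directly for graph convolutional filters, which are polynomials in a graph shift operator S (adjacency or Laplacian): H(S)x = sum_k h_k S^k x. A small numerical check, with all names mine:

```python
import numpy as np

def graph_filter(S: np.ndarray, h: np.ndarray, x: np.ndarray) -> np.ndarray:
    """Apply the polynomial graph filter H(S) x = sum_k h[k] * S^k x."""
    out = np.zeros_like(x, dtype=float)
    Sk_x = x.astype(float)
    for hk in h:
        out += hk * Sk_x
        Sk_x = S @ Sk_x        # next power of the shift operator applied to x
    return out

# Equivariance check: relabeling the nodes commutes with filtering,
# since (P S P^T)^k (P x) = P S^k x for any permutation matrix P.
rng = np.random.default_rng(0)
n = 5
S = rng.random((n, n)); S = (S + S.T) / 2          # symmetric shift operator
x = rng.random(n)
h = np.array([0.5, 0.3, 0.2])
P = np.eye(n)[rng.permutation(n)]                  # random permutation matrix
assert np.allclose(graph_filter(P @ S @ P.T, h, P @ x),
                   P @ graph_filter(S, h, x))
```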
arXiv Detail & Related papers (2020-03-08T13:02:15Z)
- EdgeNets: Edge Varying Graph Neural Networks [179.99395949679547]
This paper puts forth a general framework that unifies state-of-the-art graph neural networks (GNNs) through the concept of EdgeNet.
An EdgeNet is a GNN architecture that allows different nodes to use different parameters to weigh the information of different neighbors.
This is a general linear and local operation that a node can perform, and it encompasses under one formulation all existing graph convolutional neural networks (GCNNs) as well as graph attention networks (GATs).
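Concretely, the per-neighbor weighting can be read as masking a learnable weight matrix with the adjacency, so node i weighs neighbor j by its own parameter Phi[i, j]; a shared scalar recovers a plain graph convolution, while attention-derived weights recover a GAT-style layer. A one-line sketch of that operation, with names mine:

```python
import numpy as np

def edge_varying_aggregate(A: np.ndarray, Phi: np.ndarray,
                           X: np.ndarray) -> np.ndarray:
    """One edge-varying aggregation: node i weighs neighbor j by Phi[i, j].

    Masking Phi with the adjacency keeps the operation local (only real
    edges carry weight), matching the 'linear and local' description.
    """
    return (Phi * A) @ X   # elementwise mask, then local linear aggregation
```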
arXiv Detail & Related papers (2020-01-21T15:51:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.