Mitigating the Performance Sacrifice in DP-Satisfied Federated Settings
through Graph Contrastive Learning
- URL: http://arxiv.org/abs/2207.11836v3
- Date: Sun, 20 Aug 2023 01:28:25 GMT
- Title: Mitigating the Performance Sacrifice in DP-Satisfied Federated Settings
through Graph Contrastive Learning
- Authors: Haoran Yang, Xiangyu Zhao, Muyang Li, Hongxu Chen, Guandong Xu
- Abstract summary: We investigate how differential privacy (DP) can be implemented on graph edges and observe a performance decrease.
Inspired by this, we propose leveraging graph contrastive learning to alleviate the performance drop resulting from DP.
Extensive experiments conducted with four representative graph models on five widely used benchmark datasets show that contrastive learning indeed alleviates the models' DP-induced performance drops.
- Score: 43.73753083910439
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Currently, graph learning models are indispensable tools to help researchers
explore graph-structured data. In academia, using sufficient training data to
optimize a graph model on a single device is a typical approach for training a
capable graph learning model. Due to privacy concerns, however, it is
infeasible to do so in real-world scenarios. Federated learning provides a
practical means of addressing this limitation by introducing various
privacy-preserving mechanisms, such as differential privacy (DP) on the graph
edges. However, although DP in federated graph learning can ensure the security
of sensitive information represented in graphs, it usually causes the
performance of graph learning models to degrade. In this paper, we investigate
how DP can be implemented on graph edges and observe a performance decrease in
our experiments. In addition, we note that DP on graph edges introduces noise
that perturbs graph proximity, which is one of the graph augmentations in graph
contrastive learning. Inspired by this, we propose leveraging graph contrastive
learning to alleviate the performance drop resulting from DP. Extensive
experiments conducted with four representative graph models on five widely used
benchmark datasets show that contrastive learning indeed alleviates the models'
DP-induced performance drops.
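The abstract does not pin down a concrete mechanism, but edge-level DP is commonly realized via randomized response over the adjacency matrix. The sketch below is an illustration of that standard construction, not the paper's exact implementation; the function name and interface are assumptions.

```python
import numpy as np

def randomized_response_edges(adj: np.ndarray, epsilon: float, rng=None) -> np.ndarray:
    """Perturb an undirected 0/1 adjacency matrix with randomized response.

    Each potential edge bit is kept with probability e^eps / (1 + e^eps)
    and flipped otherwise, which satisfies epsilon-edge local DP.
    """
    rng = np.random.default_rng() if rng is None else rng
    keep_prob = np.exp(epsilon) / (1.0 + np.exp(epsilon))
    iu = np.triu_indices(adj.shape[0], k=1)    # perturb each node pair once
    flip = rng.random(iu[0].size) >= keep_prob
    noisy = adj.copy()
    noisy[iu] = np.where(flip, 1 - adj[iu], adj[iu])
    noisy[(iu[1], iu[0])] = noisy[iu]          # mirror: keep the graph undirected
    return noisy
```

Every flipped bit adds or removes an edge, so the mechanism perturbs exactly the graph proximity that edge-level augmentations in graph contrastive learning also perturb; this correspondence is the observation the paper builds on.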
Related papers
- A Survey of Deep Graph Learning under Distribution Shifts: from Graph Out-of-Distribution Generalization to Adaptation [59.14165404728197]
We provide an up-to-date and forward-looking review of deep graph learning under distribution shifts.
Specifically, we cover three primary scenarios: graph OOD generalization, training-time graph OOD adaptation, and test-time graph OOD adaptation.
To provide a better understanding of the literature, we systematically categorize the existing models based on our proposed taxonomy.
arXiv Detail & Related papers (2024-10-25T02:39:56Z)
- CORE: Data Augmentation for Link Prediction via Information Bottleneck [25.044734252779975]
Link prediction (LP) is a fundamental task in graph representation learning.
We propose a novel data augmentation method, COmplete and REduce (CORE) to learn compact and predictive augmentations for LP models.
arXiv Detail & Related papers (2024-04-17T03:20:42Z)
- Robust Causal Graph Representation Learning against Confounding Effects [21.380907101361643]
We propose Robust Causal Graph Representation Learning (RCGRL) to learn robust graph representations against confounding effects.
RCGRL introduces an active approach to generate instrumental variables under unconditional moment restrictions, which empowers the graph representation learning model to eliminate confounders.
arXiv Detail & Related papers (2022-08-18T01:31:25Z)
- Features Based Adaptive Augmentation for Graph Contrastive Learning [0.0]
Self-supervised learning aims to eliminate the need for expensive annotation in graph representation learning.
We introduce a Feature Based Adaptive Augmentation (FebAA) approach, which identifies and preserves potentially influential features.
We successfully improved the accuracy of GRACE and BGRL on eight graph representation learning benchmark datasets.
arXiv Detail & Related papers (2022-07-05T03:41:20Z)
- GraphCoCo: Graph Complementary Contrastive Learning [65.89743197355722]
Graph Contrastive Learning (GCL) has shown promising performance in graph representation learning (GRL) without the supervision of manual annotations.
This paper proposes an effective graph complementary contrastive learning approach named GraphCoCo to tackle the above issue.
arXiv Detail & Related papers (2022-03-24T02:58:36Z)
- Self-Supervised Representation Learning via Latent Graph Prediction [41.64774038444827]
Self-supervised learning (SSL) of graph neural networks is emerging as a promising way of leveraging unlabeled data.
We propose LaGraph, a theoretically grounded predictive SSL framework based on latent graph prediction.
Our experimental results demonstrate the superiority of LaGraph in performance and its robustness to decreasing training sample sizes on both graph-level and node-level tasks.
arXiv Detail & Related papers (2022-02-16T21:10:33Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private training graph data by inverting a GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features.
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
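The GraphMI summary mentions a projected gradient module for handling the discreteness of graph edges. A minimal, hypothetical sketch of that generic idea follows; the `grad_fn` callback (gradient of the attack loss with respect to the relaxed adjacency), the step count, and the threshold are illustrative assumptions, not GraphMI's actual interface.

```python
import numpy as np

def projected_gradient_edges(grad_fn, n: int, steps: int = 200,
                             lr: float = 0.1, threshold: float = 0.5) -> np.ndarray:
    """Recover a discrete adjacency matrix by descending on a continuous
    relaxation and projecting back onto the feasible box after every step."""
    a = np.full((n, n), 0.5)            # start from maximal edge uncertainty
    for _ in range(steps):
        a = a - lr * grad_fn(a)         # grad_fn: d(attack loss)/d(adjacency)
        a = np.clip(a, 0.0, 1.0)        # projection enforces the [0, 1] box
        a = (a + a.T) / 2.0             # keep the relaxation symmetric
    out = (a >= threshold).astype(int)  # binarize back to discrete edges
    np.fill_diagonal(out, 0)            # drop self-loops
    return out
```

Relaxing edges to [0, 1] lets ordinary gradients flow; the projection keeps iterates feasible, and the final thresholding restores a discrete graph.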
- Iterative Graph Self-Distillation [161.04351580382078]
We propose a novel unsupervised graph learning paradigm called Iterative Graph Self-Distillation (IGSD).
IGSD iteratively performs the teacher-student distillation with graph augmentations.
We show that we achieve significant and consistent performance gain on various graph datasets in both unsupervised and semi-supervised settings.
arXiv Detail & Related papers (2020-10-23T18:37:06Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations of similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
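To make the contrastive recipe shared by the GraphCL-style papers above concrete, here is a hedged sketch: an edge-dropping augmentation plus a simplified InfoNCE/NT-Xent loss in which the same node's embedding in the other view is the positive pair. The full GraphCL objective also uses intra-view negatives; `drop_rate`, `tau`, and the function names here are illustrative assumptions.

```python
import numpy as np

def drop_edges(adj: np.ndarray, drop_rate: float, rng) -> np.ndarray:
    """Edge-dropping augmentation: remove each existing edge with prob drop_rate."""
    iu = np.triu_indices(adj.shape[0], k=1)
    keep = rng.random(iu[0].size) >= drop_rate
    aug = np.zeros_like(adj)
    aug[iu] = adj[iu] * keep
    aug[(iu[1], iu[0])] = aug[iu]                  # stay undirected
    return aug

def nt_xent(z1: np.ndarray, z2: np.ndarray, tau: float = 0.5) -> float:
    """Simplified NT-Xent between node embeddings of two augmented views."""
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    sim = (z1 @ z2.T) / tau                        # cross-view similarities
    logits = sim - sim.max(axis=1, keepdims=True)  # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_prob)))      # positives on the diagonal
```

In use, one would encode two independently augmented views with the same GNN encoder and minimize nt_xent on the resulting node embeddings; under edge DP, the noisy graph itself plays the role of one such perturbed view.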
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences arising from its use.