FedGraph: Federated Graph Learning with Intelligent Sampling
- URL: http://arxiv.org/abs/2111.01370v1
- Date: Tue, 2 Nov 2021 04:58:03 GMT
- Title: FedGraph: Federated Graph Learning with Intelligent Sampling
- Authors: Fahao Chen, Peng Li, Toshiaki Miyazaki, and Celimuge Wu
- Abstract summary: Federated learning has attracted much research attention due to its privacy protection in distributed machine learning.
Existing work on federated learning mainly focuses on Convolutional Neural Networks (CNNs), which cannot efficiently handle the graph data that are common in many applications.
In this paper, we propose FedGraph for federated graph learning among multiple computing clients, each of which holds a subgraph.
- Score: 7.798227884125872
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Federated learning has attracted much research attention due to its
privacy protection in distributed machine learning. However, existing work on
federated learning mainly focuses on Convolutional Neural Networks (CNNs),
which cannot efficiently handle the graph data that are common in many
applications. The Graph Convolutional Network (GCN) has been proposed as one
of the most promising techniques for graph learning, but its federated setting
has seldom been explored. In this paper, we propose FedGraph for federated
graph learning among multiple computing clients, each of which holds a
subgraph. FedGraph provides strong graph learning capability across clients by
addressing two unique challenges. First, traditional GCN training requires
sharing feature data among clients, creating a risk of privacy leakage.
FedGraph solves this issue with a novel cross-client convolution operation.
The second challenge is the high GCN training overhead incurred by large graph
sizes. We propose an intelligent graph sampling algorithm based on deep
reinforcement learning, which automatically converges to sampling policies
that balance training speed and accuracy. We implement FedGraph on top of
PyTorch and deploy it on a testbed for performance evaluation. Experimental
results on four popular datasets demonstrate that FedGraph significantly
outperforms existing work by converging faster to higher accuracy.
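The cross-client convolution is described only at a high level in the abstract, so the following PyTorch sketch is an illustration of the general idea rather than the paper's actual operation: features are passed through an encoder before they leave a client, and each layer mixes local aggregation with the encoded messages received from peers. The class, method, and argument names here are all hypothetical.

```python
import torch
import torch.nn as nn

class CrossClientGCNLayer(nn.Module):
    """Hypothetical sketch of a cross-client graph convolution.

    Instead of shipping raw node features between clients, each client
    transmits messages that have already been passed through an encoder,
    so the receiving client never sees the original features.
    """

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.weight = nn.Linear(in_dim, out_dim)   # local feature transform
        self.encoder = nn.Linear(in_dim, in_dim)   # applied before features leave a client

    def encode_for_neighbors(self, x: torch.Tensor) -> torch.Tensor:
        # What a client would transmit for its boundary nodes.
        return self.encoder(x)

    def forward(self, adj_local, x_local, adj_cross, remote_msgs):
        # adj_local:   (n, n) normalized adjacency within this client's subgraph
        # adj_cross:   (n, m) normalized edges to m boundary nodes on other clients
        # remote_msgs: (m, in_dim) already-encoded features received from peers
        h = adj_local @ x_local + adj_cross @ remote_msgs
        return torch.relu(self.weight(h))
```

A client would call encode_for_neighbors on its boundary nodes, exchange the results with its peers, and then run forward locally, so raw features never cross client boundaries.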
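The abstract states only that sampling policies are learned with deep reinforcement learning. Below is a minimal REINFORCE-style sketch of that idea, assuming a policy that picks a neighborhood sample size and receives a reward trading accuracy against training time; the state features, candidate sizes, reward weights, and the random placeholders for training feedback are illustrative assumptions, not the paper's algorithm.

```python
import torch
import torch.nn as nn

class SamplingPolicy(nn.Module):
    """Hypothetical policy network over candidate sample sizes."""

    def __init__(self, state_dim: int, candidate_sizes):
        super().__init__()
        self.candidate_sizes = candidate_sizes
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, len(candidate_sizes)),
        )

    def act(self, state: torch.Tensor):
        dist = torch.distributions.Categorical(logits=self.net(state))
        action = dist.sample()
        return self.candidate_sizes[action.item()], dist.log_prob(action)

policy = SamplingPolicy(state_dim=3, candidate_sizes=[64, 128, 256, 512])
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for episode in range(100):
    # State could summarize recent loss, accuracy, and epoch time (assumed).
    state = torch.randn(3)
    size, log_prob = policy.act(state)
    # Placeholder reward: accuracy gain minus a penalty for slow, large samples.
    accuracy_gain = float(torch.rand(()))   # stand-in for measured improvement
    reward = accuracy_gain - 0.5 * (size / 512.0)
    loss = -log_prob * reward               # REINFORCE gradient estimator
    opt.zero_grad()
    loss.backward()
    opt.step()
```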
Related papers
- FedGraph: A Research Library and Benchmark for Federated Graph Learning [40.257355007504074]
We introduce FedGraph, a research library built for practical distributed deployment and benchmarking in federated graph learning.
FedGraph supports a range of state-of-the-art graph learning methods and includes built-in profiling tools to evaluate system performance.
We demonstrate the first privacy-preserving federated learning system to run on graphs with 100 million nodes.
arXiv Detail & Related papers (2024-10-08T20:18:18Z)
- One Node Per User: Node-Level Federated Learning for Graph Neural Networks [7.428431479479646]
We propose a novel framework for node-level federated graph learning.
We introduce a graph Laplacian regularization term, computed on the latent representations of the node feature vectors, to regulate the user-side model updates (a minimal sketch of such a regularizer appears after this list).
arXiv Detail & Related papers (2024-09-29T02:16:07Z)
- OpenGraph: Towards Open Graph Foundation Models [20.401374302429627]
Graph Neural Networks (GNNs) have emerged as promising techniques for encoding structural information.
A key challenge remains: generalizing to unseen graph data with different properties.
We propose a novel graph foundation model, called OpenGraph, to address this challenge.
arXiv Detail & Related papers (2024-03-02T08:05:03Z)
- SimTeG: A Frustratingly Simple Approach Improves Textual Graph Learning [131.04781590452308]
We present SimTeG, a frustratingly Simple approach for Textual Graph learning.
We first perform supervised parameter-efficient fine-tuning (PEFT) of a pre-trained language model (LM) on the downstream task.
We then generate node embeddings from the last hidden states of the fine-tuned LM (a sketch of this extraction step appears after this list).
arXiv Detail & Related papers (2023-08-03T07:00:04Z)
- Neural Graph Matching for Pre-training Graph Neural Networks [72.32801428070749]
Graph neural networks (GNNs) have shown a powerful capacity for modeling structural data.
We present a novel Graph Matching based GNN Pre-Training framework, called GMPT.
The proposed method can be applied to fully self-supervised pre-training and coarse-grained supervised pre-training.
arXiv Detail & Related papers (2022-03-03T09:53:53Z)
- GraphMI: Extracting Private Graph Data from Graph Neural Networks [59.05178231559796]
We present the Graph Model Inversion attack (GraphMI), which aims to extract the private graph data of the training graph by inverting the GNN.
Specifically, we propose a projected gradient module to tackle the discreteness of graph edges while preserving the sparsity and smoothness of graph features (a generic projected-gradient sketch appears after this list).
We design a graph auto-encoder module to efficiently exploit graph topology, node attributes, and target model parameters for edge inference.
arXiv Detail & Related papers (2021-06-05T07:07:52Z)
- GraphTheta: A Distributed Graph Neural Network Learning System With Flexible Training Strategy [5.466414428765544]
We present GraphTheta, a new distributed graph learning system.
It supports multiple training strategies and enables efficient and scalable learning on big graphs.
This work represents the largest edge-attributed GNN learning task conducted on a billion-scale network in the literature.
arXiv Detail & Related papers (2021-04-21T14:51:33Z)
- Co-embedding of Nodes and Edges with Graph Neural Networks [13.020745622327894]
Graph embedding transforms and encodes a graph's data structure, which lives in a high-dimensional, non-Euclidean feature space.
CensNet is a general graph embedding framework that embeds both nodes and edges into a latent feature space.
Our approach achieves or matches state-of-the-art performance on four graph learning tasks.
arXiv Detail & Related papers (2020-10-25T22:39:31Z)
- Graph Contrastive Learning with Augmentations [109.23158429991298]
We propose a graph contrastive learning (GraphCL) framework for learning unsupervised representations of graph data.
We show that our framework can produce graph representations with similar or better generalizability, transferability, and robustness compared to state-of-the-art methods.
arXiv Detail & Related papers (2020-10-22T20:13:43Z)
- Scaling Graph Neural Networks with Approximate PageRank [64.92311737049054]
We present the PPRGo model, which utilizes an efficient approximation of information diffusion in GNNs (a push-style PPR sketch appears after this list).
In addition to being faster, PPRGo is inherently scalable and can be trivially parallelized for large datasets like those found in industry settings.
We show that training PPRGo and predicting labels for all nodes of a large graph takes under 2 minutes on a single machine, far outpacing other baselines on the same graph.
arXiv Detail & Related papers (2020-07-03T09:30:07Z)
- GCC: Graph Contrastive Coding for Graph Neural Network Pre-Training [62.73470368851127]
Graph representation learning has emerged as a powerful technique for addressing real-world problems.
We design Graph Contrastive Coding (GCC), a self-supervised graph neural network pre-training framework.
We conduct experiments on three graph learning tasks and ten graph datasets.
arXiv Detail & Related papers (2020-06-17T16:18:35Z)
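As a concrete reading of the graph Laplacian term in the One Node Per User entry above, here is a minimal PyTorch sketch of a quadratic Laplacian penalty over latent representations. The penalty's form is the standard graph smoothness regularizer; the exact term in that paper may differ, and the function name is hypothetical.

```python
import torch

def laplacian_penalty(z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Quadratic Laplacian smoothness penalty over latent representations.

    z:   (n, d) latent representations, one row per user/node
    adj: (n, n) symmetric adjacency matrix with non-negative weights

    Returns tr(z^T L z) = 0.5 * sum_ij A_ij * ||z_i - z_j||^2, which is
    small when connected nodes carry similar latent representations.
    """
    deg = adj.sum(dim=1)
    laplacian = torch.diag(deg) - adj
    return torch.einsum("nd,nm,md->", z, laplacian, z)
```

Added to each user's local loss, a term like this pushes the latent representations of connected users toward one another while the model updates stay on the user side.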
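The SimTeG entry describes a two-step recipe: PEFT fine-tune a language model on the downstream task, then read node embeddings off its last hidden states. The snippet below sketches only the second, embedding-extraction step with Hugging Face transformers; the PEFT stage is omitted, the model name is just an example, and mean pooling over the last hidden states is our own simplification.

```python
import torch
from transformers import AutoModel, AutoTokenizer

@torch.no_grad()
def node_embeddings(texts, model_name="bert-base-uncased"):
    """Embed each node's text with the (assumed already fine-tuned) LM."""
    tok = AutoTokenizer.from_pretrained(model_name)
    lm = AutoModel.from_pretrained(model_name).eval()
    batch = tok(texts, padding=True, truncation=True, return_tensors="pt")
    out = lm(**batch)
    # Mean-pool the last hidden states over non-padding tokens.
    mask = batch["attention_mask"].unsqueeze(-1)
    return (out.last_hidden_state * mask).sum(1) / mask.sum(1)
```

The resulting matrix of embeddings would then be handed to any downstream GNN in place of raw bag-of-words node features.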
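The projected gradient module in the GraphMI entry addresses a generic difficulty: graph edges are discrete, so one cannot backpropagate through them directly. A standard workaround, sketched below, is to optimize a continuous relaxation of the adjacency matrix and project it back into [0, 1] after every gradient step; GraphMI's actual module may differ in its loss and projection, and all names here are illustrative.

```python
import torch

def projected_gradient_edges(loss_fn, n_nodes, steps=200, lr=0.1):
    """Recover a relaxed adjacency matrix by projected gradient descent.

    loss_fn: callable taking an (n, n) relaxed adjacency and returning a
             scalar loss (e.g., how well the target model explains it).
    Returns a matrix with entries in [0, 1]; thresholding it afterwards
    yields discrete edges.
    """
    adj = torch.full((n_nodes, n_nodes), 0.5, requires_grad=True)
    opt = torch.optim.SGD([adj], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(adj).backward()
        opt.step()
        with torch.no_grad():
            adj.clamp_(0.0, 1.0)          # projection onto [0, 1]
            adj.copy_((adj + adj.T) / 2)  # keep the graph undirected
    return adj.detach()
```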
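The PPRGo entry hinges on approximating information diffusion with personalized PageRank (PPR). The classic push algorithm that such approximations build on fits in a few lines of plain Python; this is a generic sketch with hypothetical helper names, not PPRGo's implementation.

```python
def approx_ppr(neighbors, source, alpha=0.15, eps=1e-4):
    """Push-style approximation of a personalized PageRank vector.

    neighbors: dict mapping each node to a list of its neighbors
    alpha:     teleport (restart) probability
    eps:       residual tolerance per unit degree; smaller = more accurate
    """
    p, r = {}, {source: 1.0}   # PPR estimate and residual mass
    queue = [source]
    while queue:
        u = queue.pop()
        deg = len(neighbors[u]) or 1   # guard against isolated nodes
        if r.get(u, 0.0) < eps * deg:
            continue
        res = r.pop(u)
        p[u] = p.get(u, 0.0) + alpha * res    # keep alpha of the mass at u
        share = (1.0 - alpha) * res / deg     # push the rest to neighbors
        for v in neighbors[u]:
            r[v] = r.get(v, 0.0) + share
            if r[v] >= eps * max(len(neighbors[v]), 1):
                queue.append(v)
    return p
```

Because only nodes whose residual exceeds the tolerance are ever touched, the returned dictionary stays sparse even on very large graphs, which is what lets PPRGo-style models precompute neighborhood aggregation instead of running it at every layer.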