FedGKD: Unleashing the Power of Collaboration in Federated Graph Neural
Networks
- URL: http://arxiv.org/abs/2309.09517v3
- Date: Thu, 21 Sep 2023 08:37:22 GMT
- Authors: Qiying Pan, Ruofan Wu, Tengfei Liu, Tianyi Zhang, Yifei Zhu, Weiqiang
Wang
- Abstract summary: Federated training of Graph Neural Networks (GNNs) has become popular in recent years due to its ability to perform graph-related tasks under data isolation scenarios. However, graph heterogeneity issues in federated GNN systems continue to pose challenges. We propose FedGKD, a federated GNN framework built on a novel client-side graph dataset distillation method.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Federated training of Graph Neural Networks (GNNs) has become popular in
recent years due to its ability to perform graph-related tasks under data
isolation scenarios while preserving data privacy. However, graph heterogeneity
issues in federated GNN systems continue to pose challenges. Existing
frameworks address the problem by representing local tasks using different
statistics and relating them through a simple aggregation mechanism. However,
these approaches are limited in two respects: the low quality of their
task-relatedness quantification and their inability to exploit the
collaboration structure. To address these issues, we propose FedGKD, a novel
federated GNN framework that uses a client-side graph dataset distillation
method to extract task features that better describe task-relatedness, and a
server-side aggregation mechanism that is aware of the global collaboration
structure. We conduct extensive experiments on six real-world datasets of
different scales, demonstrating that our framework outperforms existing
approaches.
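The abstract describes a two-stage pipeline: clients distill their local graphs into compact task features, and the server turns pairwise similarity of those features into collaboration-aware aggregation weights. The sketch below illustrates only this high-level idea in numpy; the distillation stand-in (a mean over node features), the cosine-similarity softmax, and all names and shapes are assumptions for illustration, not the authors' actual method.

```python
import numpy as np

# Hedged sketch of the two-stage idea behind FedGKD (illustrative only):
#  1. each client summarizes its local graph as a small task feature vector
#     (a stand-in for graph dataset distillation),
#  2. the server derives collaboration-aware aggregation weights from the
#     pairwise similarity of those task features and mixes client models.

rng = np.random.default_rng(0)

def distill_task_feature(node_features: np.ndarray) -> np.ndarray:
    """Stand-in for graph dataset distillation: summarize a client's
    graph as the mean of its node features."""
    return node_features.mean(axis=0)

def collaboration_weights(task_feats: np.ndarray, tau: float = 0.5) -> np.ndarray:
    """Server side: softmax over cosine similarities, so each client
    aggregates mostly from clients with related tasks."""
    normed = task_feats / np.linalg.norm(task_feats, axis=1, keepdims=True)
    sim = normed @ normed.T                      # (n_clients, n_clients)
    logits = sim / tau
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    w = np.exp(logits)
    return w / w.sum(axis=1, keepdims=True)      # rows sum to 1

# Three toy clients: the first two share a feature direction, the third differs.
v_shared = np.ones(8)
v_other = np.concatenate([np.ones(4), -np.ones(4)])
clients = [rng.normal(loc=v, scale=0.1, size=(20, 8))
           for v in (v_shared, v_shared, v_other)]

# Stand-in "model parameters" per client.
models = np.stack([g.mean(axis=0) for g in clients])

task_feats = np.stack([distill_task_feature(g) for g in clients])
W = collaboration_weights(task_feats)

# Each client receives a personalized aggregate of all client models,
# weighted toward clients with related tasks.
personalized = W @ models
print(W.round(2))
```

In this toy setup the first two clients end up weighting each other more heavily than the third, which is the qualitative behavior a collaboration-structure-aware aggregator is meant to produce.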
Related papers
- A Pure Transformer Pretraining Framework on Text-attributed Graphs
We introduce a feature-centric pretraining perspective by treating graph structure as a prior.
Our framework, Graph Sequence Pretraining with Transformer (GSPT), samples node contexts through random walks.
GSPT can be easily adapted to both node classification and link prediction, demonstrating promising empirical success on various datasets.
arXiv Detail & Related papers (2024-06-19T22:30:08Z)
- Cooperative Network Learning for Large-Scale and Decentralized Graphs
We introduce a Cooperative Network Learning (CNL) framework to ensure secure graph computing for various graph tasks.
CNL unifies the local and global perspectives of GNN computing with distributed data for an agency.
We hope this framework will address privacy concerns in graph-related research and integrate decentralized graph data structures.
arXiv Detail & Related papers (2023-11-03T02:56:01Z)
- Network Alignment with Transferable Graph Autoencoders
We propose a novel graph autoencoder architecture designed to extract powerful and robust node embeddings.
We prove that the generated embeddings are associated with the eigenvalues and eigenvectors of the graphs.
Our proposed framework also leverages transfer learning and data augmentation to achieve efficient network alignment at a very large scale without retraining.
arXiv Detail & Related papers (2023-10-05T02:58:29Z)
- Redundancy-Free Self-Supervised Relational Learning for Graph Clustering
We propose a novel self-supervised deep graph clustering method named Redundancy-Free Graph Clustering (R$^2$FGC).
It extracts the attribute- and structure-level relational information from both global and local views based on an autoencoder and a graph autoencoder.
Experiments on widely used benchmark datasets validate the superiority of R$^2$FGC over state-of-the-art baselines.
arXiv Detail & Related papers (2023-09-09T06:18:50Z)
- Over-Squashing in Graph Neural Networks: A Comprehensive Survey
This survey delves into the challenge of over-squashing in Graph Neural Networks (GNNs).
It comprehensively explores the causes, consequences, and mitigation strategies for over-squashing.
Various methodologies are reviewed, including graph rewiring, novel normalization, spectral analysis, and curvature-based strategies.
The survey also discusses the interplay between over-squashing and other GNN limitations, such as over-smoothing.
arXiv Detail & Related papers (2023-08-29T18:46:15Z)
- Graph Transformer GANs for Graph-Constrained House Generation
We present a novel graph Transformer generative adversarial network (GTGAN) that learns effective graph node relations in an end-to-end fashion for the challenging graph-constrained house generation task.
arXiv Detail & Related papers (2023-03-14T20:35:45Z)
- Policy-GNN: Aggregation Optimization for Graph Neural Networks
Graph neural networks (GNNs) aim to model the local graph structures and capture the hierarchical patterns by aggregating the information from neighbors.
It is a challenging task to develop an effective aggregation strategy for each node, given complex graphs and sparse features.
We propose Policy-GNN, a meta-policy framework that models the sampling procedure and message passing of GNNs as a combined learning process.
arXiv Detail & Related papers (2020-06-26T17:03:06Z)
- Principal Neighbourhood Aggregation for Graph Nets
Graph Neural Networks (GNNs) have been shown to be effective models for different predictive tasks on graph-structured data.
Recent work on their expressive power has focused on isomorphism tasks and countable feature spaces.
We extend this theoretical framework to include continuous features which occur regularly in real-world input domains.
arXiv Detail & Related papers (2020-04-12T23:30:00Z)
- Tensor Graph Convolutional Networks for Multi-relational and Robust Learning
This paper introduces a tensor-graph convolutional network (TGCN) for scalable semi-supervised learning (SSL) from data associated with a collection of graphs represented by a tensor.
The proposed architecture achieves markedly improved performance relative to standard GCNs, copes with state-of-the-art adversarial attacks, and leads to remarkable SSL performance over protein-to-protein interaction networks.
arXiv Detail & Related papers (2020-03-15T02:33:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.