FedGT: Federated Node Classification with Scalable Graph Transformer
- URL: http://arxiv.org/abs/2401.15203v1
- Date: Fri, 26 Jan 2024 21:02:36 GMT
- Title: FedGT: Federated Node Classification with Scalable Graph Transformer
- Authors: Zaixi Zhang, Qingyong Hu, Yang Yu, Weibo Gao, Qi Liu
- Abstract summary: We propose a scalable Federated Graph Transformer (FedGT) in the paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
- Score: 27.50698154862779
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are widely used to model relational data. As graphs are getting larger
and larger in real-world scenarios, there is a trend to store and compute
subgraphs in multiple local systems. For example, recently proposed
\emph{subgraph federated learning} methods train Graph Neural Networks (GNNs)
distributively on local subgraphs and aggregate GNN parameters with a central
server. However, existing methods have the following limitations: (1) The links
between local subgraphs are missing in subgraph federated learning. This could
severely damage the performance of GNNs that follow message-passing paradigms
to update node/edge features. (2) Most existing methods overlook the subgraph
heterogeneity issue, brought by subgraphs being from different parts of the
whole graph. To address the aforementioned challenges, we propose a scalable
\textbf{Fed}erated \textbf{G}raph \textbf{T}ransformer (\textbf{FedGT}) in the
paper. Firstly, we design a hybrid attention scheme to reduce the complexity of
the Graph Transformer to linear while ensuring a global receptive field with
theoretical bounds. Specifically, each node attends to the sampled local
neighbors and a set of curated global nodes to learn both local and global
information and be robust to missing links. The global nodes are dynamically
updated during training with an online clustering algorithm to capture the data
distribution of the corresponding local subgraph. Secondly, FedGT computes
clients' similarity based on the aligned global nodes with optimal transport.
The similarity is then used to perform weighted averaging for personalized
aggregation, which well addresses the data heterogeneity problem. Moreover,
local differential privacy is applied to further protect the privacy of
clients. Finally, extensive experimental results on 6 datasets and 2 subgraph
settings demonstrate the superiority of FedGT.
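A minimal PyTorch sketch of the hybrid attention scheme described in the abstract: each node attends to k sampled local neighbors plus a shared set of m global nodes, so the per-node cost is O(k + m) rather than quadratic in the number of nodes. The tensor shapes, single-head formulation, and function names below are illustrative assumptions, not the paper's exact implementation.
```python
import torch
import torch.nn.functional as F

def hybrid_attention(h, local_neighbors, global_nodes, W_q, W_k, W_v):
    """Hybrid attention sketch: each node attends to its sampled local
    neighbors and to a shared set of curated global nodes.

    h:               [N, d]    node features
    local_neighbors: [N, k, d] features of k sampled neighbors per node
    global_nodes:    [m, d]    global node representations
    W_q, W_k, W_v:   [d, d]    projection matrices
    """
    N, k, d = local_neighbors.shape
    m = global_nodes.shape[0]

    q = h @ W_q                                                # [N, d]
    # Keys/values: per-node local context concatenated with the shared global nodes.
    ctx = torch.cat(
        [local_neighbors, global_nodes.unsqueeze(0).expand(N, m, d)], dim=1
    )                                                          # [N, k+m, d]
    k_mat = ctx @ W_k                                          # [N, k+m, d]
    v_mat = ctx @ W_v                                          # [N, k+m, d]

    scores = torch.einsum('nd,ncd->nc', q, k_mat) / d ** 0.5   # [N, k+m]
    attn = F.softmax(scores, dim=-1)
    return torch.einsum('nc,ncd->nd', attn, v_mat)             # [N, d]

# Toy usage with random tensors (shapes are placeholders).
N, k, m, d = 100, 8, 16, 32
out = hybrid_attention(torch.randn(N, d),
                       torch.randn(N, k, d),
                       torch.randn(m, d),
                       torch.randn(d, d), torch.randn(d, d), torch.randn(d, d))
print(out.shape)  # torch.Size([100, 32])
```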
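The abstract states that the global nodes are updated dynamically with an online clustering algorithm to capture the local subgraph's data distribution. Below is a hedged sketch using a mini-batch k-means-style rule; the paper's exact clustering procedure may differ.
```python
import torch

def update_global_nodes(global_nodes, batch_h, counts):
    """Online (mini-batch k-means style) update of the m global nodes.

    global_nodes: [m, d] current global node representations (cluster centers)
    batch_h:      [B, d] node representations from the current mini-batch
    counts:       [m]    running assignment counts (sets per-center step size)
    Illustrative update rule only, not necessarily the one used in FedGT.
    """
    dists = torch.cdist(batch_h, global_nodes)     # [B, m] pairwise distances
    assign = dists.argmin(dim=1)                   # nearest global node per sample

    for j in range(global_nodes.shape[0]):
        members = batch_h[assign == j]
        if members.numel() == 0:
            continue
        counts[j] += members.shape[0]
        lr = members.shape[0] / counts[j]          # per-center learning rate
        global_nodes[j] = (1 - lr) * global_nodes[j] + lr * members.mean(dim=0)
    return global_nodes, counts

# Toy usage.
g, counts = torch.randn(16, 32), torch.zeros(16)
g, counts = update_global_nodes(g, torch.randn(64, 32), counts)
```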
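For personalized aggregation, client similarity is computed from the aligned global nodes with optimal transport and then used for weighted averaging of model parameters. The server-side sketch below assumes the POT library (`ot`) and turns the exact OT cost between two clients' global node sets into a similarity score; this is an illustration of the idea, not FedGT's exact alignment or weighting formula.
```python
import numpy as np
import ot  # POT: Python Optimal Transport

def client_similarity(global_nodes_i, global_nodes_j):
    """OT cost between two clients' global node sets, mapped to a similarity.
    Both inputs: [m, d] arrays; uniform marginals over the m global nodes."""
    m = global_nodes_i.shape[0]
    a = b = np.full(m, 1.0 / m)
    M = ot.dist(global_nodes_i, global_nodes_j)  # pairwise squared Euclidean costs
    cost = ot.emd2(a, b, M)                      # exact optimal transport cost
    return np.exp(-cost)                         # lower transport cost -> higher similarity

def personalized_aggregate(client_params, similarities, k):
    """Similarity-weighted average of all clients' parameters for client k."""
    w = similarities[k] / similarities[k].sum()
    return sum(w_i * p for w_i, p in zip(w, client_params))

# Toy usage: 3 clients, 16 global nodes of dim 32, flat parameter vectors of dim 10.
rng = np.random.default_rng(0)
g = [rng.normal(size=(16, 32)) for _ in range(3)]
params = [rng.normal(size=10) for _ in range(3)]
S = np.array([[client_similarity(g[i], g[j]) for j in range(3)] for i in range(3)])
print(personalized_aggregate(params, S, k=0).shape)  # (10,)
```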
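The abstract also mentions applying local differential privacy on the client side. A simple illustration using the Gaussian mechanism (clip, then add calibrated noise) before a client uploads its global nodes; `clip_norm` and `sigma` are placeholder values, and the actual mechanism and calibration in FedGT may differ.
```python
import torch

def add_local_dp_noise(global_nodes, clip_norm=1.0, sigma=0.5):
    """Illustrative local DP step (Gaussian mechanism) applied client-side
    before uploading global node representations."""
    # Clip each global node's L2 norm to bound sensitivity, then add noise.
    norms = global_nodes.norm(dim=1, keepdim=True).clamp(min=1e-12)
    clipped = global_nodes * torch.clamp(clip_norm / norms, max=1.0)
    return clipped + sigma * clip_norm * torch.randn_like(clipped)
```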
Related papers
- DA-MoE: Addressing Depth-Sensitivity in Graph-Level Analysis through Mixture of Experts [70.21017141742763]
Graph neural networks (GNNs) are gaining popularity for processing graph-structured data.
Existing methods generally use a fixed number of GNN layers to generate representations for all graphs.
We propose the depth-adaptive mixture of experts (DA-MoE) method, which incorporates two main improvements to GNNs.
arXiv Detail & Related papers (2024-11-05T11:46:27Z) - Federated Graph Learning with Structure Proxy Alignment [43.13100155569234]
Federated Graph Learning (FGL) aims to learn graph learning models over graph data distributed in multiple data owners.
We propose FedSpray, a novel FGL framework that learns local class-wise structure proxies in the latent space.
Our goal is to obtain the aligned structure proxies that can serve as reliable, unbiased neighboring information for node classification.
arXiv Detail & Related papers (2024-08-18T07:32:54Z) - DPGAN: A Dual-Path Generative Adversarial Network for Missing Data Imputation in Graphs [17.847551850315895]
This paper proposes a novel framework, called Dual-Path Generative Adversarial Network (DPGAN).
DPGAN can deal simultaneously with missing data and avoid over-smoothing problems.
Comprehensive experiments across various benchmark datasets substantiate that DPGAN consistently rivals, if not outperforms, existing state-of-the-art imputation algorithms.
arXiv Detail & Related papers (2024-04-26T05:26:10Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then a bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - Fused Gromov-Wasserstein Graph Mixup for Graph-level Classifications [44.15102300886821]
We propose a novel graph mixup algorithm called FGWMixup, which seeks a midpoint of source graphs in the Fused Gromov-Wasserstein metric space.
Experiments conducted on five datasets demonstrate that FGWMixup effectively improves the generalizability and robustness of GNNs.
arXiv Detail & Related papers (2023-06-28T07:00:12Z) - NodeFormer: A Scalable Graph Structure Learning Transformer for Node Classification [70.51126383984555]
We introduce a novel all-pair message passing scheme for efficiently propagating node signals between arbitrary nodes.
The efficient computation is enabled by a kernelized Gumbel-Softmax operator.
Experiments demonstrate the promising efficacy of the method in various tasks including node classification on graphs.
arXiv Detail & Related papers (2023-06-14T09:21:15Z) - Personalized Subgraph Federated Learning [56.52903162729729]
We introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs.
We propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.
We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs.
arXiv Detail & Related papers (2022-06-21T09:02:53Z) - Deep Graph-level Anomaly Detection by Glocal Knowledge Distillation [61.39364567221311]
Graph-level anomaly detection (GAD) describes the problem of detecting graphs that are abnormal in their structure and/or the features of their nodes.
One of the challenges in GAD is to devise graph representations that enable the detection of both locally- and globally-anomalous graphs.
We introduce a novel deep anomaly detection approach for GAD that learns rich global and local normal pattern information by joint random distillation of graph and node representations.
arXiv Detail & Related papers (2021-12-19T05:04:53Z) - Imbalanced Graph Classification via Graph-of-Graph Neural Networks [16.589373163769853]
Graph Neural Networks (GNNs) have achieved unprecedented success in learning graph representations to identify categorical labels of graphs.
We introduce a novel framework, Graph-of-Graph Neural Networks (G$^2$GNN), which alleviates the graph imbalance issue by deriving extra supervision globally from neighboring graphs and locally from graphs themselves.
Our proposed G$^2$GNN outperforms numerous baselines by roughly 5% in both F1-macro and F1-micro scores.
arXiv Detail & Related papers (2021-12-01T02:25:47Z) - Local Augmentation for Graph Neural Networks [78.48812244668017]
We introduce local augmentation, which enhances node features using their local subgraph structures.
Based on the local augmentation, we further design a novel framework, LA-GNN, which can be applied to any GNN model in a plug-and-play manner.
arXiv Detail & Related papers (2021-09-08T18:10:08Z)