FedGTA: Topology-aware Averaging for Federated Graph Learning
- URL: http://arxiv.org/abs/2401.11755v1
- Date: Mon, 22 Jan 2024 08:31:53 GMT
- Title: FedGTA: Topology-aware Averaging for Federated Graph Learning
- Authors: Xunkai Li, Zhengyu Wu, Wentao Zhang, Yinlin Zhu, Rong-Hua Li, Guoren
Wang
- Abstract summary: Federated Graph Learning (FGL) is a distributed machine learning paradigm that enables collaborative training on large-scale subgraphs.
Most FGL optimization strategies ignore graph structure, yielding unsatisfactory performance and slow convergence.
We propose Federated Graph Topology-aware Aggregation (FedGTA), a personalized optimization strategy that optimizes aggregation through topology-aware local smoothing confidence and mixed neighbor features.
- Score: 44.11777886421429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Graph Learning (FGL) is a distributed machine learning paradigm
that enables collaborative training on large-scale subgraphs across multiple
local systems. Existing FGL studies fall into two categories: (i) FGL
Optimization, which improves multi-client training in existing machine learning
models; (ii) FGL Model, which enhances performance with complex local models
and multi-client interactions. However, most FGL optimization strategies are
designed specifically for the computer vision domain and ignore graph structure, yielding unsatisfactory performance and slow convergence. Meanwhile,
the complex local model architectures in FGL Model studies lack scalability for
handling large-scale subgraphs and have deployment limitations. To address
these issues, we propose Federated Graph Topology-aware Aggregation (FedGTA), a
personalized optimization strategy that optimizes through topology-aware local
smoothing confidence and mixed neighbor features. In our experiments, we deploy FedGTA on 12 multi-scale real-world datasets partitioned with the Louvain and Metis splits.
This allows us to evaluate the performance and robustness of FedGTA across a
range of scenarios. Extensive experiments demonstrate that FedGTA achieves
state-of-the-art performance while exhibiting high scalability and efficiency.
Our experiments include ogbn-papers100M, the most representative large-scale graph dataset, which allows us to verify the applicability of our method to large-scale graph learning. To the best of our knowledge, our study is the
first to bridge large-scale graph learning with FGL using this optimization
strategy, contributing to the development of efficient and scalable FGL
methods.
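To make the aggregation idea concrete, the following is a minimal Python/NumPy sketch of server-side topology-aware weighted averaging, loosely in the spirit of FedGTA. It is an illustration under stated assumptions, not the paper's exact algorithm: FedGTA additionally uses mixed moments of neighbor features and personalized per-client weights, and all names here (smoothing_confidence, topology_aware_average, and their arguments) are hypothetical.

```python
import numpy as np

def smoothing_confidence(adj: np.ndarray, soft_labels: np.ndarray, k: int = 2) -> float:
    """Propagate soft labels k hops, then score confidence as exp(-mean entropy)."""
    # Row-normalize the adjacency matrix so each step averages over neighbors.
    deg = adj.sum(axis=1, keepdims=True)
    p = adj / np.maximum(deg, 1.0)
    y = soft_labels
    for _ in range(k):  # k-step label propagation over the local subgraph
        y = p @ y
    y = np.clip(y, 1e-12, 1.0)
    entropy = -(y * np.log(y)).sum(axis=1).mean()
    return float(np.exp(-entropy))  # low entropy -> high confidence

def topology_aware_average(models, num_samples, confidences):
    """Average client parameter dicts with weights = samples * confidence."""
    w = np.array([n * c for n, c in zip(num_samples, confidences)], dtype=float)
    w /= w.sum()
    return {key: sum(wi * m[key] for wi, m in zip(w, models)) for key in models[0]}
```

Compared with plain FedAvg, where aggregation weights are proportional to sample counts alone, scaling each weight by a topology-aware confidence down-weights clients whose local subgraphs produce noisy, high-entropy predictions after smoothing.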
Related papers
- MTLSO: A Multi-Task Learning Approach for Logic Synthesis Optimization [19.13500546022262]
MTLSO is a Multi-Task Learning approach for Logic Synthesis Optimization.
We introduce an auxiliary task of binary multi-label graph classification alongside the primary regression task.
We also employ a hierarchical graph representation learning strategy to improve the model's capacity for learning expressive graph-level representations.
arXiv Detail & Related papers (2024-09-09T21:20:36Z) - OpenFGL: A Comprehensive Benchmarks for Federated Graph Learning [36.04858706246336]
Federated graph learning (FGL) has emerged as a promising distributed training paradigm for graph neural networks across multiple local systems without direct data sharing.
Despite the proliferation of FGL, the diverse motivations from practical applications, spanning various research backgrounds and experimental settings, pose a significant challenge to fair evaluation.
We propose OpenFGL, a unified benchmark designed for the primary FGL scenarios: Graph-FL and Subgraph-FL.
arXiv Detail & Related papers (2024-08-29T06:40:01Z) - SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation [16.599474223790843]
Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training on multiple clients for classification tasks.
We propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration.
We show that SpreadFGL achieves higher accuracy and faster convergence than state-of-the-art algorithms.
arXiv Detail & Related papers (2024-07-14T09:34:19Z) - Amplify Graph Learning for Recommendation via Sparsity Completion [16.32861024767423]
Graph learning models have been widely deployed in collaborative filtering (CF) based recommendation systems.
Due to the issue of data sparsity, the graph structure of the original input lacks potential positive preference edges.
We propose an Amplify Graph Learning framework based on Sparsity Completion (called AGL-SC).
arXiv Detail & Related papers (2024-06-27T08:26:20Z) - FedBone: Towards Large-Scale Federated Multi-Task Learning [13.835972363413884]
In real-world applications, visual and natural language tasks typically require large-scale models to extract high-level abstract features.
Existing heterogeneous federated multi-task learning (HFML) methods disregard the impact of gradient conflicts on multi-task optimization.
We propose an innovative framework called FedBone, which enables the construction of large-scale models with better generalization.
arXiv Detail & Related papers (2023-06-30T08:19:38Z) - Learning Strong Graph Neural Networks with Weak Information [64.64996100343602]
We develop a principled approach to the problem of graph learning with weak information (GLWI).
We propose D$^2$PT, a dual-channel GNN framework that performs long-range information propagation not only on the input graph with incomplete structure but also on a global graph that encodes global semantic similarities.
arXiv Detail & Related papers (2023-05-29T04:51:09Z) - FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient
Package for Federated Graph Learning [65.48760613529033]
Federated graph learning (FGL) has not been well supported due to its unique characteristics and requirements.
We first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G).
We validate the effectiveness of FS-G by conducting extensive experiments, which also yield many valuable insights about FGL for the community.
arXiv Detail & Related papers (2022-04-12T06:48:06Z) - Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism for building a distributed control and aggregation methodology across regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z) - Robust Optimization as Data Augmentation for Large-scale Graphs [117.2376815614148]
We propose FLAG (Free Large-scale Adversarial Augmentation on Graphs), which iteratively augments node features with gradient-based adversarial perturbations during training.
FLAG is a general-purpose approach for graph data that works universally across node classification, link prediction, and graph classification tasks; a minimal sketch of this style of augmentation appears after this list.
arXiv Detail & Related papers (2020-10-19T21:51:47Z) - Iterative Deep Graph Learning for Graph Neural Networks: Better and
Robust Node Embeddings [53.58077686470096]
We propose an end-to-end graph learning framework, namely Iterative Deep Graph Learning (IDGL) for jointly and iteratively learning graph structure and graph embedding.
Our experiments show that our proposed IDGL models can consistently outperform or match the state-of-the-art baselines.
arXiv Detail & Related papers (2020-06-21T19:49:15Z)