FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient
Package for Federated Graph Learning
- URL: http://arxiv.org/abs/2204.05562v3
- Date: Thu, 14 Apr 2022 03:51:30 GMT
- Title: FederatedScope-GNN: Towards a Unified, Comprehensive and Efficient
Package for Federated Graph Learning
- Authors: Zhen Wang, Weirui Kuang, Yuexiang Xie, Liuyi Yao, Yaliang Li, Bolin
Ding, Jingren Zhou
- Abstract summary: Federated graph learning (FGL) has not been well supported due to its unique characteristics and requirements.
We first discuss the challenges in creating an easy-to-use FGL package and accordingly present our implemented package FederatedScope-GNN (FS-G)
We validate the effectiveness of FS-G by conducting extensive experiments, which simultaneously yield many valuable insights about FGL for the community.
- Score: 65.48760613529033
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The incredible development of federated learning (FL) has benefited various
tasks in the domains of computer vision and natural language processing, and
existing frameworks such as TFF and FATE have made deployment easy in
real-world applications. However, federated graph learning (FGL), even though
graph data are prevalent, has not been well supported due to its unique
characteristics and requirements. The lack of an FGL-specific framework increases
the effort required to accomplish reproducible research and to deploy FGL in
real-world applications. Motivated by such strong demand, in this paper, we first discuss
the challenges in creating an easy-to-use FGL package and accordingly present
our implemented package FederatedScope-GNN (FS-G), which provides (1) a unified
view for modularizing and expressing FGL algorithms; (2) comprehensive DataZoo
and ModelZoo for out-of-the-box FGL capability; (3) an efficient model
auto-tuning component; and (4) off-the-shelf privacy attack and defense
abilities. We validate the effectiveness of FS-G by conducting extensive
experiments, which simultaneously yield many valuable insights about FGL for
the community. Moreover, we employ FS-G to serve the FGL application in
real-world E-commerce scenarios, where the attained improvements indicate great
potential business benefits. We publicly release FS-G, as submodules of
FederatedScope, at https://github.com/alibaba/FederatedScope to promote FGL's
research and enable broad applications that would otherwise be infeasible due
to the lack of a dedicated package.
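At its core, the federated training that a package like FS-G orchestrates reduces to rounds of local client updates followed by server-side aggregation. The sketch below illustrates this generic FedAvg-style loop; it is not FS-G's actual API, and all function and variable names here are illustrative assumptions.

```python
import numpy as np

def local_update(params, grad_fn, lr=0.1, steps=5):
    # Client side: a few plain gradient steps on local (private) data.
    p = params.copy()
    for _ in range(steps):
        p -= lr * grad_fn(p)
    return p

def fedavg(client_params, client_sizes):
    # Server side: weight each client's parameters by its sample count.
    weights = np.array(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, client_params))

# Toy example: two clients with the same quadratic loss minimized at 2.0,
# but different local data sizes (hence different aggregation weights).
global_p = np.zeros(1)
grad_fns = [lambda p: 2 * (p - 2.0), lambda p: 2 * (p - 2.0)]
sizes = [100, 300]
for _ in range(20):  # communication rounds
    updates = [local_update(global_p, g) for g in grad_fns]
    global_p = fedavg(updates, sizes)
print(global_p)  # converges toward 2.0
```

In FGL, the locally trained model would be a GNN and each client would hold a private (sub)graph, but the round structure is the same.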
Related papers
- FedSSP: Federated Graph Learning with Spectral Knowledge and Personalized Preference [31.796411806840087]
Personalized Federated Graph Learning (pFGL) facilitates the decentralized training of Graph Neural Networks (GNNs) without compromising privacy.
Previous pFGL methods incorrectly share non-generic knowledge globally and fail to tailor personalized solutions locally.
We propose our pFGL framework FedSSP which Shares generic Spectral knowledge while satisfying graph Preferences.
arXiv Detail & Related papers (2024-10-26T07:09:27Z) - OpenFGL: A Comprehensive Benchmark for Federated Graph Learning [36.04858706246336]
Federated graph learning (FGL) has emerged as a promising distributed training paradigm for graph neural networks across multiple local systems without direct data sharing.
Despite the proliferation of FGL, the diverse motivations from practical applications, spanning various research backgrounds and experimental settings, pose a significant challenge to fair evaluation.
We propose OpenFGL, a unified benchmark designed for the primary FGL scenarios: Graph-FL and Subgraph-FL.
arXiv Detail & Related papers (2024-08-29T06:40:01Z) - SpreadFGL: Edge-Client Collaborative Federated Graph Learning with Adaptive Neighbor Generation [16.599474223790843]
Federated Graph Learning (FGL) has garnered widespread attention by enabling collaborative training on multiple clients for classification tasks.
We propose a novel FGL framework, named SpreadFGL, to promote the information flow in edge-client collaboration.
We show that SpreadFGL achieves higher accuracy and faster convergence compared with state-of-the-art algorithms.
arXiv Detail & Related papers (2024-07-14T09:34:19Z) - HiFGL: A Hierarchical Framework for Cross-silo Cross-device Federated Graph Learning [12.073150043485084]
Federated Graph Learning (FGL) has emerged as a promising way to learn high-quality representations from distributed graph data.
We propose a Hierarchical Federated Graph Learning framework for cross-silo cross-device FGL.
Specifically, we devise a unified hierarchical architecture to safeguard federated GNN training on heterogeneous clients.
arXiv Detail & Related papers (2024-06-15T12:34:40Z) - FedGTA: Topology-aware Averaging for Federated Graph Learning [44.11777886421429]
Federated Graph Learning (FGL) is a distributed machine learning paradigm that enables collaborative training on large-scale subgraphs.
Most FGL optimization strategies ignore graph structure, resulting in unsatisfactory performance and slow convergence.
We propose Federated Graph Topology-aware Aggregation (FedGTA), a personalized optimization strategy that optimizes through topology-aware local smoothing confidence and mixed neighbor features.
arXiv Detail & Related papers (2024-01-22T08:31:53Z) - Unlocking the Potential of Prompt-Tuning in Bridging Generalized and
Personalized Federated Learning [49.72857433721424]
Vision Transformers (ViT) and Visual Prompt Tuning (VPT) achieve state-of-the-art performance with improved efficiency in various computer vision tasks.
We present a novel algorithm, SGPT, that integrates Generalized FL (GFL) and Personalized FL (PFL) approaches by employing a unique combination of both shared and group-specific prompts.
arXiv Detail & Related papers (2023-10-27T17:22:09Z) - FederatedScope-LLM: A Comprehensive Package for Fine-tuning Large
Language Models in Federated Learning [70.38817963253034]
This paper first discusses the challenges of federated fine-tuning of LLMs and introduces our package FS-LLM as a main contribution.
We provide comprehensive federated parameter-efficient fine-tuning algorithm implementations and versatile programming interfaces for future extension in FL scenarios.
We conduct extensive experiments to validate the effectiveness of FS-LLM and benchmark advanced LLMs with state-of-the-art parameter-efficient fine-tuning algorithms in FL settings.
arXiv Detail & Related papers (2023-09-01T09:40:36Z) - FederatedScope: A Comprehensive and Flexible Federated Learning Platform
via Message Passing [63.87056362712879]
We propose a novel and comprehensive federated learning platform, named FederatedScope, which is based on a message-oriented framework.
Compared to a procedural framework, the proposed message-oriented framework is more flexible for expressing heterogeneous message exchange.
We conduct a series of experiments on the provided easy-to-use and comprehensive FL benchmarks to validate the correctness and efficiency of FederatedScope.
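The idea of a message-oriented framework is that server and clients do not follow a fixed procedure; each worker registers handlers keyed on message type, so heterogeneous protocols become different handler tables. A minimal sketch of this pattern follows; these are not FederatedScope's actual classes, and every name here is an illustrative assumption.

```python
from dataclasses import dataclass
from collections import deque

@dataclass
class Message:
    # The basic unit of exchange in a message-oriented FL framework.
    msg_type: str          # e.g. "model_para", "update"
    sender: int
    receiver: int
    content: object = None

class Worker:
    # Server and clients share one class; behavior differs only in
    # which handlers each instance registers.
    def __init__(self, wid, bus):
        self.wid, self.bus, self.handlers = wid, bus, {}

    def register(self, msg_type, fn):
        self.handlers[msg_type] = fn

    def send(self, msg_type, receiver, content=None):
        self.bus.append(Message(msg_type, self.wid, receiver, content))

    def handle(self, msg):
        self.handlers[msg.msg_type](msg)

def run(workers, bus):
    # Deliver queued messages until the bus drains.
    while bus:
        msg = bus.popleft()
        workers[msg.receiver].handle(msg)

# Demo round: server (id 0) broadcasts parameters, client (id 1) replies.
bus = deque()
server, client = Worker(0, bus), Worker(1, bus)
received = []
client.register("model_para", lambda m: client.send("update", 0, m.content + 1))
server.register("update", lambda m: received.append(m.content))
server.send("model_para", 1, 41)
run({0: server, 1: client}, bus)
print(received)  # [42]
```

Because new behaviors are added by registering handlers rather than editing a fixed training loop, asynchronous or personalized protocols only require new message types.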
arXiv Detail & Related papers (2022-04-11T11:24:21Z) - Fine-tuning Global Model via Data-Free Knowledge Distillation for
Non-IID Federated Learning [86.59588262014456]
Federated Learning (FL) is an emerging distributed learning paradigm under privacy constraint.
We propose a data-free knowledge distillation method to fine-tune the global model in the server (FedFTG).
Our FedFTG significantly outperforms the state-of-the-art (SOTA) FL algorithms and can serve as a strong plugin for enhancing FedAvg, FedProx, FedDyn, and SCAFFOLD.
arXiv Detail & Related papers (2022-03-17T11:18:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.