M3FGM: a node masking and multi-granularity message passing-based federated graph model for spatial-temporal data prediction
- URL: http://arxiv.org/abs/2210.16193v3
- Date: Thu, 7 Sep 2023 08:57:10 GMT
- Title: M3FGM: a node masking and multi-granularity message passing-based federated graph model for spatial-temporal data prediction
- Authors: Yuxing Tian, Zheng Liu, Yanwen Qu, Song Li, Jiachi Luo
- Abstract summary: This paper proposes a new GNN-oriented split federated learning method, named node Masking and Multi-granularity Message passing-based Federated Graph Model (M$^3$FGM).
For the first issue, the server model of M$^3$FGM employs a MaskNode layer to simulate the case of clients being offline.
We also redesign the decoder of the client model using a dual-sub-decoder structure so that each client model can use its local data to predict independently when offline.
- Score: 6.9141842767826605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers are addressing the challenges of spatial-temporal prediction by
combining Federated Learning (FL) with graph models under the constraints of
privacy and security. To make better use of the power of graph models, some
studies also combine split learning (SL). However, several issues are still
left unattended: 1) clients might not be able to access the server during the
inference phase; 2) the graph of clients designed manually in the server model
may not reveal the proper relationships between clients. This paper proposes a
new GNN-oriented split federated learning method, named node Masking and
Multi-granularity Message passing-based Federated Graph Model (M$^3$FGM), for
the above issues. For the first issue, the server model of M$^3$FGM employs a
MaskNode layer to simulate the case of clients being offline. We also redesign
the decoder of the client model using a dual-sub-decoder structure so that each
client model can use its local data to predict independently when offline. As
for the second issue, a new GNN layer named Multi-Granularity Message Passing
(MGMP) enables each client node to perceive both global and local information.
We conducted extensive experiments in two different scenarios on two real
traffic datasets. Results show that M$^3$FGM outperforms the baselines and
variant models, achieving the best results on both datasets in both scenarios.
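The mechanisms above are described only at the architecture level; the following is a minimal PyTorch sketch of how a MaskNode layer and a dual-sub-decoder client head might look. The `ClientDecoder` name, the masking rate, and the concatenation scheme are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class MaskNode(nn.Module):
    """Randomly zeroes whole client-node embeddings during training so the
    server model learns to cope with clients that are offline at inference.
    (Sketch: the actual masking distribution in M^3FGM is an assumption.)"""
    def __init__(self, mask_rate: float = 0.1):
        super().__init__()
        self.mask_rate = mask_rate

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (num_client_nodes, hidden_dim), one row per client
        if self.training:
            keep = torch.rand(h.size(0), device=h.device) >= self.mask_rate
            h = h * keep.float().unsqueeze(-1)  # offline clients contribute zeros
        return h

class ClientDecoder(nn.Module):
    """Dual-sub-decoder head: one sub-decoder consumes the local encoding plus
    the server's graph-enriched embedding; the other uses local data only,
    so the client can still predict when the server is unreachable."""
    def __init__(self, hidden_dim: int, out_dim: int):
        super().__init__()
        self.global_head = nn.Linear(2 * hidden_dim, out_dim)  # local + server
        self.local_head = nn.Linear(hidden_dim, out_dim)       # local only

    def forward(self, h_local: torch.Tensor, h_server: torch.Tensor = None):
        if h_server is None:  # offline: fall back to the local sub-decoder
            return self.local_head(h_local)
        return self.global_head(torch.cat([h_local, h_server], dim=-1))
```

At inference, a client that cannot reach the server simply calls `decoder(h_local)`, mirroring the offline case that the MaskNode layer exposes the server model to during training.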
Related papers
- FedHERO: A Federated Learning Approach for Node Classification Task on Heterophilic Graphs [55.51300642911766]
Federated Graph Learning (FGL) empowers clients to collaboratively train Graph neural networks (GNNs) in a distributed manner.
FGL methods usually require that the graph data owned by all clients is homophilic to ensure similar neighbor distribution patterns of nodes.
We propose FedHERO, an FGL framework designed to harness and share insights from heterophilic graphs effectively.
arXiv Detail & Related papers (2025-04-29T22:23:35Z)
- Federated Prototype Graph Learning [33.38948169766356]
Federated Graph Learning (FGL) has gained significant attention for its distributed training capabilities.
We propose FedPG as a general prototype-guided optimization method for the above multi-level FGL heterogeneity.
Experiments demonstrate that FedPG outperforms SOTA baselines by an average of 3.57% in accuracy while reducing communication costs by 168x.
arXiv Detail & Related papers (2025-04-13T09:21:21Z)
- Data-centric Federated Graph Learning with Large Language Models [34.224475952206404]
In federated graph learning (FGL), a complete graph is divided into multiple subgraphs stored in each client due to privacy concerns.
A pain point of FGL is the heterogeneity problem, where nodes or structures present non-IID properties among clients.
We propose a general framework that innovatively decomposes the task of large language models for FGL into two sub-tasks theoretically.
arXiv Detail & Related papers (2025-03-25T08:43:08Z)
- Federated Graph Learning with Graphless Clients [52.5629887481768]
Federated Graph Learning (FGL) is tasked with training machine learning models, such as Graph Neural Networks (GNNs)
We propose a novel framework FedGLS to tackle the problem in FGL with graphless clients.
arXiv Detail & Related papers (2024-11-13T06:54:05Z)
- One Node Per User: Node-Level Federated Learning for Graph Neural Networks [7.428431479479646]
We propose a novel framework for node-level federated graph learning.
We introduce a graph Laplacian term based on the feature vector's latent representation to regulate the user-side model updates.
arXiv Detail & Related papers (2024-09-29T02:16:07Z)
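The graph Laplacian term in the entry above is only named, not specified. A common form for such a regularizer, and a plausible reading of the summary, is the smoothness penalty tr(Z^T L Z) over the latent representations Z; the sketch below implements that standard form, with no claim that the paper uses exactly this variant.

```python
import torch

def laplacian_regularizer(z: torch.Tensor, adj: torch.Tensor) -> torch.Tensor:
    """Graph-smoothness penalty tr(Z^T L Z) with L = D - A, equivalently
    0.5 * sum_ij A_ij * ||z_i - z_j||^2: neighboring user-nodes are pushed
    toward similar latent representations.
    z:   (num_users, latent_dim) latent representation per user-node
    adj: (num_users, num_users) symmetric adjacency weights"""
    deg = torch.diag(adj.sum(dim=1))  # degree matrix D
    lap = deg - adj                   # combinatorial Laplacian L
    return torch.trace(z.t() @ lap @ z)
```

Added to a client's local loss with a small coefficient, a term like this keeps user-side updates from drifting apart across neighboring nodes.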
- Federated Graph Learning with Structure Proxy Alignment [43.13100155569234]
Federated Graph Learning (FGL) aims to learn graph learning models over graph data distributed in multiple data owners.
We propose FedSpray, a novel FGL framework that learns local class-wise structure proxies in the latent space.
Our goal is to obtain the aligned structure proxies that can serve as reliable, unbiased neighboring information for node classification.
arXiv Detail & Related papers (2024-08-18T07:32:54Z)
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
- FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in the paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z)
- FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exists a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
arXiv Detail & Related papers (2023-11-19T04:43:16Z)
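FedRA's allocation matrix is described only in outline above; the sketch below generates one such binary matrix, where each client receives a random subset of layers sized to its capacity. The function name, the per-client budget list, and the uniform sampling rule are our assumptions for illustration.

```python
import numpy as np

def random_allocation(num_clients: int, num_layers: int,
                      layer_budgets: list, rng: np.random.Generator) -> np.ndarray:
    """Binary matrix M with M[c, l] = 1 iff client c fine-tunes layer l.
    Each client draws as many distinct layers as its budget allows.
    (Illustrative: FedRA's actual sampling scheme may differ.)"""
    alloc = np.zeros((num_clients, num_layers), dtype=np.int64)
    for c, budget in enumerate(layer_budgets):
        chosen = rng.choice(num_layers, size=budget, replace=False)
        alloc[c, chosen] = 1
    return alloc

# e.g. three clients with capacity for 2, 4, and 6 layers of a 12-layer model
rng = np.random.default_rng(0)
print(random_allocation(3, 12, [2, 4, 6], rng))
```

Per the summary, the server would regenerate such a matrix each round, fine-tune the selected layers on each client using adapters, and merge the pieces back, so every layer is eventually updated by some client.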
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
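The prototype-based framework above is summarized without mechanics. In the generic prototype-FL pattern, used here purely as illustration, each client uploads per-class mean embeddings that the server averages into global prototypes, and inference is done by nearest-prototype matching.

```python
import torch

def aggregate_prototypes(client_protos):
    """Average the per-class mean embeddings uploaded by clients into
    global prototypes. client_protos: list of {class_id: tensor} dicts.
    (Generic prototype-FL pattern, not this paper's exact protocol.)"""
    sums, counts = {}, {}
    for protos in client_protos:
        for cls, p in protos.items():
            sums[cls] = sums.get(cls, 0) + p
            counts[cls] = counts.get(cls, 0) + 1
    return {cls: s / counts[cls] for cls, s in sums.items()}

def predict(embedding: torch.Tensor, global_protos: dict) -> int:
    # classify by the nearest global prototype in embedding space
    dists = {cls: torch.norm(embedding - p) for cls, p in global_protos.items()}
    return min(dists, key=dists.get)
```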
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Mixed Graph Contrastive Network for Semi-Supervised Node Classification [63.924129159538076]
We propose a novel graph contrastive learning method, termed Mixed Graph Contrastive Network (MGCN).
In our method, we improve the discriminative capability of the latent embeddings by an unperturbed augmentation strategy and a correlation reduction mechanism.
By combining the two settings, we extract rich supervision information from both the abundant nodes and the rare yet valuable labeled nodes for discriminative representation learning.
arXiv Detail & Related papers (2022-06-06T14:26:34Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well-known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
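The FedAvg-as-hard-EM correspondence claimed above can be made concrete with a one-line derivation: under a Gaussian prior on client parameters centered at the server parameter, the M-step over the server parameter is the plain average of the client solutions, i.e., the FedAvg server update (weighting clients by dataset size follows from the same algebra).

```latex
% Hard-EM M-step with client parameters \theta_k \sim \mathcal{N}(\mu, \sigma^2 I):
\mu^{*}
  = \arg\max_{\mu} \sum_{k=1}^{K} \log \mathcal{N}(\theta_k \mid \mu, \sigma^2 I)
  = \arg\min_{\mu} \sum_{k=1}^{K} \lVert \theta_k - \mu \rVert^2
  = \frac{1}{K} \sum_{k=1}^{K} \theta_k
```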
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates the representations of samples from each client and shares them with all clients for consistency of the representation space, and an alignment module, which aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.