M3FGM: A node masking and multi-granularity message passing-based federated graph model for spatial-temporal data prediction
- URL: http://arxiv.org/abs/2210.16193v3
- Date: Thu, 7 Sep 2023 08:57:10 GMT
- Title: M3FGM: A node masking and multi-granularity message passing-based federated graph model for spatial-temporal data prediction
- Authors: Yuxing Tian, Zheng Liu, Yanwen Qu, Song Li, Jiachi Luo
- Abstract summary: This paper proposes a new GNN-oriented split federated learning method, named node Masking and Multi-granularity Message passing-based Federated Graph Model (M$^3$FGM).
For the first issue, the server model of M$3$FGM employs a MaskNode layer to simulate the case of clients being offline.
We also redesign the decoder of the client model using a dual-sub-decoders structure so that each client model can use its local data to predict independently when offline.
- Score: 6.9141842767826605
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Researchers are addressing the challenges of spatial-temporal prediction by
combining Federated Learning (FL) and graph models under privacy and security
constraints. To make better use of the power of graph models, some studies also
incorporate Split Learning (SL). However, there are
still several issues left unattended: 1) Clients might not be able to access
the server during inference phase; 2) The graph of clients designed manually in
the server model may not reveal the proper relationship between clients. This
paper proposes a new GNN-oriented split federated learning method, named node
{\bfseries M}asking and {\bfseries M}ulti-granularity {\bfseries M}essage
passing-based Federated Graph Model (M$^3$FGM) for the above issues. For the
first issue, the server model of M$^3$FGM employs a MaskNode layer to simulate
the case of clients being offline. We also redesign the decoder of the client
model using a dual-sub-decoders structure so that each client model can use its
local data to predict independently when offline. As for the second issue, a
new GNN layer named Multi-Granularity Message Passing (MGMP) layer enables each
client node to perceive global and local information. We conducted extensive
experiments in two different scenarios on two real traffic datasets. Results
show that M$^3$FGM outperforms the baselines and variant models, achieving the
best results on both datasets and in both scenarios.
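The MaskNode idea described above can be illustrated with a minimal sketch: during training, randomly zero out whole client nodes' feature vectors so the server model learns to cope with clients that are offline at inference time. The masking rate and the zero-fill behavior here are illustrative assumptions, not the paper's exact layer definition.

```python
import random

def mask_nodes(node_features, p_offline=0.2, seed=None):
    """Sketch of a MaskNode-style layer: each client node's feature
    vector is zeroed out with probability p_offline to simulate that
    client being offline (rate and zero-fill are assumptions)."""
    rng = random.Random(seed)
    masked = []
    for feats in node_features:
        if rng.random() < p_offline:
            masked.append([0.0] * len(feats))  # client treated as offline
        else:
            masked.append(list(feats))
    return masked
```

Applied every training step, this exposes the server model to many random offline patterns, so no single client's presence becomes essential.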
Related papers
- Personalized federated learning based on feature fusion [2.943623084019036]
Federated learning enables distributed clients to collaborate on training while storing their data locally to protect client privacy.
We propose a personalized federated learning approach called pFedPM.
In our process, we replace traditional gradient uploading with feature uploading, which helps reduce communication costs and allows for heterogeneous client models.
arXiv Detail & Related papers (2024-06-24T12:16:51Z)
- FedSheafHN: Personalized Federated Learning on Graph-structured Data [22.825083541211168]
We propose a model called FedSheafHN, which embeds each client's local subgraph into a server-constructed collaboration graph.
Our model improves the integration and interpretation of complex client characteristics.
It also has fast model convergence and effective new clients generalization.
arXiv Detail & Related papers (2024-05-25T04:51:41Z)
- FedGT: Federated Node Classification with Scalable Graph Transformer [27.50698154862779]
We propose a scalable Federated Graph Transformer (FedGT) in the paper.
FedGT computes clients' similarity based on the aligned global nodes with optimal transport.
arXiv Detail & Related papers (2024-01-26T21:02:36Z)
- Replica Tree-based Federated Learning using Limited Data [6.572149681197959]
In this work, we propose a novel federated learning framework, named RepTreeFL.
At the core of the solution is the concept of a replica, where we replicate each participating client by copying its model architecture and perturbing its local data distribution.
Our approach enables learning from limited data and a small number of clients by aggregating a larger number of models with diverse data distributions.
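The replica concept above can be sketched as copying a client's model architecture while perturbing its local data. The Gaussian jitter used here is an illustrative choice; the paper's actual perturbation scheme may differ.

```python
import random

def make_replicas(dataset, num_replicas=3, noise=0.1, seed=0):
    """Sketch of RepTreeFL-style replicas: each replica reuses the
    client's architecture but trains on a perturbed copy of the
    local data (Gaussian jitter is an illustrative assumption)."""
    rng = random.Random(seed)
    replicas = []
    for _ in range(num_replicas):
        replicas.append([
            [x + rng.gauss(0.0, noise) for x in sample]
            for sample in dataset
        ])
    return replicas
```

Aggregating models trained on these perturbed copies yields more diverse updates than the small original client pool alone could provide.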
arXiv Detail & Related papers (2023-12-28T17:47:25Z)
- FedRA: A Random Allocation Strategy for Federated Tuning to Unleash the Power of Heterogeneous Clients [50.13097183691517]
In real-world federated scenarios, there often exist a multitude of heterogeneous clients with varying computation and communication resources.
We propose a novel federated tuning algorithm, FedRA.
In each communication round, FedRA randomly generates an allocation matrix.
It reorganizes a small number of layers from the original model based on the allocation matrix and fine-tunes using adapters.
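The allocation matrix described above can be sketched as a binary clients-by-layers matrix in which each row marks the subset of layers a client receives this round. The sampling scheme and parameter names here are illustrative assumptions, not FedRA's exact procedure.

```python
import random

def random_allocation(num_clients, num_layers, layers_per_client, seed=0):
    """Sketch of a FedRA-style random allocation matrix: row i is a
    0/1 vector marking which model layers client i fine-tunes this
    communication round (sampling scheme is an assumption)."""
    rng = random.Random(seed)
    matrix = []
    for _ in range(num_clients):
        chosen = set(rng.sample(range(num_layers), layers_per_client))
        matrix.append([1 if j in chosen else 0 for j in range(num_layers)])
    return matrix
```

Resource-poor clients can be given fewer layers per round while still contributing updates to the shared model.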
arXiv Detail & Related papers (2023-11-19T04:43:16Z)
- Prototype Helps Federated Learning: Towards Faster Convergence [38.517903009319994]
Federated learning (FL) is a distributed machine learning technique in which multiple clients cooperate to train a shared model without exchanging their raw data.
In this paper, a prototype-based federated learning framework is proposed, which can achieve better inference performance with only a few changes to the last global iteration of the typical federated learning process.
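A common way to realize the prototype idea above is to take the mean embedding of each class; clients can then exchange these compact prototypes instead of full model updates. This is an illustrative sketch, not the paper's exact protocol.

```python
def class_prototypes(embeddings, labels):
    """Sketch of prototype computation: the prototype of a class is
    the mean of its sample embeddings (plain Python lists here for
    illustration; the aggregation protocol is an assumption)."""
    sums, counts = {}, {}
    for emb, y in zip(embeddings, labels):
        if y not in sums:
            sums[y] = [0.0] * len(emb)
            counts[y] = 0
        sums[y] = [s + e for s, e in zip(sums[y], emb)]
        counts[y] += 1
    return {y: [s / counts[y] for s in sums[y]] for y in sums}
```

Because a prototype is just one vector per class, it is far cheaper to communicate than gradients or weights.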
arXiv Detail & Related papers (2023-03-22T04:06:29Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- MGAE: Masked Autoencoders for Self-Supervised Learning on Graphs [55.66953093401889]
A masked graph autoencoder (MGAE) framework is proposed to perform effective learning on graph-structured data.
Taking insights from self-supervised learning, we randomly mask a large proportion of edges and try to reconstruct these missing edges during training.
arXiv Detail & Related papers (2022-01-07T16:48:07Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
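The FedAvg aggregation that the EM view above recovers can be sketched as a data-size-weighted mean of client parameter vectors. Plain Python lists are used here for illustration; real implementations operate on model tensors.

```python
def fed_avg(client_params, client_sizes):
    """Sketch of FedAvg aggregation: the server replaces the global
    parameters with the mean of client parameter vectors, weighted
    by each client's local dataset size."""
    total = sum(client_sizes)
    dim = len(client_params[0])
    return [
        sum(w[i] * n for w, n in zip(client_params, client_sizes)) / total
        for i in range(dim)
    ]
```

Under the paper's hierarchical-model view, this weighted mean is the M-step update under simple Gaussian priors with hard EM.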
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
- Federated Unsupervised Representation Learning [56.715917111878106]
We formulate a new problem in federated learning called Federated Unsupervised Representation Learning (FURL) to learn a common representation model without supervision.
FedCA is composed of two key modules: a dictionary module, which aggregates the representations of samples from each client and shares them with all clients to keep the representation space consistent, and an alignment module, which aligns each client's representations with a base model trained on public data.
arXiv Detail & Related papers (2020-10-18T13:28:30Z)
- Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on each of these tasks, on four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences.