FedGrAINS: Personalized SubGraph Federated Learning with Adaptive Neighbor Sampling
- URL: http://arxiv.org/abs/2501.12592v2
- Date: Thu, 23 Jan 2025 13:03:43 GMT
- Title: FedGrAINS: Personalized SubGraph Federated Learning with Adaptive Neighbor Sampling
- Authors: Emir Ceyani, Han Xie, Baturalp Buyukates, Carl Yang, Salman Avestimehr
- Abstract summary: We propose \textit{FedGrAINS}, a novel data-adaptive and sampling-based regularization method for subgraph FL. We show that including \textit{FedGrAINS} as a regularizer consistently improves FL performance compared to baselines.
- Score: 36.314224807189575
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Graphs are crucial for modeling relational and biological data. As datasets grow larger in real-world scenarios, the risk of exposing sensitive information increases, making privacy-preserving training methods like federated learning (FL) essential to ensure data security and compliance with privacy regulations. Recently proposed personalized subgraph FL methods have become the de facto standard for training personalized Graph Neural Networks (GNNs) in a federated manner while dealing with the missing links across clients' subgraphs due to privacy restrictions. However, personalized subgraph FL faces significant challenges due to heterogeneity across client subgraphs, such as differing degree distributions among nodes, which complicates federated training of graph models. To address these challenges, we propose \textit{FedGrAINS}, a novel data-adaptive and sampling-based regularization method for subgraph FL. FedGrAINS leverages generative flow networks (GFlowNets) to evaluate node importance with respect to clients' tasks, dynamically adjusting the message-passing step in clients' GNNs. This adaptation reflects task-optimized sampling aligned with a trajectory balance objective. Experimental results demonstrate that including \textit{FedGrAINS} as a regularizer consistently improves FL performance compared to baselines that do not leverage such regularization.
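Since the abstract describes the sampler only at a high level, here is a minimal PyTorch sketch of the general idea: a learned policy scores candidate neighbors, samples a subset as a trajectory, and is trained with a trajectory balance objective. The class name, the reward definition, and the uniform treatment of the backward policy are our assumptions for illustration, not the paper's released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class NeighborSampler(nn.Module):
    """Scores candidate neighbors and samples k of them sequentially,
    accumulating the log-probability of the forward policy P_F."""
    def __init__(self, feat_dim: int, hidden: int = 32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(feat_dim, hidden), nn.ReLU(), nn.Linear(hidden, 1))
        self.log_z = nn.Parameter(torch.zeros(()))  # learnable log-partition

    def forward(self, neighbor_feats: torch.Tensor, k: int):
        logits = self.scorer(neighbor_feats).squeeze(-1)   # (n_neighbors,)
        mask = torch.zeros_like(logits, dtype=torch.bool)
        chosen, log_pf = [], torch.zeros(())
        for _ in range(min(k, logits.numel())):
            probs = F.softmax(logits.masked_fill(mask, float('-inf')), dim=0)
            idx = torch.multinomial(probs, 1).item()
            log_pf = log_pf + probs[idx].log()
            mask[idx] = True
            chosen.append(idx)
        return chosen, log_pf

def trajectory_balance_loss(sampler, log_pf, log_reward):
    # Trajectory balance (Malkin et al., 2022): (log Z + log P_F - log R)^2,
    # with the backward policy assumed uniform and folded into the reward.
    return (sampler.log_z + log_pf - log_reward) ** 2

# Toy usage: reward is high when the sampled neighborhood gives the
# anchor node a low (here, stand-in) task loss.
sampler = NeighborSampler(feat_dim=8)
neighbors = torch.randn(20, 8)
chosen, log_pf = sampler(neighbors, k=5)
task_loss = torch.rand(())            # placeholder for the client GNN's loss
loss = trajectory_balance_loss(sampler, log_pf, log_reward=-task_loss)
loss.backward()
```

In a federated setting, each client would train such a sampler alongside its GNN and feed the sampled neighborhoods into message passing.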
Related papers
- FedHERO: A Federated Learning Approach for Node Classification Task on Heterophilic Graphs [55.51300642911766]
Federated Graph Learning (FGL) empowers clients to collaboratively train graph neural networks (GNNs) in a distributed manner.
FGL methods usually require that the graph data owned by all clients be homophilic to ensure similar neighbor distribution patterns across nodes.
We propose FedHERO, an FGL framework designed to harness and share insights from heterophilic graphs effectively.
arXiv Detail & Related papers (2025-04-29T22:23:35Z) - FedAWA: Adaptive Optimization of Aggregation Weights in Federated Learning Using Client Vectors [50.131271229165165]
Federated Learning (FL) has emerged as a promising framework for distributed machine learning.
Data heterogeneity resulting from differences across user behaviors, preferences, and device characteristics poses a significant challenge for federated learning.
We propose Adaptive Weight Aggregation (FedAWA), a novel method that adaptively adjusts aggregation weights based on client vectors during the learning process.
arXiv Detail & Related papers (2025-03-20T04:49:40Z) - Robust Federated Learning in the Face of Covariate Shift: A Magnitude Pruning with Hybrid Regularization Framework for Enhanced Model Aggregation [1.519321208145928]
Federated Learning (FL) offers a promising framework for individuals aiming to collaboratively develop a shared model.
Variations in data distribution among clients can profoundly affect FL methodologies, primarily due to instabilities in the aggregation process.
We propose a novel FL framework combining per-client parameter pruning and regularization techniques to make individual clients' models more robust under aggregation.
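As a rough sketch of the pruning half of this idea (our illustration under assumed details, not the paper's released procedure), each client might zero out its smallest-magnitude weights before sending its model for aggregation:

```python
import torch

def magnitude_prune(state_dict: dict, sparsity: float = 0.3) -> dict:
    """Zero out the `sparsity` fraction of smallest-magnitude entries
    in each weight tensor before the model is sent for aggregation."""
    pruned = {}
    for name, w in state_dict.items():
        k = int(sparsity * w.numel())
        if k == 0:
            pruned[name] = w.clone()
            continue
        threshold = w.abs().flatten().kthvalue(k).values
        pruned[name] = torch.where(w.abs() > threshold, w, torch.zeros_like(w))
    return pruned

# Toy usage on a single layer's weights.
sd = {"fc.weight": torch.randn(8, 8)}
sparse_sd = magnitude_prune(sd, sparsity=0.5)
```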
arXiv Detail & Related papers (2024-12-19T16:22:37Z) - FedRGL: Robust Federated Graph Learning for Label Noise [5.296582539751589]
Federated Graph Learning (FGL) is a distributed machine learning paradigm based on graph neural networks.
We propose a federated graph learning method robust to label noise, termed FedRGL.
We show that FedRGL outperforms 12 baseline methods across various noise rates, types, and numbers of clients.
arXiv Detail & Related papers (2024-11-28T04:37:04Z) - Towards Federated Graph Learning in One-shot Communication [27.325478113745206]
Federated Graph Learning (FGL) has emerged as a promising paradigm for breaking data silos among distributed private graphs. One-shot Federated Learning (OFL) enables collaboration in a single round, but existing OFL methods are ineffective for graph data. We propose the first one-shot personalized federated graph learning method (O-pFGL) for node classification, compatible with Secure Aggregation protocols for privacy preservation.
arXiv Detail & Related papers (2024-11-18T05:59:29Z) - Federated Graph Learning with Adaptive Importance-based Sampling [22.601850857109024]
For privacy-preserving graph learning tasks involving distributed graph datasets, federated learning (FL)-based GCN (FedGCN) training is required.
Existing graph sampling-enhanced FedGCN training approaches ignore graph structural information or the dynamics of optimization, resulting in high variance and inaccurate node embeddings.
We propose Federated Adaptive Importance-based Sampling (FedAIS) to address this limitation.
FedAIS achieves comparable or up to 3.23% higher test accuracy, while reducing communication and computation costs by 91.77% and 85.59%, respectively.
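For a flavor of what importance-based sampling with an unbiasedness correction looks like in general (the score below and the with-replacement scheme are illustrative assumptions, not FedAIS's actual estimator):

```python
import torch

def sample_neighbors(importance: torch.Tensor, k: int):
    """Sample k neighbor indices with probability proportional to
    `importance`; return inverse-probability weights so the weighted
    mean of sampled messages stays unbiased."""
    probs = importance / importance.sum()
    idx = torch.multinomial(probs, k, replacement=True)
    weights = 1.0 / (probs.numel() * probs[idx])  # Horvitz-Thompson correction
    return idx, weights

# Toy usage: the weighted mean matches msgs.mean(dim=0) in expectation.
msgs = torch.randn(50, 16)                        # messages from 50 neighbors
importance = torch.rand(50) + 1e-6                # e.g. degree x gradient norm
idx, w = sample_neighbors(importance, k=10)
agg = (w.unsqueeze(1) * msgs[idx]).mean(dim=0)
```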
arXiv Detail & Related papers (2024-09-23T01:49:20Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - FedEGG: Federated Learning with Explicit Global Guidance [90.04705121816185]
Federated Learning (FL) holds great potential for diverse applications owing to its privacy-preserving nature.
Existing methods address the challenges posed by heterogeneous client data via optimization-based client constraints, adaptive client selection, or the use of pre-trained models or synthetic data.
We present FedEGG, a new FL algorithm that constructs a global guiding task using a well-defined, easy-to-converge learning task.
arXiv Detail & Related papers (2024-04-18T04:25:21Z) - Strategic Client Selection to Address Non-IIDness in HAPS-enabled FL Networks [24.10349383347469]
This study introduces a client selection strategy tailored to address non-IIDness in client data distributions.
By strategically selecting clients whose data exhibit similar patterns for participation in FL training, our approach fosters a more uniform and representative data distribution.
Our simulations demonstrate that this targeted client selection methodology significantly reduces the training loss of FL models in high-altitude platform station (HAPS) networks.
arXiv Detail & Related papers (2024-01-10T18:22:00Z) - Chasing Fairness in Graphs: A GNN Architecture Perspective [73.43111851492593]
We propose Fair Message Passing (FMP), designed within a unified optimization framework for graph neural networks (GNNs).
In FMP, aggregation is first adopted to utilize neighbors' information, and then the bias mitigation step explicitly pushes demographic group node representation centers together.
Experiments on node classification tasks demonstrate that the proposed FMP outperforms several baselines in terms of fairness and accuracy on three real-world datasets.
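The second step lends itself to a very small sketch: compute the representation center of each demographic group and penalize the distance between them. The squared-distance penalty below is our illustrative choice, not necessarily FMP's exact formulation.

```python
import torch

def group_center_penalty(h: torch.Tensor, group: torch.Tensor) -> torch.Tensor:
    """h: (num_nodes, dim) node representations; group: (num_nodes,)
    binary demographic labels (both groups assumed non-empty).
    Returns the squared distance between the two group centers."""
    center0 = h[group == 0].mean(dim=0)
    center1 = h[group == 1].mean(dim=0)
    return ((center0 - center1) ** 2).sum()

# Toy usage: add the penalty to the task loss with a fairness weight.
h = torch.randn(100, 16, requires_grad=True)
group = (torch.rand(100) > 0.5).long()
loss_fair = group_center_penalty(h, group)
loss_fair.backward()
```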
arXiv Detail & Related papers (2023-12-19T18:00:15Z) - User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases algorithm convergence time and reduces generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
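A minimal sketch of such a user-centric rule, assuming cosine similarity between client gradients and a softmax weighting (both our illustrative choices, not the paper's exact rule):

```python
import torch
import torch.nn.functional as F

def personalized_aggregate(grads, models):
    """grads, models: lists of same-shape tensors, one per client.
    Returns one personalized model per client."""
    G = torch.stack([g.flatten() for g in grads])
    sim = F.cosine_similarity(G.unsqueeze(1), G.unsqueeze(0), dim=-1)
    W = torch.softmax(sim, dim=1)          # row i: client i's mixing weights
    M = torch.stack([m.flatten() for m in models])
    return [(W @ M)[i].view_as(models[i]) for i in range(len(models))]

# Toy usage with four clients and 10-parameter "models".
grads = [torch.randn(10) for _ in range(4)]
models = [torch.randn(10) for _ in range(4)]
personalized = personalized_aggregate(grads, models)
```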
arXiv Detail & Related papers (2023-04-25T15:45:37Z) - Graph Federated Learning for CIoT Devices in Smart Home Applications [23.216140264163535]
We propose a novel Graph Signal Processing (GSP)-inspired aggregation rule based on graph filtering, dubbed "G-Fedfilt".
The proposed aggregator enables a structured flow of information based on the graph's topology.
It is capable of yielding up to 2.41% higher accuracy than FedAvg when testing the generalization of the models.
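As a sketch of aggregation through a graph filter, where information flows along the client graph's topology (the heat-kernel filter below is our illustrative choice, not necessarily G-Fedfilt's):

```python
import torch

def graph_filtered_aggregate(models: torch.Tensor, adj: torch.Tensor,
                             t: float = 1.0) -> torch.Tensor:
    """models: (n_clients, dim) flattened client models;
    adj: symmetric (n_clients, n_clients) client-similarity graph."""
    lap = torch.diag(adj.sum(dim=1)) - adj        # combinatorial Laplacian
    evals, evecs = torch.linalg.eigh(lap)         # spectral decomposition
    h = torch.exp(-t * evals)                     # low-pass heat-kernel response
    return evecs @ torch.diag(h) @ evecs.T @ models

# Toy usage: three clients on a path graph.
adj = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
models = torch.randn(3, 8)
aggregated = graph_filtered_aggregate(models, adj)
```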
arXiv Detail & Related papers (2022-12-29T17:57:19Z) - Analyzing the Effect of Sampling in GNNs on Individual Fairness [79.28449844690566]
Graph neural network (GNN)-based methods have saturated the field of recommender systems.
We extend an existing method for promoting individual fairness on graphs to support mini-batch, or sub-sample based, training of a GNN.
We show that mini-batch training facilitates individual fairness promotion by allowing local nuance to guide the process in representation learning.
arXiv Detail & Related papers (2022-09-08T16:20:25Z) - Personalized Subgraph Federated Learning [56.52903162729729]
We introduce a new subgraph FL problem, personalized subgraph FL, which focuses on the joint improvement of the interrelated local GNNs.
We propose a novel framework, FEDerated Personalized sUBgraph learning (FED-PUB), to tackle it.
We validate our FED-PUB for its subgraph FL performance on six datasets, considering both non-overlapping and overlapping subgraphs.
arXiv Detail & Related papers (2022-06-21T09:02:53Z) - Bayesian Graph Neural Networks with Adaptive Connection Sampling [62.51689735630133]
We propose a unified framework for adaptive connection sampling in graph neural networks (GNNs).
The proposed framework not only alleviates over-smoothing and over-fitting tendencies of deep GNNs, but also enables learning with uncertainty in graph analytic tasks with GNNs.
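A tiny sketch of the underlying mechanism: learnable per-edge keep probabilities, relaxed with logistic noise so they remain trainable end-to-end. The binary-concrete form below is our assumption, not the paper's exact parameterization.

```python
import torch

def sample_edge_mask(keep_logits: torch.Tensor, temp: float = 0.5):
    """keep_logits: (num_edges,) learnable parameters. Returns a soft
    keep-mask in (0, 1) per edge (binary-concrete relaxation), so the
    keep rates receive gradients through the sampled masks."""
    u = torch.rand_like(keep_logits).clamp(1e-6, 1 - 1e-6)
    noise = torch.log(u) - torch.log1p(-u)        # logistic noise
    return torch.sigmoid((keep_logits + noise) / temp)

# Toy usage: one trainable logit per edge; multiply the mask into the
# corresponding messages during aggregation.
keep_logits = torch.zeros(200, requires_grad=True)
mask = sample_edge_mask(keep_logits)
mask.sum().backward()                             # gradients flow to keep_logits
```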
arXiv Detail & Related papers (2020-06-07T07:06:35Z)