Rethinking Personalized Federated Learning with Clustering-based Dynamic
Graph Propagation
- URL: http://arxiv.org/abs/2401.15874v1
- Date: Mon, 29 Jan 2024 04:14:02 GMT
- Title: Rethinking Personalized Federated Learning with Clustering-based Dynamic
Graph Propagation
- Authors: Jiaqi Wang, Yuzhong Chen, Yuhang Wu, Mahashweta Das, Hao Yang,
Fenglong Ma
- Abstract summary: We propose a simple yet effective personalized federated learning framework.
We group clients into multiple clusters based on their model training status and data distribution on the server side.
We conduct experiments on three image benchmark datasets and create synthetic structured datasets with three types of topologies.
- Score: 48.08348593449897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Most existing personalized federated learning approaches are based on
intricate designs, which often require complex implementation and tuning. In
order to address this limitation, we propose a simple yet effective
personalized federated learning framework. Specifically, during each
communication round, we group clients into multiple clusters based on their
model training status and data distribution on the server side. We then
consider each cluster center as a node equipped with model parameters and
construct a graph that connects these nodes using weighted edges. Additionally,
we update the model parameters at each node by propagating information across
the entire graph. Subsequently, we design a precise personalized model
distribution strategy to allow clients to obtain the most suitable model from
the server side. We conduct experiments on three image benchmark datasets and
create synthetic structured datasets with three types of topologies.
Experimental results demonstrate the effectiveness of the proposed work.
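As a rough illustration of the round described above, the following is a minimal server-side sketch in Python. The use of k-means over flattened client parameters, the similarity-based edge weights, and the mixing coefficient `alpha` are all assumptions made for the sake of a runnable example, not the authors' implementation.

```python
import numpy as np
from sklearn.cluster import KMeans

def server_round(client_params, n_clusters=3, alpha=0.5):
    """One communication round: cluster clients, build a weighted graph
    over cluster centers, propagate parameters across the graph, and
    return a personalized model for each client.

    client_params: (n_clients, dim) array of flattened client models;
    clustering on raw parameters is an illustrative proxy for the
    paper's "training status and data distribution" signal.
    """
    # 1. Group clients into clusters on the server side.
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(client_params)
    centers = km.cluster_centers_                  # each node holds model parameters

    # 2. Connect cluster-center nodes with weighted edges
    #    (distance-based similarity, row-normalized).
    dist = np.linalg.norm(centers[:, None] - centers[None, :], axis=-1)
    w = np.exp(-dist)                              # closer centers -> stronger edges
    np.fill_diagonal(w, 0.0)
    w /= w.sum(axis=1, keepdims=True)

    # 3. Propagate information across the whole graph: each node mixes
    #    its own parameters with a weighted average of its neighbors.
    propagated = (1 - alpha) * centers + alpha * (w @ centers)

    # 4. Personalized distribution: each client receives the parameters
    #    of the node its cluster maps to.
    return propagated[km.labels_]

# Example: 10 clients, 100-dimensional flattened models.
personalized = server_round(np.random.randn(10, 100))
print(personalized.shape)  # (10, 100)
```

Step 4 is the simplest possible reading of the "precise personalized model distribution" strategy: each client obtains the propagated parameters of its own cluster's node.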
Related papers
- GraphFM: A Scalable Framework for Multi-Graph Pretraining [2.882104808886318]
We introduce a scalable multi-graph multi-task pretraining approach specifically tailored for node classification tasks across diverse graph datasets from different domains.
We demonstrate the efficacy of our approach by training a model on 152 different graph datasets comprising over 7.4 million nodes and 189 million edges.
Our results show that pretraining on a diverse array of real and synthetic graphs improves the model's adaptability and stability, while performing competitively with state-of-the-art specialist models.
arXiv Detail & Related papers (2024-07-16T16:51:43Z) - FedSheafHN: Personalized Federated Learning on Graph-structured Data [22.825083541211168]
We propose a model called FedSheafHN, which embeds each client's local subgraph into a server-constructed collaboration graph.
Our model improves the integration and interpretation of complex client characteristics.
It also achieves fast model convergence and effective generalization to new clients. A hedged sketch of the server-constructed collaboration graph follows below.
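The sketch assumes each client is represented by a precomputed embedding of its local subgraph; the embedding step and the similarity threshold are assumptions, not details from the paper.

```python
import numpy as np

def build_collaboration_graph(client_embeddings, threshold=0.5):
    """Connect clients whose (assumed precomputed) local-subgraph
    embeddings are similar; returns a binary adjacency matrix."""
    e = client_embeddings / np.linalg.norm(client_embeddings, axis=1, keepdims=True)
    sim = e @ e.T                          # pairwise cosine similarity
    adj = (sim > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)             # no self-loops
    return adj
```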
arXiv Detail & Related papers (2024-05-25T04:51:41Z) - Learn What You Need in Personalized Federated Learning [53.83081622573734]
Learn2pFed is a novel algorithm-unrolling-based personalized federated learning framework.
We show that Learn2pFed significantly outperforms previous personalized federated learning methods.
arXiv Detail & Related papers (2024-01-16T12:45:15Z) - Cross-Silo Federated Learning Across Divergent Domains with Iterative Parameter Alignment [4.95475852994362]
Federated learning is a method for training a machine learning model across remote clients.
We reformulate the typical federated learning setup to learn N models optimized for a common objective.
We find that the technique achieves competitive results on a variety of data partitions compared to state-of-the-art approaches.
arXiv Detail & Related papers (2023-11-08T16:42:14Z) - GrannGAN: Graph annotation generative adversarial networks [72.66289932625742]
We consider the problem of modelling high-dimensional distributions and generating new examples of data with complex relational feature structure coherent with a graph skeleton.
The model we propose tackles the problem of generating the data features constrained by the specific graph structure of each data point by splitting the task into two phases.
In the first phase it models the distribution of features associated with the nodes of the given graph; in the second it generates the edge features conditioned on the node features.
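The two-phase split can be sketched with placeholder generators; in GrannGAN both phases are adversarially trained networks, whereas the Gaussian sampler and additive edge conditioning below are stand-ins:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_node_features(adjacency, dim=8):
    # Phase 1 (placeholder): sample a feature vector for every node of
    # the fixed graph skeleton.
    return rng.normal(size=(adjacency.shape[0], dim))

def generate_edge_features(adjacency, node_feats):
    # Phase 2 (placeholder): one feature per existing edge, conditioned
    # on the features of its two endpoint nodes.
    edges = np.argwhere(adjacency)
    return {(int(i), int(j)): node_feats[i] + node_feats[j] for i, j in edges}

# Two-phase generation over a fixed 3-node path skeleton.
adj = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
node_feats = generate_node_features(adj)
edge_feats = generate_edge_features(adj, node_feats)
```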
arXiv Detail & Related papers (2022-12-01T11:49:07Z) - Efficient Automatic Machine Learning via Design Graphs [72.85976749396745]
We propose FALCON, an efficient sample-based method to search for the optimal model design.
FALCON features 1) a task-agnostic module, which performs message passing on the design graph via a Graph Neural Network (GNN), and 2) a task-specific module, which conducts label propagation of the known model performance information.
We empirically show that FALCON can efficiently obtain the well-performing designs for each task using only 30 explored nodes.
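The task-specific module can be illustrated with a generic clamped label-propagation loop over the design graph; the mean initialization and step count below are assumptions, not FALCON's actual settings:

```python
import numpy as np

def propagate_performance(adj, perf, known, n_steps=10):
    """Spread known model-performance values over a design graph so that
    unexplored designs inherit estimates from their neighbors.

    adj:   (n, n) nonnegative adjacency matrix of the design graph
    perf:  (n,) performance values (only entries where known=True are used)
    known: (n,) boolean mask of explored designs
    """
    w = adj / adj.sum(axis=1, keepdims=True)          # row-normalize
    est = np.where(known, perf, perf[known].mean())   # init unknowns at the mean
    for _ in range(n_steps):
        est = w @ est                                 # one propagation step
        est[known] = perf[known]                      # clamp observed designs
    return est
```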
arXiv Detail & Related papers (2022-10-21T21:25:59Z) - Graph-Assisted Communication-Efficient Ensemble Federated Learning [12.538755088321404]
Communication efficiency is a necessity in federated learning due to limited communication bandwidth.
The server selects a subset of pre-trained models to construct the ensemble model based on the structure of a graph.
Only the selected models are transmitted to the clients, such that certain budget constraints are not violated.
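For concreteness, a greedy score-per-cost heuristic is sketched below as a stand-in for the graph-based selection; the scoring and cost model are assumptions, since the paper derives the selection from the graph structure itself:

```python
def select_ensemble(scores, costs, budget):
    """Pick pre-trained models for the ensemble, highest score-per-cost
    first, until adding another model would exceed the communication
    budget; only the chosen models are then sent to clients."""
    order = sorted(range(len(scores)),
                   key=lambda i: scores[i] / costs[i], reverse=True)
    chosen, spent = [], 0.0
    for i in order:
        if spent + costs[i] <= budget:
            chosen.append(i)
            spent += costs[i]
    return chosen

print(select_ensemble(scores=[0.9, 0.8, 0.6], costs=[5.0, 2.0, 1.0], budget=4.0))
```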
arXiv Detail & Related papers (2022-02-27T20:25:44Z) - Improving Label Quality by Jointly Modeling Items and Annotators [68.8204255655161]
We propose a fully Bayesian framework for learning ground truth labels from noisy annotators.
Our framework ensures scalability by factoring a generative, Bayesian soft clustering model over label distributions into the classic Dawid and Skene joint annotator-data model.
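For reference, here is a minimal EM implementation of the classic Dawid and Skene model that the framework factors over; this is only the base joint annotator-data model, not the paper's Bayesian soft-clustering extension:

```python
import numpy as np

def dawid_skene(labels, n_classes, n_iter=50):
    """Minimal EM for the Dawid-Skene annotator model.
    labels: (n_items, n_annotators) int array, -1 where missing."""
    n_items, n_annot = labels.shape
    # Initialize per-item class posteriors from vote counts.
    post = np.full((n_items, n_classes), 1e-6)
    for i, a in np.argwhere(labels >= 0):
        post[i, labels[i, a]] += 1.0
    post /= post.sum(axis=1, keepdims=True)

    for _ in range(n_iter):
        # M-step: class prior and per-annotator confusion matrices.
        prior = post.mean(axis=0)
        conf = np.full((n_annot, n_classes, n_classes), 1e-6)
        for i, a in np.argwhere(labels >= 0):
            conf[a, :, labels[i, a]] += post[i]
        conf /= conf.sum(axis=2, keepdims=True)
        # E-step: recompute posteriors under the current model.
        log_post = np.tile(np.log(prior), (n_items, 1))
        for i, a in np.argwhere(labels >= 0):
            log_post[i] += np.log(conf[a, :, labels[i, a]])
        post = np.exp(log_post - log_post.max(axis=1, keepdims=True))
        post /= post.sum(axis=1, keepdims=True)
    return post  # (n_items, n_classes) inferred label distributions
```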
arXiv Detail & Related papers (2021-06-20T02:15:20Z) - Personalized Federated Learning by Structured and Unstructured Pruning under Data Heterogeneity [3.291862617649511]
We propose a new approach for obtaining a personalized model from a client-level objective.
To realize this personalization, we find a small subnetwork for each client.
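One concrete reading of the subnetwork idea is an unstructured magnitude-pruning mask over the global weights; the keep ratio below is arbitrary, and the paper also studies structured pruning:

```python
import numpy as np

def client_subnetwork_mask(weights, keep_ratio=0.3):
    """Keep only the largest-magnitude fraction of the global weights,
    yielding a per-client binary mask (unstructured pruning)."""
    flat = np.abs(weights).ravel()
    k = max(1, int(keep_ratio * flat.size))
    thresh = np.partition(flat, -k)[-k]        # k-th largest magnitude
    return (np.abs(weights) >= thresh).astype(float)

global_w = np.random.randn(4, 4)
mask = client_subnetwork_mask(global_w)
client_w = global_w * mask                     # the client trains only these weights
```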
arXiv Detail & Related papers (2021-05-02T22:10:46Z) - Pre-Trained Models for Heterogeneous Information Networks [57.78194356302626]
We propose a self-supervised pre-training and fine-tuning framework, PF-HIN, to capture the features of a heterogeneous information network.
PF-HIN consistently and significantly outperforms state-of-the-art alternatives on a range of downstream tasks across four datasets.
arXiv Detail & Related papers (2020-07-07T03:36:28Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.