User-Centric Federated Learning: Trading off Wireless Resources for
Personalization
- URL: http://arxiv.org/abs/2304.12930v1
- Date: Tue, 25 Apr 2023 15:45:37 GMT
- Title: User-Centric Federated Learning: Trading off Wireless Resources for
Personalization
- Authors: Mohamad Mestoukirdi, Matteo Zecchin, David Gesbert, Qianrui Li
- Abstract summary: In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm's convergence time and reduces the generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
- Score: 18.38078866145659
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Statistical heterogeneity across clients in a Federated Learning (FL) system
increases the algorithm convergence time and reduces the generalization
performance, resulting in a large communication overhead in return for a poor
model. To tackle the above problems without violating the privacy constraints
that FL imposes, personalized FL methods have to couple statistically similar
clients without directly accessing their data in order to guarantee a
privacy-preserving transfer. In this work, we design user-centric aggregation
rules at the parameter server (PS) that are based on readily available gradient
information and are capable of producing personalized models for each FL
client. The proposed aggregation rules are inspired by an upper bound of the
weighted aggregate empirical risk minimizer. We further derive a
communication-efficient variant based on user clustering, which greatly
enhances its applicability to communication-constrained systems. Our algorithm
outperforms popular personalized FL baselines in terms of average accuracy,
worst node performance, and training communication overhead.
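The aggregation rule lends itself to a compact illustration: each client receives its own model, formed as a weighted average of all clients' contributions, with weights derived from readily available gradient information. Below is a minimal Python sketch; the softmax over cosine similarities and the temperature parameter are illustrative stand-ins for the paper's bound-derived weights, and all function names are hypothetical.

```python
import numpy as np

def user_centric_aggregate(grads, models, temp=1.0):
    """Form one personalized model per client from gradient similarity.

    grads:  (K, d) array, one flattened gradient per client
    models: (K, d) array, one flattened local model per client
    Returns a (K, d) array of personalized models.

    The softmax over cosine similarities is an illustrative choice; the
    paper derives its weights from an upper bound on the weighted
    aggregate empirical risk, which is not reproduced here.
    """
    norms = np.linalg.norm(grads, axis=1, keepdims=True) + 1e-12
    g = grads / norms                           # unit-norm gradients
    sim = g @ g.T                               # (K, K) cosine similarities
    logits = sim / temp
    logits -= logits.max(axis=1, keepdims=True) # numerical stability
    w = np.exp(logits)
    w /= w.sum(axis=1, keepdims=True)           # row k: weights client k assigns
    return w @ models                           # personalized model per client

# toy usage: 4 clients, 10-dimensional models
rng = np.random.default_rng(0)
grads = rng.normal(size=(4, 10))
models = rng.normal(size=(4, 10))
personalized = user_centric_aggregate(grads, models)
print(personalized.shape)  # (4, 10)
```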
Related papers
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
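For contrast with FedAF's aggregation-free design, here is a minimal sketch of the aggregate-then-adapt loop it departs from; the least-squares local objective and all names are illustrative placeholders, not taken from the paper.

```python
import numpy as np

def local_adapt(global_model, data, lr=0.1, steps=5):
    """Client side: adapt the received global model to local data
    (a least-squares objective stands in for real local training)."""
    X, y = data
    w = global_model.copy()
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad
    return w

# server side: aggregate-then-adapt rounds (FedAvg-style)
rng = np.random.default_rng(0)
clients = [(rng.normal(size=(20, 5)), rng.normal(size=20)) for _ in range(3)]
global_model = np.zeros(5)
for _ in range(10):
    local_models = [local_adapt(global_model, d) for d in clients]
    global_model = np.mean(local_models, axis=0)  # server aggregation
```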
- Utilizing Free Clients in Federated Learning for Focused Model Enhancement [9.370655190768163]
Federated Learning (FL) is a distributed machine learning approach to learn models on decentralized heterogeneous data.
We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address this challenge.
arXiv Detail & Related papers (2023-10-06T18:23:40Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating the results of local training.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement over the top-performing method with less than 15% of the communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses the additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
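The GMM ingredient of FedGMM is easy to picture: fitting a mixture to a client's inputs yields a density whose log-likelihood can both quantify uncertainty and flag novel samples. A hedged scikit-learn sketch follows; the component count and the rejection threshold are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
client_inputs = rng.normal(loc=0.0, size=(500, 8))  # one client's features

# fit a per-client mixture model of the input distribution
gmm = GaussianMixture(n_components=3, random_state=0).fit(client_inputs)

# uncertainty / novel-sample detection via the mixture log-likelihood
threshold = np.quantile(gmm.score_samples(client_inputs), 0.01)
new_samples = rng.normal(loc=5.0, size=(10, 8))      # shifted, likely novel
is_novel = gmm.score_samples(new_samples) < threshold
print(is_novel)  # mostly True for the shifted samples
```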
- FedFOR: Stateless Heterogeneous Federated Learning with First-Order Regularization [24.32029125031383]
Federated Learning (FL) seeks to distribute model training across local clients without collecting data in a centralized data-center.
We propose incorporating a first-order approximation of the global data distribution into local objectives, which intuitively penalizes updates in the opposite direction of the global update.
Our approach does not impose unrealistic limits on the client size, enabling learning from a large number of clients as is typical in most FL applications.
arXiv Detail & Related papers (2022-09-21T17:57:20Z)
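One way to read FedFOR's penalty is that a local step should not move against the last global update. The sketch below realizes that reading by shrinking the opposing component of each local step; the paper's exact objective differs, and the projection form and coefficient here are assumptions.

```python
import numpy as np

def regularized_local_step(w, grad_local, global_update, lr=0.1, lam=0.5):
    """One local step penalized for opposing the global update direction.

    grad_local:    gradient of the client's own loss at w
    global_update: last round's global model update
    lam in [0, 1]: how strongly to shrink the component of the local
                   step that opposes the global direction (illustrative).
    """
    step = -lr * grad_local
    g = global_update / (np.linalg.norm(global_update) + 1e-12)
    proj = np.dot(step, g)
    if proj < 0:                # local step points against the global update
        step -= lam * proj * g  # shrink the opposing component
    return w + step

# toy usage
rng = np.random.default_rng(0)
w = rng.normal(size=5)
w_next = regularized_local_step(w, rng.normal(size=5), rng.normal(size=5))
```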
- Sparse Federated Learning with Hierarchical Personalized Models [24.763028713043468]
Federated learning (FL) can achieve privacy-safe and reliable collaborative training without collecting users' private data.
We propose a personalized FL algorithm using a hierarchical proximal mapping based on the Moreau envelope, named sparse federated learning with hierarchical personalized models (sFedHP).
A continuously differentiable approximation of the L1-norm is also used as the sparse constraint to reduce the communication cost.
arXiv Detail & Related papers (2022-03-25T09:06:42Z)
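The sparse constraint in sFedHP can be illustrated with a standard smoothing trick: replace |w_i| with sqrt(w_i^2 + eps), which is continuously differentiable and approaches the L1-norm as eps goes to 0. The snippet below uses this common surrogate; the paper's specific approximation may differ.

```python
import numpy as np

def smooth_l1(w, eps=1e-3):
    """Continuously differentiable approximation of the L1-norm."""
    return np.sum(np.sqrt(w**2 + eps))

def smooth_l1_grad(w, eps=1e-3):
    """Gradient of the surrogate; well-defined even at w = 0."""
    return w / np.sqrt(w**2 + eps)

w = np.array([0.0, -2.0, 0.5])
print(smooth_l1(w), np.abs(w).sum())  # surrogate tracks the true L1-norm
print(smooth_l1_grad(w))              # smooth, sign-like gradient
```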
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Gradient Masked Averaging for Federated Learning [24.687254139644736]
Federated learning allows a large number of clients with heterogeneous data to coordinate learning of a unified global model.
Standard FL algorithms involve averaging of model parameters or gradient updates to approximate the global model at the server.
We propose a gradient masked averaging approach for FL as an alternative to the standard averaging of client updates.
arXiv Detail & Related papers (2022-01-28T08:42:43Z)
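A plausible instantiation of gradient masking keeps only those coordinates on which client updates agree in sign, in the spirit of sign-agreement masks; the paper's exact rule may differ. A minimal numpy sketch:

```python
import numpy as np

def masked_average(updates, agree_frac=1.0):
    """Average client updates, zeroing coordinates with sign disagreement.

    updates:    (K, d) array of client updates
    agree_frac: fraction of clients that must share the majority sign
                for a coordinate to pass the mask (illustrative knob).
    """
    signs = np.sign(updates)
    agreement = np.abs(signs.sum(axis=0)) / len(updates)  # in [0, 1]
    mask = (agreement >= agree_frac).astype(updates.dtype)
    return mask * updates.mean(axis=0)

# toy usage: 3 clients, 4 coordinates
u = np.array([[ 1.0, -1.0, 0.5,  2.0],
              [ 2.0,  1.0, 0.7, -1.0],
              [ 0.5, -2.0, 0.1,  1.5]])
print(masked_average(u))  # only fully sign-consistent coordinates survive
```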
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Personalized Federated Learning via Maximizing Correlation with Sparse and Hierarchical Extensions [14.862798952297105]
Federated Learning (FL) is a collaborative machine learning technique to train a global model without obtaining clients' private data.
We propose pFedMac, a novel personalized federated learning method based on maximizing correlation.
We show that pFedMac performs better than L2-norm distance-based personalization methods.
arXiv Detail & Related papers (2021-07-12T11:43:40Z)
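The contrast pFedMac draws can be caricatured as two proximal-style local objectives: an L2 term pulls the personal model toward the global one, while a correlation term rewards alignment (inner product) with it. Both objectives below are illustrative simplifications, not the paper's formulations.

```python
import numpy as np

def l2_personal_obj(w, theta, local_loss, lam=1.0):
    """L2-distance personalization: stay close to the global model theta."""
    return local_loss(w) + lam / 2 * np.sum((w - theta) ** 2)

def corr_personal_obj(w, theta, local_loss, lam=1.0):
    """Correlation-maximizing personalization: reward alignment with theta."""
    return local_loss(w) - lam * np.dot(w, theta)

# toy comparison on a quadratic local loss
theta = np.array([1.0, -1.0])
local_loss = lambda w: np.sum((w - np.array([2.0, 0.0])) ** 2)
w = np.array([1.5, -0.5])
print(l2_personal_obj(w, theta, local_loss))
print(corr_personal_obj(w, theta, local_loss))
```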
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
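The BLADE-FL round described above is concrete enough to sketch end to end: each client trains and broadcasts its model, one client wins the block-generation competition, and everyone aggregates from the winning block. In the schematic below, mining is simulated by drawing a random winner, and all names are hypothetical.

```python
import numpy as np

def blade_fl_round(models, local_train, rng):
    """One schematic BLADE-FL round for K fully connected clients.

    models:      list of K model vectors (each client's current model)
    local_train: callable(model) -> updated model (stands in for training)
    """
    # 1) every client trains and broadcasts its model to all others
    trained = [local_train(m) for m in models]
    # 2) clients compete to generate a block; simulate one mining winner
    winner = rng.integers(len(models))
    block = list(trained)            # the winner's block holds all models
    # 3) each client aggregates the block's models before the next round
    aggregate = np.mean(block, axis=0)
    return [aggregate.copy() for _ in models], winner

def toy_train(m):
    """Stand-in for a local training pass."""
    return 0.9 * m

# toy usage: 4 clients, 3-dimensional models
rng = np.random.default_rng(0)
models = [rng.normal(size=3) for _ in range(4)]
models, winner = blade_fl_round(models, toy_train, rng)
```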
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.