Federated Learning Can Find Friends That Are Advantageous
- URL: http://arxiv.org/abs/2402.05050v4
- Date: Wed, 17 Jul 2024 08:49:30 GMT
- Title: Federated Learning Can Find Friends That Are Advantageous
- Authors: Nazarii Tupitsa, Samuel Horváth, Martin Takáč, Eduard Gorbunov
- Abstract summary: In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges.
We introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective.
- Score: 14.993730469216546
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In Federated Learning (FL), the distributed nature and heterogeneity of client data present both opportunities and challenges. While collaboration among clients can significantly enhance the learning process, not all collaborations are beneficial; some may even be detrimental. In this study, we introduce a novel algorithm that assigns adaptive aggregation weights to clients participating in FL training, identifying those with data distributions most conducive to a specific learning objective. We demonstrate that our aggregation method converges no worse than the method that aggregates only the updates received from clients with the same data distribution. Furthermore, empirical evaluations consistently reveal that collaborations guided by our algorithm outperform traditional FL approaches. This underscores the critical role of judicious client selection and lays the foundation for more streamlined and effective FL implementations in the coming years.
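To make the core idea concrete, below is a minimal sketch (Python with numpy) of adaptive aggregation weights. The abstract does not specify the weight rule, so the cosine-similarity-to-the-mean heuristic used here is a hypothetical stand-in, not the paper's algorithm; it only illustrates down-weighting clients whose updates point away from the shared objective.

    import numpy as np

    def aggregate_adaptive(updates):
        # Weighted aggregation of client updates. The weighting rule
        # (cosine similarity to the mean update, clipped at zero) is a
        # hypothetical stand-in for the paper's adaptive scheme.
        mean = np.mean(updates, axis=0)
        sims = np.array([
            float(u @ mean) / (np.linalg.norm(u) * np.linalg.norm(mean) + 1e-12)
            for u in updates
        ])
        weights = np.clip(sims, 0.0, None)   # ignore clients pushing the other way
        if weights.sum() == 0.0:             # degenerate case: fall back to uniform
            weights = np.ones(len(updates))
        weights = weights / weights.sum()
        return sum(w * u for w, u in zip(weights, updates))

    # Two clients with similar data (hence similar updates) and one outlier.
    rng = np.random.default_rng(0)
    updates = [rng.normal(1.0, 0.1, 5) for _ in range(2)]
    updates.append(rng.normal(-1.0, 0.1, 5))
    print(aggregate_adaptive(updates))

In this toy run the outlier client receives (near-)zero weight, so the aggregate stays close to the two compatible clients, which is the "finding friends" behavior the abstract describes.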
Related papers
- Balancing Similarity and Complementarity for Federated Learning [91.65503655796603]
Federated Learning (FL) is increasingly important in mobile and IoT systems.
One key challenge in FL is managing statistical heterogeneity, such as non-i.i.d. data.
We introduce a novel framework, FedSaC, which balances similarity and complementarity in FL cooperation.
arXiv Detail & Related papers (2024-05-16T08:16:19Z) - How to Collaborate: Towards Maximizing the Generalization Performance in
Cross-Silo Federated Learning [12.86056968708516]
Federated learning (FL) has attracted vivid attention as a privacy-preserving distributed learning framework.
In this work, we focus on cross-silo FL, where clients become the model owners after FL training.
We show that the performance of a client can be improved only by collaborating with other clients that have more training data.
arXiv Detail & Related papers (2024-01-24T05:41:34Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients (a generic sketch of such a local adaptive step follows after this list).
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Effectively Heterogeneous Federated Learning: A Pairing and Split
Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is derived by recasting the minimization of training latency as a graph edge selection problem.
Simulation results show that the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z) - Optimizing the Collaboration Structure in Cross-Silo Federated Learning [43.388911479025225]
In federated learning (FL), multiple clients collaborate to train machine learning models together.
We propose FedCollab, a novel FL framework that alleviates negative transfer by clustering clients into non-overlapping coalitions (a bare-bones clustering sketch follows after this list).
Our results demonstrate that FedCollab effectively mitigates negative transfer across a wide range of FL algorithms and consistently outperforms other clustered FL algorithms.
arXiv Detail & Related papers (2023-06-10T18:59:50Z) - When Do Curricula Work in Federated Learning? [56.88941905240137]
We find that curriculum learning largely alleviates non-IIDness.
The more disparate the data distributions across clients, the more they benefit from curriculum learning.
We propose a novel client selection technique that benefits from the real-world disparity in the clients (a minimal curriculum sketch follows after this list).
arXiv Detail & Related papers (2022-12-24T11:02:35Z) - Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles: data heterogeneity across clients and straggling devices.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z) - A Fair Federated Learning Framework With Reinforcement Learning [23.675056844328]
Federated learning (FL) is a paradigm where many clients collaboratively train a model under the coordination of a central server.
We propose a reinforcement learning framework, called PG-FFL, which automatically learns a policy to assign aggregation weights to clients.
We conduct extensive experiments over diverse datasets to verify the effectiveness of our framework.
arXiv Detail & Related papers (2022-05-26T15:10:16Z) - On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g. mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages client groups and individual clients in a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z) - Distributed Unsupervised Visual Representation Learning with Fused
Features [13.935997509072669]
Federated learning (FL) enables distributed clients to learn a shared model for prediction while keeping the training data local on each client.
We propose a federated contrastive learning framework consisting of two approaches: feature fusion and neighborhood matching.
It outperforms other methods by 11% on IID data and matches the performance of centralized learning.
arXiv Detail & Related papers (2021-11-21T08:36:31Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides (a minimal sketch of this joint prediction follows after this list).
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
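As noted in the FedLALR entry above, here is a generic sketch of a client-local AMSGrad step that keeps its own adaptive state. This is the textbook AMSGrad variant, not FedLALR's exact learning-rate schedule; it only illustrates how per-client optimizer state yields client-specific step sizes.

    import numpy as np

    class LocalAMSGrad:
        # One client's local AMSGrad state. Generic textbook variant;
        # the schedule that FedLALR actually analyzes is in the paper.
        def __init__(self, dim, lr=0.01, beta1=0.9, beta2=0.99, eps=1e-8):
            self.lr, self.beta1, self.beta2, self.eps = lr, beta1, beta2, eps
            self.m = np.zeros(dim)       # first-moment estimate
            self.v = np.zeros(dim)       # second-moment estimate
            self.v_hat = np.zeros(dim)   # running max of v (the AMSGrad fix)

        def step(self, params, grad):
            self.m = self.beta1 * self.m + (1 - self.beta1) * grad
            self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
            self.v_hat = np.maximum(self.v_hat, self.v)
            # The effective per-coordinate step size adapts to this
            # client's own gradient history -- the "client-specific" part.
            return params - self.lr * self.m / (np.sqrt(self.v_hat) + self.eps)

    # Toy quadratic: each local step moves params toward the optimum.
    opt = LocalAMSGrad(dim=3)
    params = np.zeros(3)
    for _ in range(5):
        grad = 2 * (params - np.array([1.0, -1.0, 0.5]))
        params = opt.step(params, grad)
    print(params)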
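For the FedCollab entry above, a bare-bones sketch of grouping clients into non-overlapping coalitions from a pairwise distribution-distance matrix. FedCollab's actual objective also trades off data quantity; the greedy thresholding below is only the clustering skeleton.

    import numpy as np

    def greedy_coalitions(dist, threshold):
        # Greedily group clients whose pairwise distribution distance
        # stays below `threshold` into non-overlapping coalitions.
        n = dist.shape[0]
        unassigned = list(range(n))
        coalitions = []
        while unassigned:
            seed = unassigned.pop(0)
            group = [seed]
            for c in unassigned[:]:
                if all(dist[c, g] < threshold for g in group):
                    group.append(c)
                    unassigned.remove(c)
            coalitions.append(group)
        return coalitions

    # Distances between client label distributions (e.g. total variation).
    dist = np.array([[0.0, 0.1, 0.8],
                     [0.1, 0.0, 0.7],
                     [0.8, 0.7, 0.0]])
    print(greedy_coalitions(dist, threshold=0.3))   # -> [[0, 1], [2]]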
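For the curriculum entry above, a minimal sketch of the two generic ingredients of a curriculum: a difficulty ordering and a pacing function. Loss under a reference model and linear pacing are common choices, not necessarily the ones that paper adopts.

    import numpy as np

    def curriculum_order(losses):
        # Indices from easiest (lowest loss under a reference model,
        # one common difficulty proxy) to hardest.
        return np.argsort(losses)

    def pace(step, total_steps, n_samples, start_frac=0.2):
        # Linear pacing: how much of the sorted data is exposed at `step`.
        frac = start_frac + (1.0 - start_frac) * step / total_steps
        return max(1, int(frac * n_samples))

    # Train on a growing easy-to-hard prefix of the data.
    losses = np.array([0.9, 0.1, 0.5, 0.3])
    order = curriculum_order(losses)
    for step in range(4):
        visible = order[:pace(step, total_steps=3, n_samples=len(order))]
        print(step, visible)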
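For the Federated Residual Learning entry, a minimal linear sketch of joint prediction: a client fits its personal model on the residual the shared model leaves behind, and the final prediction adds the two. The least-squares setup is illustrative only.

    import numpy as np

    def fit_local_residual(X, y, shared_w):
        # Fit this client's personal model on the residual the shared
        # (server-side) model leaves behind.
        residual = y - X @ shared_w
        local_w, *_ = np.linalg.lstsq(X, residual, rcond=None)
        return local_w

    def joint_predict(X, shared_w, local_w):
        # Joint prediction: shared base prediction plus personal correction.
        return X @ shared_w + X @ local_w

    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 3))
    y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.01, size=100)
    shared_w = np.array([1.0, -1.0, 0.0])        # imperfect shared model
    local_w = fit_local_residual(X, y, shared_w)  # client-side correction
    print(np.abs(joint_predict(X, shared_w, local_w) - y).mean())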