Utilizing Free Clients in Federated Learning for Focused Model Enhancement
- URL: http://arxiv.org/abs/2310.04515v1
- Date: Fri, 6 Oct 2023 18:23:40 GMT
- Title: Utilizing Free Clients in Federated Learning for Focused Model Enhancement
- Authors: Aditya Narayan Ravi and Ilan Shomorony
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach to learn models on decentralized heterogeneous data.
We present FedALIGN (Federated Adaptive Learning with Inclusion of Global Needs) to address the challenge of selecting and incentivizing well-aligned non-priority clients.
- Score: 9.370655190768163
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning approach to learn
models on decentralized heterogeneous data, without the need for clients to
share their data. Many existing FL approaches assume that all clients have
equal importance and construct a global objective based on all clients. We
consider a version of FL we call Prioritized FL, where the goal is to learn a
weighted mean objective of a subset of clients, designated as priority clients.
An important question arises: how do we choose and incentivize well-aligned
non-priority clients to participate in the federation, while discarding
misaligned clients? We present FedALIGN (Federated Adaptive Learning with
Inclusion of Global Needs) to address this challenge. The algorithm employs a
matching strategy that selects non-priority clients based on how similar the
model's loss on their data is to its loss on the global data, thereby using
non-priority client gradients only when they benefit the priority clients.
This approach ensures mutual benefit: non-priority clients are motivated to
join when the model performs satisfactorily on their data, and priority
clients can utilize their updates and computational resources when their
goals align. We present a convergence analysis that quantifies the trade-off
between client selection and speed of convergence. Our algorithm shows faster
convergence and
higher test accuracy than baselines for various synthetic and benchmark
datasets.
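
To make the selection rule concrete, here is a minimal sketch of a FedALIGN-style aggregation round. The tolerance `epsilon`, the function name, and the plain averaging are assumptions for illustration, not the paper's exact algorithm.

```python
import numpy as np

def fedalign_style_round(global_model, priority_updates,
                         non_priority_updates, non_priority_losses,
                         global_loss, epsilon=0.1):
    """One hypothetical aggregation round.

    Priority-client updates are always used. A non-priority update is
    included only if the model's loss on that client's data is within
    `epsilon` of its loss on the global (priority) data, approximating
    the loss-matching selection described in the abstract.
    """
    selected = [u for u, loss in zip(non_priority_updates, non_priority_losses)
                if abs(loss - global_loss) <= epsilon]
    updates = list(priority_updates) + selected
    return global_model + np.mean(updates, axis=0)
```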
Related papers
- Beyond the Federation: Topology-aware Federated Learning for Generalization to Unseen Clients [10.397502254316645]
Federated learning is widely employed to train models on distributed sensitive data.
Topology-aware Federated Learning (TFL) trains robust models against out-of-federation (OOF) data.
We formulate a novel optimization problem for TFL, consisting of two key modules: Client Topology Learning and Learning on Client Topology.
Empirical evaluation on a variety of real-world datasets verifies TFL's superior OOF robustness and scalability.
arXiv Detail & Related papers (2024-07-06T03:57:05Z)
- An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training updates.
In this paper, we present a novel FL algorithm, i.e., FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Learning for Semantic Parsing: Task Formulation, Evaluation Setup, New Algorithms [29.636944156801327]
Multiple clients collaboratively train one global model without sharing their semantic parsing data.
Lorar adjusts each client's contribution to the global model update based on its training loss reduction during each round (see the sketch after this list).
Clients with smaller datasets enjoy larger performance gains.
arXiv Detail & Related papers (2023-05-26T19:25:49Z)
- Re-Weighted Softmax Cross-Entropy to Control Forgetting in Federated Learning [14.196701066823499]
In Federated Learning, a global model is learned by aggregating model updates computed at a set of independent client nodes.
We show that individual client models experience catastrophic forgetting with respect to data from other clients.
We propose an efficient approach that modifies the cross-entropy objective on a per-client basis by re-weighting the softmax logits prior to computing the loss (see the sketch after this list).
arXiv Detail & Related papers (2023-04-11T14:51:55Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- Game of Gradients: Mitigating Irrelevant Clients in Federated Learning [3.2095659532757916]
Federated learning (FL) deals with multiple clients participating in collaborative training of a machine learning model under the orchestration of a central server.
In this setup, each client's data is private to itself and is not transferable to other clients or the server.
We refer to these problems as Federated Relevant Client Selection (FRCS).
arXiv Detail & Related papers (2021-10-23T16:34:42Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
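
As a concrete illustration of the loss-reduction re-weighting mentioned in the Lorar entry above, here is a minimal sketch; the clipping, the normalization, and the function name are assumptions for illustration rather than the paper's exact rule.

```python
import numpy as np

def loss_reduction_weighted_aggregate(global_model, client_updates,
                                      losses_before, losses_after):
    """Aggregate client updates, weighting each client by its per-round
    training loss reduction (clipped at zero and normalized)."""
    reductions = np.clip(np.asarray(losses_before) - np.asarray(losses_after),
                         0.0, None)
    if reductions.sum() == 0.0:
        # No client improved this round: fall back to uniform weights.
        weights = np.full(len(client_updates), 1.0 / len(client_updates))
    else:
        weights = reductions / reductions.sum()
    weighted = sum(w * u for w, u in zip(weights, client_updates))
    return global_model + weighted
```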
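Similarly, the re-weighted softmax cross-entropy entry above describes per-client re-weighting of the softmax logits before the loss. One hedged way to realize this, adding log class-weights to the logits so each class's softmax term is scaled, is sketched below; the weights derived from local label frequencies are an assumption, and the paper's exact scheme may differ.

```python
import torch
import torch.nn.functional as F

def reweighted_softmax_ce(logits: torch.Tensor,
                          targets: torch.Tensor,
                          class_weights: torch.Tensor) -> torch.Tensor:
    """Cross-entropy with per-class re-weighting of the softmax terms.

    Adding log-weights to the logits multiplies each class's softmax
    numerator by `class_weights[c]`, so classes a client rarely sees
    can be down-weighted in its local objective.
    """
    weighted_logits = logits + torch.log(class_weights.clamp_min(1e-12))
    return F.cross_entropy(weighted_logits, targets)

# Example (hypothetical): weights from a client's local label frequencies.
# counts = torch.bincount(local_labels, minlength=num_classes).float()
# class_weights = counts / counts.sum()
```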
This list is automatically generated from the titles and abstracts of the papers on this site.