Can Fair Federated Learning reduce the need for Personalisation?
- URL: http://arxiv.org/abs/2305.02728v1
- Date: Thu, 4 May 2023 11:03:33 GMT
- Title: Can Fair Federated Learning reduce the need for Personalisation?
- Authors: Alex Iacob, Pedro P. B. Gusmão, Nicholas D. Lane
- Abstract summary: Federated Learning (FL) enables training ML models on edge clients without sharing data.
This paper evaluates two Fair FL (FFL) algorithms as starting points for personalisation.
We propose Personalisation-aware Federated Learning (PaFL) as a paradigm that pre-emptively uses personalisation losses during training.
- Score: 9.595853312558276
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables training ML models on edge clients without
sharing data. However, the federated model's performance on local data varies,
disincentivising the participation of clients who benefit little from FL. Fair
FL reduces accuracy disparity by focusing on clients with higher losses while
personalisation locally fine-tunes the model. Personalisation provides a
participation incentive when the federated model underperforms relative to one
trained locally: in such cases, fine-tuning the pre-trained federated weights
can raise their accuracy on a client's data to match or exceed that of the
client's purely local model. This paper evaluates two Fair FL (FFL) algorithms as
starting points for personalisation. Our results show that FFL provides no
benefit to relative performance in a language task and may double the number of
underperforming clients for an image task. Instead, we propose
Personalisation-aware Federated Learning (PaFL) as a paradigm that
pre-emptively uses personalisation losses during training. Our technique shows
a 50% reduction in the number of underperforming clients for the language task
while lowering the number of underperforming clients in the image task instead
of doubling it. Thus, evidence indicates that it may allow a broader set of
devices to benefit from FL and represents a promising avenue for future
experimentation and theoretical analysis.
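A concrete reading of the paper's headline metric: a client is "underperforming" when the federated model, even after adaptation, is less accurate on its own data than a purely local model. A minimal sketch of that criterion, with made-up accuracy values:

```python
def underperforming_clients(federated_acc, local_acc):
    """Return the clients whose (optionally personalised) federated model
    is less accurate on their own data than a model trained entirely
    locally -- the participation-incentive metric the paper tracks.
    Inputs are hypothetical per-client accuracy lists."""
    return [cid for cid, (fed, loc) in enumerate(zip(federated_acc, local_acc))
            if fed < loc]

# Example with made-up accuracies for four clients:
print(underperforming_clients([0.81, 0.64, 0.77, 0.70],
                              [0.78, 0.72, 0.75, 0.74]))  # -> [1, 3]
```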
Related papers
- Fairness-Aware Client Selection for Federated Learning [13.781019191483864]
Federated learning (FL) has enabled multiple data owners (a.k.a. FL clients) to train machine learning models collaboratively without revealing private data.
Since the FL server can only engage a limited number of clients in each training round, FL client selection has become an important research problem.
We propose the Fairness-aware Federated Client Selection (FairFedCS) approach. Based on Lyapunov optimization, it dynamically adjusts FL clients' selection probabilities by jointly considering their reputations, times of participation in FL tasks and contributions to the resulting model performance.
arXiv Detail & Related papers (2023-07-20T10:04:55Z)
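The summary above names the signals FairFedCS uses (reputation, participation count, contribution) but not the Lyapunov-based update itself; the softmax heuristic below is only an illustrative guess at how such signals could shape selection probabilities.

```python
import numpy as np

def selection_probs(reputation, contribution, participation, beta=1.0):
    # Favour reputable, high-contribution clients, but discount clients
    # that have already participated often (a crude fairness pressure).
    # Illustrative stand-in, not FairFedCS's Lyapunov-derived update.
    score = reputation + contribution - beta * participation
    exp = np.exp(score - score.max())  # numerically stable softmax
    return exp / exp.sum()

# Hypothetical statistics for five clients.
rng = np.random.default_rng(0)
p = selection_probs(np.array([0.9, 0.7, 0.8, 0.4, 0.6]),
                    np.array([0.2, 0.5, 0.1, 0.6, 0.3]),
                    np.array([5, 1, 3, 0, 2]) / 5.0)
chosen = rng.choice(5, size=2, replace=False, p=p)
```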
- FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel solution by using a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route the input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state of the art FL settings, while maintaining competitive zero-shot performance.
arXiv Detail & Related papers (2023-06-14T15:47:52Z)
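A minimal sketch of the gating idea described above: a learned gate scores the experts per input and the top-k outputs are mixed. The expert and gate architectures here are placeholders, not the paper's.

```python
import torch
import torch.nn as nn

class TopKMoE(nn.Module):
    """Sketch of the routing idea behind FedJETs: a gate scores experts
    per input and the top-k outputs are mixed. All experts run densely
    here for simplicity; a real MoE dispatches sparsely."""
    def __init__(self, experts, feat_dim, k=2):
        super().__init__()
        self.experts = nn.ModuleList(experts)
        self.gate = nn.Linear(feat_dim, len(experts))
        self.k = k

    def forward(self, x):
        weights = torch.softmax(self.gate(x), dim=-1)        # (B, E)
        topv, topi = weights.topk(self.k, dim=-1)
        mask = torch.zeros_like(weights).scatter(-1, topi, topv)
        outs = torch.stack([e(x) for e in self.experts], 1)  # (B, E, C)
        return (mask.unsqueeze(-1) * outs).sum(1)

moe = TopKMoE([nn.Linear(16, 10) for _ in range(4)], feat_dim=16)
logits = moe(torch.randn(8, 16))                             # (8, 10)
```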
- Efficient Personalized Federated Learning via Sparse Model-Adaptation [47.088124462925684]
Federated Learning (FL) aims to train machine learning models for multiple clients without sharing their own private data.
We propose pFedGate for efficient personalized FL by adaptively and efficiently learning sparse local models.
We show that pFedGate achieves superior global accuracy, individual accuracy and efficiency simultaneously over state-of-the-art methods.
arXiv Detail & Related papers (2023-05-04T12:21:34Z)
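A rough sketch of per-client sparse model adaptation: a learnable gate keeps a fraction of a shared layer's weights. pFedGate's actual gating network and sparsity constraint are more elaborate; this magnitude top-k mask only conveys the flavour.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SparseGatedLinear(nn.Module):
    """Simplified pFedGate-style sparse adaptation: a learnable gate
    keeps only the top fraction of a shared layer's weights."""
    def __init__(self, base: nn.Linear, keep=0.5):
        super().__init__()
        self.base, self.keep = base, keep
        self.gate = nn.Parameter(torch.ones_like(base.weight))

    def forward(self, x):
        k = max(1, int(self.gate.numel() * self.keep))
        thresh = self.gate.flatten().topk(k).values[-1]  # k-th largest
        mask = (self.gate >= thresh).float()
        # Multiplying by the gate keeps kept entries differentiable; a
        # straight-through estimator would be needed for the mask itself.
        return F.linear(x, self.base.weight * mask * self.gate, self.base.bias)

layer = SparseGatedLinear(nn.Linear(16, 8))
out = layer(torch.randn(4, 16))
```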
- Federated Learning of Shareable Bases for Personalization-Friendly Image Classification [54.72892987840267]
FedBasis learns a small set of shareable "basis" models, which can be linearly combined to form personalized models for clients.
Specifically for a new client, only a small set of combination coefficients, not the model weights, needs to be learned.
To demonstrate the effectiveness and applicability of FedBasis, we also present a more practical PFL testbed for image classification.
arXiv Detail & Related papers (2023-04-16T20:19:18Z)
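A sketch of the combination step: the personalized model is a weighted sum of shared basis models, so a new client only learns the coefficients. The bases and shapes below are placeholders.

```python
import torch
import torch.nn as nn

def combine_bases(basis_states, coeffs):
    """FedBasis-style personalisation sketch: a client's model is a
    linear combination of shared basis models, so only the combination
    coefficients are learned per client."""
    return {name: sum(c * sd[name] for c, sd in zip(coeffs, basis_states))
            for name in basis_states[0]}

bases = [nn.Linear(10, 2).state_dict() for _ in range(3)]  # hypothetical bases
coeffs = torch.softmax(torch.zeros(3), dim=0)  # learned per client in practice
client = nn.Linear(10, 2)
client.load_state_dict(combine_bases(bases, coeffs))
# To actually optimise `coeffs`, the forward pass would go through
# torch.func.functional_call so gradients reach the coefficients.
```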
- Personalized Federated Learning on Long-Tailed Data via Adversarial Feature Augmentation [24.679535905451758]
PFL aims to learn personalized models for each client based on the knowledge across all clients in a privacy-preserving manner.
Existing PFL methods assume that the underlying global data across all clients are uniformly distributed, without considering long-tailed distributions.
We propose Federated Learning with Adversarial Feature Augmentation (FedAFA) to address this joint problem in PFL.
arXiv Detail & Related papers (2023-03-27T13:00:20Z)
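A simplified stand-in for adversarial feature augmentation: perturb features in the loss-increasing direction (FGSM in feature space) to synthesize hard examples for tail classes. FedAFA's actual head-to-tail knowledge transfer is more involved than this.

```python
import torch
import torch.nn.functional as F

def adversarial_features(features, labels, classifier, eps=0.1):
    """Illustrative adversarial feature augmentation: perturb tail-class
    features in the direction that increases the classifier loss (FGSM
    applied in feature space). A simplified stand-in, not FedAFA's
    actual augmentation scheme."""
    feats = features.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(classifier(feats), labels)
    grad, = torch.autograd.grad(loss, feats)
    return (feats + eps * grad.sign()).detach()
```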
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
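The abstract does not spell out the game itself; as a generic point of reference, the IRMv1 penalty below is one standard way to encourage features that are invariant across clients. It is a stand-in, not FL GAMES' game-theoretic formulation.

```python
import torch
import torch.nn.functional as F

def invariance_penalty(logits, labels):
    """Generic invariance regulariser (the IRMv1 penalty of Arjovsky et
    al.): gradient of the risk w.r.t. a dummy classifier scale. Shown
    only as a stand-in; FL GAMES casts invariant-feature learning as a
    game between clients rather than using this penalty."""
    scale = torch.ones(1, device=logits.device, requires_grad=True)
    loss = F.cross_entropy(logits * scale, labels)
    grad, = torch.autograd.grad(loss, scale, create_graph=True)
    return grad.pow(2).sum()
```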
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
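FedReg's forgetting mitigation relies on generated pseudo-data; as a simplified stand-in with the same goal, the local step below distils the global model's predictions into the local model while fitting the local batch.

```python
import torch
import torch.nn.functional as F

def local_step_with_retention(model, global_model, batch, optim, alpha=0.5):
    """Local update that distils the global model's predictions into the
    local model to limit forgetting of global knowledge. FedReg itself
    relies on generated pseudo-data; this batch-level distillation is a
    simplified stand-in for the same goal."""
    x, y = batch
    optim.zero_grad()
    logits = model(x)
    with torch.no_grad():
        teacher = F.softmax(global_model(x), dim=-1)
    loss = F.cross_entropy(logits, y) + alpha * F.kl_div(
        F.log_softmax(logits, dim=-1), teacher, reduction="batchmean")
    loss.backward()
    optim.step()
    return loss.item()
```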
- No One Left Behind: Inclusive Federated Learning over Heterogeneous Devices [79.16481453598266]
We propose InclusiveFL, a client-inclusive federated learning method to handle this problem.
The core idea of InclusiveFL is to assign models of different sizes to clients with different computing capabilities.
We also propose an effective method to share the knowledge among multiple local models with different sizes.
arXiv Detail & Related papers (2022-02-16T13:03:27Z)
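A sketch of size-aware aggregation: if weak clients train a smaller sub-model (say, a prefix of the layers), each parameter can be averaged over just the clients that hold it. The paper's cross-size knowledge sharing goes further than this.

```python
def aggregate_heterogeneous(client_states, weights):
    """Aggregate clients holding differently sized models: each parameter
    is averaged over just the clients that have it, so weak clients
    training a smaller sub-model still contribute. A simplified sketch
    of the InclusiveFL idea, not its full knowledge-sharing scheme."""
    names = set().union(*(s.keys() for s in client_states))
    merged = {}
    for name in names:
        holders = [(w, s[name]) for w, s in zip(weights, client_states)
                   if name in s]
        total = sum(w for w, _ in holders)
        merged[name] = sum(w * p for w, p in holders) / total
    return merged
```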
- Personalized Federated Learning with Moreau Envelopes [16.25105865597947]
Federated learning (FL) is a decentralized and privacy-preserving machine learning technique.
One challenge associated with FL is statistical diversity among clients.
We propose an algorithm for personalized FL (pFedMe) that uses Moreau envelopes as a regularized loss function.
arXiv Detail & Related papers (2020-06-16T00:55:23Z)
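The Moreau-envelope objective is concrete enough to sketch: each client finds a personalized model theta minimizing f_i(theta) + (lambda/2)||theta - w||^2 around its local weights w. Hyperparameters below are illustrative.

```python
import torch
import torch.nn.functional as F

def pfedme_inner(model, local_w, batch, lam=15.0, lr=0.01, steps=5):
    """pFedMe's inner problem: find a personalised model theta minimising
    f_i(theta) + (lam/2) * ||theta - w||^2, i.e. the Moreau envelope of
    the client loss around the (detached) local weights w."""
    x, y = batch
    for _ in range(steps):
        loss = F.cross_entropy(model(x), y)
        prox = sum((p - w).pow(2).sum()
                   for p, w in zip(model.parameters(), local_w))
        grads = torch.autograd.grad(loss + 0.5 * lam * prox,
                                    list(model.parameters()))
        with torch.no_grad():
            for p, g in zip(model.parameters(), grads):
                p -= lr * g
    return model  # theta; w is then moved toward theta in the outer loop
```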
- Salvaging Federated Learning by Local Adaptation [26.915147034955925]
Federated learning (FL) is a heavily promoted approach for training ML models on sensitive data.
We look at FL from the local viewpoint of an individual participant and ask: do participants have an incentive to participate in FL?
We show that on standard tasks such as next-word prediction, many participants gain no benefit from FL because the federated model is less accurate on their data than the models they can train locally on their own.
We evaluate three techniques for local adaptation of federated models: fine-tuning, multi-task learning, and knowledge distillation.
arXiv Detail & Related papers (2020-02-12T01:56:16Z)
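Of the three adaptation techniques evaluated above, fine-tuning is simple enough to sketch directly: copy the federated weights and continue training on the client's own data. The optimizer choice and hyperparameters below are illustrative.

```python
import copy
import torch
import torch.nn.functional as F

def finetune_locally(global_model, loader, epochs=1, lr=1e-3):
    """Simplest of the three adaptation techniques the paper compares:
    fine-tune a copy of the federated weights on the client's own data."""
    model = copy.deepcopy(global_model)
    optim = torch.optim.SGD(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optim.zero_grad()
            F.cross_entropy(model(x), y).backward()
            optim.step()
    return model
```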
This list is automatically generated from the titles and abstracts of the papers in this site.