PraFFL: A Preference-Aware Scheme in Fair Federated Learning
- URL: http://arxiv.org/abs/2404.08973v1
- Date: Sat, 13 Apr 2024 11:40:05 GMT
- Title: PraFFL: A Preference-Aware Scheme in Fair Federated Learning
- Authors: Rongguang Ye, Ming Tang
- Abstract summary: We propose a Preference-aware scheme in the Fair Federated Learning paradigm (PraFFL).
PraFFL can adaptively adjust the model based on each client's preferences to meet their needs.
- Score: 6.6763659758988885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Fairness in federated learning has emerged as a critical concern, aiming to develop an unbiased model for any group defined by sensitive features (e.g., male or female). However, there is a trade-off between model performance and fairness: improving fairness typically decreases model performance. Existing approaches characterize this trade-off by introducing hyperparameters that quantify a client's preference for fairness versus model performance. Nevertheless, these methods are limited to scenarios where each client has only a single pre-defined preference. In practical systems, each client may simultaneously hold multiple preferences over model performance and fairness. The key challenge is to design a method that allows the model to adapt to the diverse preferences of each client in real time. To this end, we propose a Preference-aware scheme in the Fair Federated Learning paradigm (PraFFL). PraFFL can adaptively adjust the model based on each client's preferences to meet their needs. We theoretically prove that PraFFL can provide the optimal model for a client's arbitrary preferences. Experimental results show that PraFFL outperforms five existing fair federated learning algorithms in terms of the model's capability to adapt to clients' different preferences.
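The abstract describes adapting one model to arbitrary client preferences but not the mechanism itself. Below is a minimal, hypothetical sketch of the underlying idea: condition a single network on a sampled preference vector and linearly scalarize a performance loss with a fairness proxy (here a soft demographic-parity gap), so any trade-off can be requested at inference time. Every name and the choice of fairness proxy are illustrative assumptions, not PraFFL's actual components, and PraFFL's theoretical guarantees are not captured here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class PrefConditionedNet(nn.Module):
    """One model taking the client's 2-d preference vector
    (weight on performance, weight on fairness) as an extra input."""
    def __init__(self, in_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim + 2, 64), nn.ReLU(), nn.Linear(64, 1)
        )

    def forward(self, x, pref):
        pref = pref.expand(x.shape[0], -1)  # broadcast preference to batch
        return self.net(torch.cat([x, pref], dim=1)).squeeze(-1)

def dp_gap(logits, sensitive):
    """Soft demographic-parity gap |E[p|s=1] - E[p|s=0]|, one common
    fairness proxy (assumes both groups appear in the batch)."""
    p = torch.sigmoid(logits)
    return (p[sensitive == 1].mean() - p[sensitive == 0].mean()).abs()

def local_step(model, opt, x, y, sensitive):
    # Sample a random trade-off each step so one network covers all of them.
    w = float(torch.rand(()))
    pref = torch.tensor([[w, 1.0 - w]])
    logits = model(x, pref)
    perf = F.binary_cross_entropy_with_logits(logits, y.float())
    loss = w * perf + (1.0 - w) * dp_gap(logits, sensitive)  # scalarization
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

# Tiny smoke test on synthetic data.
model = PrefConditionedNet(in_dim=4)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x = torch.randn(32, 4)
y = (torch.rand(32) > 0.5).long()
s = (torch.rand(32) > 0.5).long()
print(local_step(model, opt, x, y, s))
```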
Related papers
- Emulating Full Client Participation: A Long-Term Client Selection Strategy for Federated Learning [48.94952630292219]
We propose a novel client selection strategy designed to emulate the performance achieved with full client participation.
In a single round, we select clients by minimizing the gradient-space estimation error between the client subset and the full client set.
In multi-round selection, we introduce a novel individual fairness constraint, which ensures that clients with similar data distributions have similar frequencies of being selected.
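The abstract does not give the selection algorithm itself. As one hedged reading of "minimizing the gradient-space estimation error," the numpy sketch below greedily grows a subset whose mean gradient best matches the full-client mean gradient. The function name and the greedy strategy are illustrative assumptions; the paper's actual objective and solver may differ, and the multi-round fairness constraint is not modeled here.

```python
import numpy as np

def greedy_gradient_matching(client_grads: np.ndarray, k: int) -> list[int]:
    """Greedily pick k clients whose mean gradient best approximates the
    mean gradient of all clients. client_grads has shape (n_clients, dim)."""
    full_mean = client_grads.mean(axis=0)
    selected: list[int] = []
    for _ in range(k):
        best_i, best_err = None, np.inf
        for i in range(len(client_grads)):
            if i in selected:
                continue
            subset_mean = client_grads[selected + [i]].mean(axis=0)
            err = np.linalg.norm(subset_mean - full_mean)  # estimation error
            if err < best_err:
                best_i, best_err = i, err
        selected.append(best_i)
    return selected

# Example: 20 clients with 5-d gradients, select a subset of 4.
rng = np.random.default_rng(0)
print(greedy_gradient_matching(rng.normal(size=(20, 5)), k=4))
```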
arXiv Detail & Related papers (2024-05-22T12:27:24Z) - Federated Learning While Providing Model as a Service: Joint Training and Inference Optimization [30.305956110710266]
Federated learning enables the training of models across distributed clients.
Existing work has overlooked the coexistence of model training and inference under clients' limited resources.
This paper focuses on the joint optimization of model training and inference to maximize inference performance at clients.
arXiv Detail & Related papers (2023-12-20T09:27:09Z) - FedJETs: Efficient Just-In-Time Personalization with Federated Mixture of Experts [48.78037006856208]
FedJETs is a novel method that uses a Mixture-of-Experts (MoE) framework within a Federated Learning (FL) setup.
Our method leverages the diversity of the clients to train specialized experts on different subsets of classes, and a gating function to route each input to the most relevant expert(s).
Our approach can improve accuracy up to 18% in state of the art FL settings, while maintaining competitive zero-shot performance.
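As a generic illustration of the routing idea (not FedJETs' actual gating function or expert assignment), the sketch below routes each input to its top-k experts via a learned softmax gate and mixes their outputs with renormalized weights.

```python
import torch
import torch.nn as nn

class TopKGate(nn.Module):
    """Route each input to its top-k experts via a learned softmax gate.
    A standard MoE gating sketch, not FedJETs' exact mechanism."""
    def __init__(self, in_dim: int, n_experts: int, k: int = 2):
        super().__init__()
        self.gate = nn.Linear(in_dim, n_experts)
        self.k = k

    def forward(self, x, experts):
        scores = torch.softmax(self.gate(x), dim=-1)  # (batch, n_experts)
        topv, topi = scores.topk(self.k, dim=-1)      # keep k best experts
        topv = topv / topv.sum(dim=-1, keepdim=True)  # renormalize weights
        out = torch.zeros(x.shape[0], experts[0].out_features)
        for j in range(self.k):                       # weighted expert mix
            for e in range(len(experts)):
                mask = topi[:, j] == e
                if mask.any():
                    out[mask] += topv[mask, j:j + 1] * experts[e](x[mask])
        return out

experts = nn.ModuleList([nn.Linear(8, 4) for _ in range(5)])
gate = TopKGate(8, n_experts=5, k=2)
print(gate(torch.randn(3, 8), experts).shape)  # torch.Size([3, 4])
```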
arXiv Detail & Related papers (2023-06-14T15:47:52Z) - Confidence-aware Personalized Federated Learning via Variational Expectation Maximization [34.354154518009956]
Personalized Federated Learning (PFL) is a distributed learning scheme to train a shared model across clients.
We present a novel framework for PFL based on hierarchical modeling and variational inference.
arXiv Detail & Related papers (2023-05-21T20:12:27Z) - Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
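A minimal sketch of how a fitted GMM can flag novel samples via a per-sample log-likelihood threshold. This uses scikit-learn's GaussianMixture as a stand-in and is not a reproduction of FedGMM's federated EM training or its client-adaptation procedure.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
train = rng.normal(loc=0.0, scale=1.0, size=(500, 2))  # in-distribution data
novel = rng.normal(loc=6.0, scale=1.0, size=(5, 2))    # shifted samples

# Fit a GMM to the inputs and take a low-likelihood tail as the cutoff.
gmm = GaussianMixture(n_components=3, random_state=0).fit(train)
threshold = np.quantile(gmm.score_samples(train), 0.01)  # 1% tail cutoff

def is_novel(x: np.ndarray) -> np.ndarray:
    """True where the per-sample log-likelihood falls below the threshold."""
    return gmm.score_samples(x) < threshold

print(is_novel(novel))      # expect mostly True
print(is_novel(train[:5]))  # expect mostly False
```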
arXiv Detail & Related papers (2023-05-01T20:04:46Z) - Personalizing or Not: Dynamically Personalized Federated Learning with Incentives [37.42347737911428]
We study personalized federated learning (FL), which learns personalized models without sharing private data.
We introduce the personalization rate, measured as the fraction of clients willing to train personalized models, into federated settings and propose DyPFL.
This technique incentivizes clients to participate in personalizing local models while allowing the adoption of the global model when it performs better.
arXiv Detail & Related papers (2022-08-12T09:51:20Z) - Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z) - Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning in which each client federates with other relevant clients to obtain a stronger model for its client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
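The abstract suggests each client weights other clients' models by how useful they are for its own objective. The sketch below shows one first-order weighting in that spirit: scoring each candidate model by its local validation-loss improvement over the client's own model, scaled by parameter distance. The function and its exact formula are assumptions drawn from the abstract's description, not the paper's verbatim update rule.

```python
import numpy as np

def first_order_weights(own_loss: float, other_losses: np.ndarray,
                        param_dists: np.ndarray) -> np.ndarray:
    """Weight each other client's model by how much it improves this
    client's local validation loss, per unit of parameter distance."""
    w = (own_loss - other_losses) / np.maximum(param_dists, 1e-12)
    w = np.maximum(w, 0.0)                 # keep only helpful clients
    total = w.sum()
    return w / total if total > 0 else w   # normalize, or all-zero if none help

# Example: three candidate client models evaluated on one client's data.
print(first_order_weights(0.9, np.array([0.7, 1.2, 0.8]),
                          np.array([1.0, 1.0, 2.0])))
```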
arXiv Detail & Related papers (2020-12-15T19:30:29Z) - An Efficiency-boosting Client Selection Scheme for Federated Learning with Fairness Guarantee [36.07970788489]
Federated Learning is a new paradigm to cope with the privacy issue by allowing clients to perform model training locally.
The client selection policy is critical to an FL process in terms of training efficiency, the final model's quality, and fairness.
In this paper, we model fairness-guaranteed client selection as a Lyapunov optimization problem and propose a C2MAB-based method to estimate the model exchange time.
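The C2MAB estimator is not spelled out in this summary. As a hedged stand-in for the estimation idea, the sketch below shows a plain bandit that keeps running means of per-client exchange times and selects clients optimistically. The class and its confidence bonus are generic UCB machinery, not the paper's combinatorial contextual formulation, and the Lyapunov fairness queue is omitted.

```python
import numpy as np

class ExchangeTimeBandit:
    """Bandit-style estimator of per-client model-exchange time:
    track running means and select clients with optimistic estimates."""
    def __init__(self, n_clients: int):
        self.mean = np.zeros(n_clients)   # observed mean exchange time
        self.count = np.zeros(n_clients)  # times each client was selected
        self.t = 0

    def select(self, k: int) -> np.ndarray:
        self.t += 1
        bonus = np.sqrt(2 * np.log(self.t) / np.maximum(self.count, 1))
        lcb = self.mean - bonus           # optimistic (lower) time estimate
        lcb[self.count == 0] = -np.inf    # try unseen clients first
        return np.argsort(lcb)[:k]

    def update(self, client: int, observed_time: float) -> None:
        self.count[client] += 1
        self.mean[client] += (observed_time - self.mean[client]) / self.count[client]

# Example: pick 3 of 10 clients, then report one observed exchange time.
bandit = ExchangeTimeBandit(n_clients=10)
chosen = bandit.select(k=3)
bandit.update(int(chosen[0]), observed_time=1.7)
```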
arXiv Detail & Related papers (2020-11-03T15:27:02Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
Experiments show that FML achieves better performance than alternatives in typical federated learning settings.
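A minimal sketch of the mutual-learning idea in the style of deep mutual learning: the collaboratively trained (generalized) model and the local (personalized) model each fit the labels while distilling toward the other's softened predictions. The coefficients and function names are assumptions; FML's exact objective may differ.

```python
import torch
import torch.nn.functional as F

def mutual_learning_losses(local_logits, shared_logits, y,
                           alpha=0.5, beta=0.5):
    """Each model combines a cross-entropy term on the labels with a KL
    term pulling it toward the other model's (detached) predictions."""
    ce_local = F.cross_entropy(local_logits, y)
    ce_shared = F.cross_entropy(shared_logits, y)
    kl_local = F.kl_div(F.log_softmax(local_logits, dim=1),
                        F.softmax(shared_logits.detach(), dim=1),
                        reduction="batchmean")
    kl_shared = F.kl_div(F.log_softmax(shared_logits, dim=1),
                         F.softmax(local_logits.detach(), dim=1),
                         reduction="batchmean")
    loss_local = alpha * ce_local + (1 - alpha) * kl_local
    loss_shared = beta * ce_shared + (1 - beta) * kl_shared
    return loss_local, loss_shared

# Example with random logits for a 4-class problem, batch of 8.
local, shared = torch.randn(8, 4), torch.randn(8, 4)
y = torch.randint(0, 4, (8,))
print(mutual_learning_losses(local, shared, y))
```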
arXiv Detail & Related papers (2020-06-27T09:35:03Z)
This list is automatically generated from the titles and abstracts of the papers on this site.