Performative Federated Learning: A Solution to Model-Dependent and
Heterogeneous Distribution Shifts
- URL: http://arxiv.org/abs/2305.05090v1
- Date: Mon, 8 May 2023 23:29:24 GMT
- Authors: Kun Jin, Tongxin Yin, Zhongzhu Chen, Zeyu Sun, Xueru Zhang, Yang Liu,
Mingyan Liu
- Score: 24.196279060605402
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: We consider a federated learning (FL) system consisting of multiple clients
and a server, where the clients aim to collaboratively learn a common decision
model from their distributed data. Unlike the conventional FL framework that
assumes the clients' data are static, we consider scenarios where the clients'
data distributions may be reshaped by the deployed decision model. In this
work, we leverage the idea of distribution shift mappings in performative
prediction to formalize this model-dependent data distribution shift and
propose a performative federated learning framework. We first introduce
necessary and sufficient conditions for the existence of a unique performative
stable solution and characterize its distance to the performative optimal
solution. Then we propose the performative FedAvg algorithm and show that it
converges to the performative stable solution at a rate of O(1/T) under both
full and partial participation schemes. In particular, we use novel proof
techniques and show how the clients' heterogeneity influences the convergence.
Numerical results validate our analysis and provide valuable insights into
real-world applications.
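To make the setup concrete, here is a minimal runnable sketch of a performative FedAvg loop on a toy problem. The location-family shift map D_i(theta) = N(mu_i + eps_i * theta, 1), the squared loss, and every name in the code (performative_fedavg, local_sgd, sample_client_data) are illustrative assumptions, not the paper's algorithm specification or experiments; the closed-form stable point holds only for this toy map.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy performative FL setup (an illustrative assumption, not the paper's
# experiments): client i holds data from D_i(theta) = N(mu_i + eps_i*theta, 1),
# i.e. the deployed model theta reshapes each local distribution through a
# location-family shift map, a standard performative-prediction example.
NUM_CLIENTS = 10
mu = rng.normal(1.0, 0.5, NUM_CLIENTS)    # heterogeneous base means
eps = rng.uniform(0.0, 0.3, NUM_CLIENTS)  # per-client shift sensitivities

def sample_client_data(i, theta, n=256):
    """Draw n samples from client i's model-dependent distribution D_i(theta)."""
    return rng.normal(mu[i] + eps[i] * theta, 1.0, n)

def local_sgd(theta, data, lr=0.1, steps=20, batch=32):
    """Local SGD on the squared loss l(theta; z) = 0.5 * (theta - z)**2."""
    for _ in range(steps):
        zb = rng.choice(data, batch)
        theta -= lr * np.mean(theta - zb)  # stochastic gradient step
    return theta

def performative_fedavg(rounds=200, frac=0.5):
    """FedAvg where each round's client data is drawn AFTER deploying theta."""
    theta = 0.0
    for _ in range(rounds):
        # Partial participation: a random subset of clients per round.
        active = rng.choice(NUM_CLIENTS, int(frac * NUM_CLIENTS), replace=False)
        updates = [local_sgd(theta, sample_client_data(i, theta)) for i in active]
        theta = float(np.mean(updates))    # server-side averaging
    return theta

theta_hat = performative_fedavg()
# For this toy shift map the performative stable point is available in closed
# form: it solves theta = mean_i(mu_i + eps_i * theta).
theta_ps = mu.mean() / (1.0 - eps.mean())
print(f"P-FedAvg: {theta_hat:.3f}  closed-form stable point: {theta_ps:.3f}")
```

The loop approaches the performative stable solution, i.e., the fixed point of the deploy-then-retrain dynamics; as the abstract notes, this generally differs from the performative optimal solution, with a gap governed by the shift sensitivities eps_i.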
Related papers
- Distributionally Robust Clustered Federated Learning: A Case Study in Healthcare [9.433126190164224]
We introduce a novel algorithm, which we term Cross-silo Robust Clustered Federated Learning (CS-RCFL).
We construct ambiguity sets around each client's empirical distribution that capture possible distribution shifts in the local data.
We then propose a model-agnostic integer fractional program to determine the optimal distributionally robust clustering of clients into coalitions.
arXiv Detail & Related papers (2024-10-09T16:25:01Z)
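A rough sketch of the ambiguity-set idea in the CS-RCFL entry above, under strong simplifying assumptions: one-dimensional client data, an empirical Wasserstein-1 distance, and a greedy grouping rule standing in for the paper's integer fractional program. All names, data, and thresholds here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical silos: five clients whose local data comes from three
# underlying site types (means 0, 2, and 4).
clients = [rng.normal(loc, 1.0, 300) for loc in (0.0, 0.1, 2.0, 2.1, 4.0)]

def wasserstein_1d(a, b):
    """Empirical Wasserstein-1 distance between equal-size 1-D samples."""
    return float(np.mean(np.abs(np.sort(a) - np.sort(b))))

def greedy_robust_clusters(datasets, radius):
    """Group clients whose empirical distributions lie within overlapping
    Wasserstein ambiguity balls of the given radius; a greedy stand-in for
    the integer fractional program used by CS-RCFL."""
    clusters = []
    for i, d in enumerate(datasets):
        for c in clusters:
            if all(wasserstein_1d(d, datasets[j]) <= 2 * radius for j in c):
                c.append(i)
                break
        else:
            clusters.append([i])
    return clusters

print(greedy_robust_clusters(clients, radius=0.5))  # expect [[0, 1], [2, 3], [4]]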
- FedImpro: Measuring and Improving Client Update in Federated Learning [77.68805026788836]
Federated Learning (FL) models often experience client drift caused by heterogeneous data.
We present an alternative perspective on client drift and aim to mitigate it by generating improved local models.
arXiv Detail & Related papers (2024-02-10T18:14:57Z)
- Federated Learning with Projected Trajectory Regularization [65.6266768678291]
Federated learning enables joint training of machine learning models from distributed clients without sharing their local data.
One key challenge in federated learning is to handle non-identically distributed data across the clients.
We propose a novel federated learning framework with projected trajectory regularization (FedPTR) for tackling the data issue.
arXiv Detail & Related papers (2023-12-22T02:12:08Z)
- FedCiR: Client-Invariant Representation Learning for Federated Non-IID Features [15.555538379806135]
Federated learning (FL) is a distributed learning paradigm that maximizes the potential of data-driven models for edge devices without sharing their raw data.
We propose FedCiR, a client-invariant representation learning framework that enables clients to extract informative and client-invariant features.
arXiv Detail & Related papers (2023-08-30T06:36:32Z)
- Towards Instance-adaptive Inference for Federated Learning [80.38701896056828]
Federated learning (FL) is a distributed learning paradigm that enables multiple clients to learn a powerful global model by aggregating local training.
In this paper, we present a novel FL algorithm, FedIns, to handle intra-client data heterogeneity by enabling instance-adaptive inference in the FL framework.
Our experiments show that our FedIns outperforms state-of-the-art FL algorithms, e.g., a 6.64% improvement against the top-performing method with less than 15% communication cost on Tiny-ImageNet.
arXiv Detail & Related papers (2023-08-11T09:58:47Z)
- Federated Variational Inference: Towards Improved Personalization and Generalization [2.37589914835055]
We study personalization and generalization in stateless cross-device federated learning setups.
We first propose a hierarchical generative model and formalize it using Bayesian Inference.
We then approximate this process using Variational Inference to train our model efficiently.
We evaluate our model on FEMNIST and CIFAR-100 image classification and show that FedVI beats the state-of-the-art on both tasks.
arXiv Detail & Related papers (2023-05-23T04:28:07Z)
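For intuition, here is a generic hierarchical evidence lower bound of the kind a model like the FedVI entry above would optimize, assuming global parameters theta shared across clients and a local latent z_i per client; the paper's exact factorization and variational family may differ.

```latex
\log p(\mathcal{D} \mid \theta) \;\ge\;
\sum_{i=1}^{N} \Big(
  \mathbb{E}_{q_i(z_i)}\big[\log p(\mathcal{D}_i \mid z_i, \theta)\big]
  \;-\; \mathrm{KL}\big(q_i(z_i) \,\big\|\, p(z_i \mid \theta)\big)
\Big)
```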
- FedHB: Hierarchical Bayesian Federated Learning [11.936836827864095]
We propose a novel hierarchical Bayesian approach to Federated Learning (FL).
Our model reasonably describes the generative process of clients' local data via hierarchical Bayesian modeling.
We show that our block-coordinate FL algorithm converges to an optimum of the objective at the rate of $O(1/\sqrt{t})$.
arXiv Detail & Related papers (2023-05-08T18:21:41Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two key hurdles in this setting: data heterogeneity and stragglers.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
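The entry above describes learning a shared representation plus per-client heads. Below is a minimal alternating-minimization sketch of that idea on synthetic linear data; the dimensions, learning rate, and update rule are assumptions for illustration, not the paper's procedure.

```python
import numpy as np

rng = np.random.default_rng(2)

# Assumed setup: all clients share a linear representation B, while client i
# keeps a personal head w_i, so y = X @ B @ w_i + noise.
D, K, N_CLIENTS, N = 20, 3, 8, 400
B_true = rng.normal(size=(D, K))
data = []
for _ in range(N_CLIENTS):
    w = rng.normal(size=K)
    X = rng.normal(size=(N, D))
    data.append((X, X @ B_true @ w + 0.1 * rng.normal(size=N)))

# Alternating minimization: each client refits its personal head exactly on
# the current shared representation, then the server takes a gradient step
# on B using all clients' residuals.
B = 0.1 * rng.normal(size=(D, K))
heads = [np.zeros(K) for _ in range(N_CLIENTS)]
for _ in range(300):
    grad = np.zeros_like(B)
    for i, (X, y) in enumerate(data):
        Z = X @ B
        heads[i] = np.linalg.lstsq(Z, y, rcond=None)[0]  # personalization step
        resid = Z @ heads[i] - y
        grad += np.outer(X.T @ resid, heads[i]) / N      # dL/dB for client i
    B -= 0.01 * grad / N_CLIENTS                         # shared-representation step

mse = np.mean([np.mean((X @ B @ w - y) ** 2) for (X, y), w in zip(data, heads)])
print(f"mean per-client MSE: {mse:.4f}")
```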
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
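Local mixup itself is easy to illustrate. The sketch below (hypothetical shapes and names) forms convex combinations of random sample pairs inside one client's batch, the kind of local-update smoothing the DRFLM summary above refers to.

```python
import numpy as np

rng = np.random.default_rng(3)

def local_mixup(X, y, alpha=0.4):
    """Pairwise mixup inside one client's batch: convex combinations of
    random sample pairs smooth the local distribution before the usual
    local update. alpha is the Beta-distribution concentration."""
    lam = rng.beta(alpha, alpha, size=len(X))
    perm = rng.permutation(len(X))
    X_mix = lam[:, None] * X + (1.0 - lam[:, None]) * X[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return X_mix, y_mix

# Hypothetical client batch: noisy linear regression data.
X = rng.normal(size=(128, 5))
y = X @ np.array([1.0, -2.0, 0.5, 0.0, 3.0]) + 0.3 * rng.normal(size=128)
X_mix, y_mix = local_mixup(X, y)
# The mixed batch then feeds the client's ordinary local SGD step.
```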
- Federated Multi-Task Learning under a Mixture of Distributions [10.00087964926414]
Federated Learning (FL) is a framework for on-device collaborative training of machine learning models.
First efforts in FL focused on learning a single global model with good average performance across clients, but the global model may be arbitrarily bad for a given client.
We study federated MTL under the flexible assumption that each local data distribution is a mixture of unknown underlying distributions.
arXiv Detail & Related papers (2021-08-23T15:47:53Z)
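A compact EM-style sketch of the mixture assumption above, in the spirit of federated multi-task learning with shared component models and client-specific mixture weights. The two-component linear setup, the fixed noise scale, and the update schedule are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)

# Assumed setup: M = 2 shared linear tasks; each client's samples are drawn
# from its own mixture over the two tasks (only the weights differ by client).
M, D, N = 2, 4, 300
comps_true = rng.normal(size=(M, D))
data = []
for pi in (np.array([0.8, 0.2]), np.array([0.3, 0.7])):
    z = rng.choice(M, N, p=pi)                  # latent task per sample
    X = rng.normal(size=(N, D))
    y = np.einsum('nd,nd->n', X, comps_true[z]) + 0.1 * rng.normal(size=N)
    data.append((X, y))

# EM-style loop: responsibilities and mixture weights are computed locally
# (E-step); the shared components are refit from aggregated, responsibility-
# weighted statistics (M-step).
comps = rng.normal(size=(M, D))
weights = [np.full(M, 1.0 / M) for _ in data]
for _ in range(50):
    A = np.zeros((M, D, D))
    b = np.zeros((M, D))
    for i, (X, y) in enumerate(data):
        err = y[:, None] - X @ comps.T          # (N, M) residual per component
        logp = np.log(weights[i]) - 0.5 * err ** 2
        logp -= logp.max(axis=1, keepdims=True) # stabilized softmax
        resp = np.exp(logp)
        resp /= resp.sum(axis=1, keepdims=True) # E-step responsibilities
        weights[i] = resp.mean(axis=0)          # client's mixture weights
        for m in range(M):
            A[m] += (X * resp[:, m:m + 1]).T @ X
            b[m] += (X * resp[:, m:m + 1]).T @ y
    comps = np.stack([np.linalg.solve(A[m] + 1e-6 * np.eye(D), b[m])
                      for m in range(M)])

print([np.round(w, 2) for w in weights])  # ~[0.8, 0.2] / [0.3, 0.7],
                                          # up to component relabeling
```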
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion, called Influence, to quantify this influence over the model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
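The Influence notion above quantifies how much a single client moves the learned parameters. As a conceptual baseline (not the paper's efficient estimator), the sketch below measures influence by brute-force leave-one-out retraining on a toy federated least-squares problem; the data, the one-shot averaging scheme, and the norm-based metric are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy federated linear regression: the influence of client i is taken to be
# the parameter shift when client i is left out of training, the brute-force
# baseline that cheap influence estimators try to approximate.
D, N = 5, 200
w_true = rng.normal(size=D)
clients = []
for scale in (0.1, 0.1, 0.1, 2.0):  # last client is a noisy outlier
    X = rng.normal(size=(N, D))
    y = X @ w_true + scale * rng.normal(size=N)
    clients.append((X, y))

def fit_global(datasets):
    """One-shot 'FedAvg' on least squares: average of local exact solutions."""
    sols = [np.linalg.lstsq(X, y, rcond=None)[0] for X, y in datasets]
    return np.mean(sols, axis=0)

w_all = fit_global(clients)
for i in range(len(clients)):
    w_loo = fit_global(clients[:i] + clients[i + 1:])
    print(f"client {i}: influence (param shift) = {np.linalg.norm(w_all - w_loo):.4f}")
```

On this toy data the noisy client shows a markedly larger parameter shift, which is the intuition the paper's estimator is designed to capture without retraining.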