Incentive Mechanism Design for Unbiased Federated Learning with
Randomized Client Participation
- URL: http://arxiv.org/abs/2304.07981v1
- Date: Mon, 17 Apr 2023 04:05:57 GMT
- Title: Incentive Mechanism Design for Unbiased Federated Learning with
Randomized Client Participation
- Authors: Bing Luo, Yutong Feng, Shiqiang Wang, Jianwei Huang, Leandros
Tassiulas
- Abstract summary: This paper proposes a game theoretic incentive mechanism for federated learning (FL) with randomized client participation.
We show that our mechanism achieves higher model performance for the server as well as higher profits for the clients.
- Score: 31.2017942327673
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An incentive mechanism is crucial for federated learning (FL) when
rational clients do not have the same interest in the global model as the server.
However, due to system heterogeneity and a limited budget, it is generally
impractical for the server to incentivize all clients to participate in all
training rounds (known as full participation). Existing FL incentive
mechanisms are typically designed to stimulate a fixed subset of clients
selected for their data quantity or system resources, so FL is performed
using only this subset throughout the entire training process, which yields a
biased model under data heterogeneity. This paper proposes a game-
theoretic incentive mechanism for FL with randomized client participation,
where the server adopts a customized pricing strategy that motivates different
clients to join with different participation levels (probabilities) for
obtaining an unbiased, high-performance model. Each client responds to the
server's monetary incentive by choosing its best participation level to
maximize its profit, which accounts not only for its incurred local cost but
also for its intrinsic value for the global model. To effectively evaluate
clients' contributions to model performance, we derive a new convergence
bound that
analytically predicts how clients' arbitrary participation levels and their
heterogeneous data affect the model performance. Our analysis of the
resulting non-convex optimization problem reveals that the intrinsic value
leads to the interesting possibility of bidirectional payment between the
server and
clients. Experimental results using real datasets on a hardware prototype
demonstrate the superiority of our mechanism in achieving higher model
performance for the server as well as higher profits for the clients.
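
To make the interplay between the server's pricing and a client's best response concrete, below is a minimal numerical sketch. The functional forms (linear payment and cost, logarithmic intrinsic value), the helper names best_response and unbiased_aggregate, and all parameter values are illustrative assumptions rather than the paper's actual formulation; the inverse-probability weighting shown is a standard way to keep the aggregated update unbiased under random participation and may differ in detail from the paper's scheme.

# A minimal sketch of the incentive game under assumed functional forms;
# nothing below is taken from the paper's actual equations.
import math

def best_response(price, cost, value, grid=1000):
    """Client's profit-maximizing participation level q in [0, 1].

    Assumed profit model:
      profit(q) = price * q              (server's payment for participating)
                - cost * q               (local computation/communication cost)
                + value * math.log(1+q)  (intrinsic value of a better model)
    """
    best_q, best_profit = 0.0, float("-inf")
    for k in range(grid + 1):
        q = k / grid
        profit = (price - cost) * q + value * math.log(1.0 + q)
        if profit > best_profit:
            best_q, best_profit = q, profit
    return best_q

def unbiased_aggregate(updates, probs):
    """Combine the updates of the clients that participated this round.

    Scaling client i's update by 1 / (q_i * n) makes the aggregate equal,
    in expectation, to the average update over all n clients: a standard
    debiasing trick under random participation (the paper's exact weighting
    may differ).
    """
    n = len(probs)
    dim = len(next(iter(updates.values())))
    agg = [0.0] * dim
    for i, delta in updates.items():
        for d in range(dim):
            agg[d] += delta[d] / (probs[i] * n)
    return agg

# A client with a high intrinsic value participates fully even at a negative
# price (it pays the server): the bidirectional-payment effect noted above.
print(best_response(price=-0.5, cost=2.0, value=10.0))  # -> 1.0
print(best_response(price=1.0, cost=2.0, value=0.5))    # -> 0.0

# One aggregation round in which clients 0 and 2 happened to participate.
probs = {0: 0.9, 1: 0.2, 2: 0.5}
updates = {0: [0.1, -0.2], 2: [0.4, 0.0]}
print(unbiased_aggregate(updates, probs))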
Related papers
- SFedCA: Credit Assignment-Based Active Client Selection Strategy for Spiking Federated Learning [15.256986486372407]
Spiking federated learning allows resource-constrained devices to train collaboratively with low power consumption and without exchanging local data.
Existing spiking federated learning methods employ a random selection approach for client aggregation, assuming unbiased client participation.
We propose a credit assignment-based active client selection strategy, SFedCA, to judiciously aggregate clients that contribute to balancing the global sample distribution.
arXiv Detail & Related papers (2024-06-18T01:56:22Z)
- IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content [15.620004060097155]
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism exhibits the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z)
- Federated Learning as a Network Effects Game [32.264180198812745]
Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.
In practice, clients may not benefit from joining in FL, especially in light of potential costs related to issues such as privacy and computation.
We are the first to model clients' behaviors in FL as a network effects game, where each client's benefit depends on other clients who also join the network.
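
The network-effects framing can be sketched in a few lines: each client joins when its benefit, which grows with the number of participants, exceeds its cost, and best responses are iterated to a fixed point. The concave benefit function and all names below are assumptions for illustration, not the model from this paper.

# A generic sketch of a participation game with network effects; the
# logarithmic benefit function is an assumption, not the paper's model.
import math

def equilibrium(costs, benefit_scale=1.5, max_iters=100):
    """Iterate best responses: client i joins iff benefit(#joiners) > cost_i."""
    # Optimistic start: beginning from nobody joined instead returns the
    # trivial equilibrium where no one participates, a known feature of
    # network effects games with multiple equilibria.
    joined = [True] * len(costs)
    for _ in range(max_iters):
        m = sum(joined)
        # Assumed concave network effect: more participants help,
        # with diminishing returns.
        new = [benefit_scale * math.log(1 + m) > c for c in costs]
        if new == joined:
            return joined  # fixed point: no client wants to switch
        joined = new
    return joined

# Low-cost clients stay in; the highest-cost client drops out.
print(equilibrium(costs=[0.1, 0.5, 1.0, 2.5]))  # [True, True, True, False]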
arXiv Detail & Related papers (2023-02-16T19:10:12Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction [76.26710990597498]
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on this key observation, we design an efficient client sampling mechanism, Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
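
As a rough illustration of what such a class-imbalance measure can look like, the sketch below scores a candidate group of clients by the sum of squared class fractions of their pooled labels, which is minimized when the grouped data is perfectly balanced. Treating this as Fed-CBS's exact definition is an assumption, and the homomorphic-encryption step the entry mentions is omitted.

# A sketch of a class-imbalance measure for a candidate client group: the sum
# of squared class fractions of the pooled labels. It attains its minimum,
# 1/num_classes, when the grouped data is perfectly class-balanced. Whether
# this matches Fed-CBS's exact definition is an assumption; the privacy-
# preserving homomorphic-encryption evaluation is omitted here.

def class_imbalance(client_label_counts, num_classes):
    """client_label_counts: one {class_id: sample_count} dict per client."""
    totals = [0] * num_classes
    for counts in client_label_counts:
        for c, n in counts.items():
            totals[c] += n
    total = sum(totals)
    return sum((n / total) ** 2 for n in totals)

# A sampler would prefer the group whose pooled labels are more balanced:
balanced = [{0: 50, 1: 50}, {0: 50, 1: 50}]
skewed = [{0: 95, 1: 5}, {0: 90, 1: 10}]
print(class_imbalance(balanced, num_classes=2))  # 0.5, the minimum for 2 classes
print(class_imbalance(skewed, num_classes=2))    # ~0.86; closer to 1 means more imbalanced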
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning [22.3101738137465]
Federated learning (FL) facilitates collaboration between a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data privacy-preserved machine learning paradigm, and realizes the collaborative model trained by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation [123.37082242750866]
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Prior-Free Auctions for the Demand Side of Federated Learning [0.76146285961466]
Federated learning allows distributed clients to learn a shared machine learning model without sharing their sensitive training data.
We propose a mechanism, FIPFA, to collect monetary contributions from self-interested clients.
We run experiments on the MNIST dataset to test the quality of clients' models under FIPFA, as well as the mechanism's incentive properties.
arXiv Detail & Related papers (2021-03-26T10:22:18Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over the model parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization [76.81546598985159]
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)