Incentive Mechanism Design for Unbiased Federated Learning with
Randomized Client Participation
- URL: http://arxiv.org/abs/2304.07981v1
- Date: Mon, 17 Apr 2023 04:05:57 GMT
- Authors: Bing Luo, Yutong Feng, Shiqiang Wang, Jianwei Huang, Leandros
Tassiulas
- Abstract summary: This paper proposes a game theoretic incentive mechanism for federated learning (FL) with randomized client participation.
We show that our mechanism achieves higher model performance for the server as well as higher profits for the clients.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: An incentive mechanism is crucial for federated learning (FL) when rational
clients do not have the same interests in the global model as the server.
However, due to system heterogeneity and limited budget, it is generally
impractical for the server to incentivize all clients to participate in all
training rounds (known as full participation). The existing FL incentive
mechanisms are typically designed by stimulating a fixed subset of clients
based on their data quantity or system resources. Hence, FL is performed using
only this subset of clients throughout the entire training process, leading to
a biased model because of data heterogeneity. This paper proposes a
game-theoretic incentive mechanism for FL with randomized client participation,
where the server adopts a customized pricing strategy that motivates different
clients to join with different participation levels (probabilities) for
obtaining an unbiased, high-performance model. Each client responds to the
server's monetary incentive by choosing its best participation level to
maximize its profit, based not only on the incurred local cost but also on its
intrinsic value for the global model. To effectively evaluate clients'
contribution to the model performance, we derive a new convergence bound which
analytically predicts how clients' arbitrary participation levels and their
heterogeneous data affect the model performance. By solving a non-convex
optimization problem, our analysis reveals that the intrinsic value leads to
the interesting possibility of bidirectional payment between the server and
clients. Experimental results using real datasets on a hardware prototype
demonstrate the superiority of our mechanism in achieving higher model
performance for the server as well as higher profits for the clients.
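
The abstract's core ideas are easy to illustrate. Below is a minimal, self-contained sketch (not the paper's actual implementation): it shows the standard inverse-probability scaling that keeps the aggregated update unbiased when client i joins a round only with probability q_i, plus a toy client profit function in which a linear payment and local cost trade off against a concave intrinsic-value term. The function names, the linear cost model, and the sqrt value term are illustrative assumptions.

```python
import numpy as np

def unbiased_aggregate(updates, data_weights, participated, q):
    """Aggregate one round of updates without participation bias.

    updates:      list of np.ndarray model updates, one per client
    data_weights: w_i (e.g. proportional to local data size), summing to 1
    participated: boolean mask, True if client i joined this round
    q:            per-client participation probabilities q_i in (0, 1]

    Scaling each received update by w_i / q_i makes the round's expectation
    equal sum_i w_i * update_i, the full-participation aggregate, because
    E[1{client i participates} / q_i] = 1.
    """
    agg = np.zeros_like(updates[0])
    for i, upd in enumerate(updates):
        if participated[i]:
            agg += (data_weights[i] / q[i]) * upd
    return agg

def client_profit(q_i, price_i, cost_i, value_i):
    """Toy profit for client i at participation level q_i.

    (price_i - cost_i) * q_i: payment minus local cost, linear in q_i.
    value_i * sqrt(q_i):      concave stand-in for the client's intrinsic
                              value of a model it helped to improve.
    """
    return (price_i - cost_i) * q_i + value_i * np.sqrt(q_i)

def best_response(price_i, cost_i, value_i):
    """Client's best participation level, found on a grid."""
    grid = np.linspace(0.01, 1.0, 100)
    profits = [client_profit(g, price_i, cost_i, value_i) for g in grid]
    return grid[int(np.argmax(profits))]

# Even with zero payment and positive cost, enough intrinsic value makes a
# client participate fully.
print(best_response(price_i=0.0, cost_i=0.5, value_i=1.0))  # -> 1.0
```

The last line hints at the bidirectional-payment effect mentioned in the abstract: a client with sufficient intrinsic value participates even with no payment, so the server could in principle charge, rather than pay, such clients.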
Related papers
- Client-Centric Federated Adaptive Optimization
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- Incentive-Compatible Federated Learning with Stackelberg Game Modeling
We introduce FLamma, a novel Federated Learning framework based on an adaptive gamma-based Stackelberg game.
Our approach allows the server to act as the leader, dynamically adjusting a decay factor while clients, acting as followers, optimally select their number of local epochs to maximize their utility.
Over time, the server incrementally balances client influence, initially rewarding higher-contributing clients and gradually leveling their impact, driving the system toward a Stackelberg Equilibrium.
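
The summary above gives the shape of the game but not its exact utilities; the sketch below is a generic leader-follower loop in that spirit, where the server anneals a reward exponent gamma and each client best-responds with a local-epoch count. The utility form, decay schedule, and all parameters are assumptions for illustration, not FLamma's actual update rules.

```python
import numpy as np

def follower_best_epochs(gamma, reward_rate, epoch_cost, max_epochs=20):
    """Follower (client) picks the local-epoch count e that maximizes a
    hypothetical utility: reward_rate * e**gamma - epoch_cost * e."""
    epochs = np.arange(1, max_epochs + 1)
    utility = reward_rate * epochs**gamma - epoch_cost * epochs
    return int(epochs[np.argmax(utility)])

def run_leader_follower(n_rounds=5, gamma0=1.0, decay=0.7):
    """Leader (server) anneals gamma each round; heterogeneous followers
    re-optimize their local-epoch counts in response."""
    clients = [(1.0, 0.05), (1.0, 0.15), (1.0, 0.30)]  # (reward, cost) pairs
    gamma = gamma0
    for t in range(n_rounds):
        plan = [follower_best_epochs(gamma, r, c) for r, c in clients]
        print(f"round {t}: gamma={gamma:.2f}, local epochs={plan}")
        gamma *= decay  # decaying gamma gradually levels client influence

run_leader_follower()
```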
arXiv Detail & Related papers (2025-01-05T21:04:41Z)
- TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning
We focus on a semi-decentralized FL framework where edge servers and clients train a shared global model using unreliable intra-cluster model aggregation and inter-cluster model consensus.
We propose a TRust-Aware clIent scheduLing mechanism called TRAIL, which assesses client states and contributions.
Experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
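
TRAIL's actual state-assessment model is not described in this summary; a generic trust-aware scheduler in the same spirit might track each client's observed contribution with an exponential moving average and pick the most trusted clients each round, as in this illustrative sketch (the trust signal and all parameters are assumptions):

```python
import numpy as np

class TrustAwareScheduler:
    """Generic trust-aware client scheduler (illustrative, not TRAIL itself).

    Trust is an exponential moving average of a per-round contribution
    signal in [0, 1], e.g. how consistent a client's update was with the
    aggregate under unreliable links.
    """

    def __init__(self, n_clients, alpha=0.3):
        self.trust = np.full(n_clients, 0.5)  # neutral prior for everyone
        self.alpha = alpha

    def update(self, client_id, contribution):
        """Blend the newest contribution signal into the trust estimate."""
        t = self.trust[client_id]
        self.trust[client_id] = (1 - self.alpha) * t + self.alpha * contribution

    def schedule(self, k):
        """Pick the k currently most-trusted clients for the next round."""
        return np.argsort(-self.trust)[:k].tolist()

sched = TrustAwareScheduler(n_clients=5)
sched.update(2, contribution=0.9)  # client 2 sent a consistently helpful update
sched.update(4, contribution=0.1)  # client 4's update looked unreliable
print(sched.schedule(k=3))  # -> [2, 0, 1]
```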
arXiv Detail & Related papers (2024-12-16T05:02:50Z)
- IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism exhibits the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z)
- FL Games: A Federated Learning Framework for Distribution Shifts
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Fed-CBS: A Heterogeneity-Aware Client Sampling Mechanism for Federated Learning via Class-Imbalance Reduction
We show that the class-imbalance of the grouped data from randomly selected clients can lead to significant performance degradation.
Based on our key observation, we design an efficient client sampling mechanism, i.e., Federated Class-balanced Sampling (Fed-CBS).
In particular, we propose a measure of class-imbalance and then employ homomorphic encryption to derive this measure in a privacy-preserving way.
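
The measure itself is not defined in this summary; one simple, plausible instantiation is the squared distance between the grouped label distribution of the selected clients and the uniform distribution, sketched below in the clear (the homomorphic-encryption step that Fed-CBS uses to compute it privately is omitted):

```python
import numpy as np

def grouped_label_distribution(client_label_counts, selected):
    """Pool label counts of the selected clients and normalize."""
    pooled = np.sum([client_label_counts[i] for i in selected], axis=0)
    return pooled / pooled.sum()

def class_imbalance(p):
    """Squared L2 distance from the uniform distribution over C classes;
    0 means the grouped data is perfectly class-balanced."""
    C = len(p)
    return float(np.sum((p - 1.0 / C) ** 2))

# Toy example: 3 clients, 4 classes. Selecting clients with complementary
# label skews yields a lower (better) imbalance than a naive pick.
counts = np.array([
    [90, 5, 3, 2],   # client 0: mostly class 0
    [2, 3, 5, 90],   # client 1: mostly class 3
    [80, 10, 5, 5],  # client 2: also skewed to class 0
])
print(class_imbalance(grouped_label_distribution(counts, [0, 1])))  # ~0.18
print(class_imbalance(grouped_label_distribution(counts, [0, 2])))  # ~0.48
```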
arXiv Detail & Related papers (2022-09-30T05:42:56Z)
- To Federate or Not To Federate: Incentivizing Client Participation in Federated Learning
Federated learning (FL) facilitates collaboration among a group of clients who seek to train a common machine learning model.
In this paper, we propose an algorithm called IncFL that explicitly maximizes the fraction of clients who are incentivized to use the global model.
arXiv Detail & Related papers (2022-05-30T04:03:31Z)
- Towards Fair Federated Learning with Zero-Shot Data Augmentation
Federated learning has emerged as an important distributed learning paradigm, where a server aggregates a global model from many client-trained models while having no access to the client data.
We propose a novel federated learning system that employs zero-shot data augmentation on under-represented data to mitigate statistical heterogeneity and encourage more uniform accuracy performance across clients in federated networks.
We study two variants of this scheme, Fed-ZDAC (federated learning with zero-shot data augmentation at the clients) and Fed-ZDAS (federated learning with zero-shot data augmentation at the server).
arXiv Detail & Related papers (2021-04-27T18:23:54Z)
- Toward Understanding the Influence of Individual Clients in Federated Learning
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over model parameters, and propose an effective and efficient method to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
- Personalized Federated Learning with First Order Model Optimization
We propose an alternative to federated learning, where each client federates with other relevant clients to obtain a stronger model per client-specific objectives.
We do not assume knowledge of underlying data distributions or client similarities, and allow each client to optimize for arbitrary target distributions of interest.
Our method outperforms existing alternatives, while also enabling new features for personalized FL such as transfer outside of local data distributions.
arXiv Detail & Related papers (2020-12-15T19:30:29Z)
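
As a rough sketch of this idea (not the paper's exact first-order weighting rule), a client can weight the other clients' models by how much each one reduces the loss on its own validation split, ignoring models that do not help:

```python
import numpy as np

def personalized_combine(own_params, other_params, val_loss):
    """Combine other clients' models in proportion to how much each one
    improves this client's validation loss over its own model.

    own_params:   np.ndarray, the client's current parameters
    other_params: list of np.ndarray models received from other clients
    val_loss:     callable params -> float, loss on this client's local
                  validation split (the client-specific objective)
    """
    base = val_loss(own_params)
    # Positive weight only for models that actually help this client.
    gains = np.array([max(base - val_loss(p), 0.0) for p in other_params])
    if gains.sum() == 0.0:
        return own_params  # nobody helps; keep the local model
    weights = gains / gains.sum()
    delta = sum(w * (p - own_params) for w, p in zip(weights, other_params))
    return own_params + delta

# Toy check with scalar "models" and a quadratic validation loss around 2.0:
loss = lambda p: float((p - 2.0) ** 2)
me = np.array(0.0)
others = [np.array(1.5), np.array(-3.0)]
print(personalized_combine(me, others, loss))  # moves to 1.5, ignores -3.0
```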