DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information
- URL: http://arxiv.org/abs/2505.05842v1
- Date: Fri, 09 May 2025 07:24:21 GMT
- Title: DaringFed: A Dynamic Bayesian Persuasion Pricing for Online Federated Learning under Two-sided Incomplete Information
- Authors: Yun Xin, Jianfeng Lu, Shuqin Cao, Gang Li, Haozhao Wang, Guanghui Wen
- Abstract summary: Online Federated Learning (OFL) is a real-time learning paradigm that sequentially executes parameter aggregation for each randomly arriving client. To motivate clients to participate in OFL, it is crucial to offer appropriate incentives to offset the training resource consumption. We design a novel Dynamic Bayesian persuasion pricing for online Federated learning (DaringFed) under TII.
- Score: 13.386578542329545
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online Federated Learning (OFL) is a real-time learning paradigm that sequentially executes parameter aggregation immediately for each randomly arriving client. To motivate clients to participate in OFL, it is crucial to offer appropriate incentives to offset the training resource consumption. However, the design of incentive mechanisms in OFL is constrained by the dynamic variability of Two-sided Incomplete Information (TII) concerning resources: the server is unaware of the clients' dynamically changing computational resources, while clients lack knowledge of the real-time communication resources allocated by the server. To incentivize clients to participate in training by offering dynamic rewards to each arriving client, we design a novel Dynamic Bayesian persuasion pricing for online Federated learning (DaringFed) under TII. Specifically, we begin by formulating the interaction between the server and clients as a dynamic signaling and pricing allocation problem within a Bayesian persuasion game, and then demonstrate the existence of a unique Bayesian persuasion Nash equilibrium. By deriving the optimal design of DaringFed under one-sided incomplete information, we further analyze the approximately optimal design of DaringFed with a specific bound under TII. Finally, extensive evaluations conducted on real datasets demonstrate that DaringFed improves accuracy and convergence speed by 16.99%, while experiments with synthetic datasets validate the convergence of the estimated unknown values and the effectiveness of DaringFed in improving the server's utility by up to 12.6%.
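The sketch below illustrates, at a toy level, the kind of interaction loop the abstract describes: the server signals something about its privately allocated communication resources and posts a reward, each sequentially arriving client privately weighs that reward against its own computational cost, and the server updates a Bayesian belief about the unknown cost distribution from observed accept/reject decisions. The class names, the Beta-Bernoulli belief model, and the pricing heuristic are illustrative assumptions, not the mechanism or equilibrium derived in the paper.

```python
# Hypothetical sketch of a dynamic persuasion-and-pricing loop under
# two-sided incomplete information; all modeling choices are assumptions.
import random


class Server:
    def __init__(self, alpha=1.0, beta=1.0):
        # Beta prior over the probability that an arriving client's private
        # cost falls below the offered reward (simple conjugate assumption).
        self.alpha, self.beta = alpha, beta

    def post_offer(self, comm_bandwidth):
        # Signal: a noisy disclosure of the bandwidth the server allocates.
        signal = comm_bandwidth * random.uniform(0.8, 1.2)
        # Price against the current belief: raise the reward when acceptance
        # has been rare so far.
        accept_prob = self.alpha / (self.alpha + self.beta)
        reward = 0.5 + 0.5 * (1.0 - accept_prob)
        return signal, reward

    def observe(self, accepted):
        # Posterior update from the client's observed participation decision.
        if accepted:
            self.alpha += 1
        else:
            self.beta += 1


class Client:
    def __init__(self):
        # Private, dynamically changing computational cost (unknown to server).
        self.cost = random.uniform(0.2, 1.2)

    def decide(self, signal, reward):
        # Participate if the reward, discounted by the signaled bandwidth,
        # covers the private training cost.
        return reward * min(signal, 1.0) >= self.cost


if __name__ == "__main__":
    server = Server()
    participants = 0
    for t in range(1000):  # clients arrive sequentially (online setting)
        client = Client()
        signal, reward = server.post_offer(comm_bandwidth=random.uniform(0.5, 1.0))
        accepted = client.decide(signal, reward)
        server.observe(accepted)
        participants += accepted
    print(f"participation rate: {participants / 1000:.2f}")
```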
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private. We propose Federated-Centric Adaptive Optimization, which is a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z) - TRAIL: Trust-Aware Client Scheduling for Semi-Decentralized Federated Learning [13.144501509175985]
We propose a TRust-Aware clIent scheduLing mechanism called TRAIL, which assesses client states and contributions. We focus on a semi-decentralized FL framework where edge servers and clients train a shared global model using unreliable intra-cluster model aggregation and inter-cluster model consensus. Experiments conducted on real-world datasets demonstrate that TRAIL outperforms state-of-the-art baselines, achieving an improvement of 8.7% in test accuracy and a reduction of 15.3% in training loss.
arXiv Detail & Related papers (2024-12-16T05:02:50Z) - A Potential Game Perspective in Federated Learning [7.066313314590149]
Federated learning (FL) is an emerging paradigm for training machine learning models across distributed clients.
We propose a potential game framework where each client's payoff is determined by their individual efforts and the rewards provided by the server.
arXiv Detail & Related papers (2024-11-18T18:06:44Z) - IMFL-AIGC: Incentive Mechanism Design for Federated Learning Empowered by Artificial Intelligence Generated Content [15.620004060097155]
Federated learning (FL) has emerged as a promising paradigm that enables clients to collaboratively train a shared global model without uploading their local data.
We propose a data quality-aware incentive mechanism to encourage clients' participation.
Our proposed mechanism exhibits the highest training accuracy and reduces the server's cost by up to 53.34% on real-world datasets.
arXiv Detail & Related papers (2024-06-12T07:47:22Z) - FedCAda: Adaptive Client-Side Optimization for Accelerated and Stable Federated Learning [57.38427653043984]
Federated learning (FL) has emerged as a prominent approach for collaborative training of machine learning models across distributed clients.
We introduce FedCAda, an innovative federated client adaptive algorithm designed to tackle this challenge.
We demonstrate that FedCAda outperforms the state-of-the-art methods in terms of adaptability, convergence, stability, and overall performance.
arXiv Detail & Related papers (2024-05-20T06:12:33Z) - Incentive Mechanism Design for Unbiased Federated Learning with Randomized Client Participation [31.2017942327673]
This paper proposes a game theoretic incentive mechanism for federated learning (FL) with randomized client participation.
We show that our mechanism achieves higher model performance for the server as well as higher profits for the clients.
arXiv Detail & Related papers (2023-04-17T04:05:57Z) - Federated Learning as a Network Effects Game [32.264180198812745]
Federated Learning (FL) aims to foster collaboration among a population of clients to improve the accuracy of machine learning without directly sharing local data.
In practice, clients may not benefit from joining in FL, especially in light of potential costs related to issues such as privacy and computation.
We are the first to model clients' behaviors in FL as a network effects game, where each client's benefit depends on other clients who also join the network.
arXiv Detail & Related papers (2023-02-16T19:10:12Z) - Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z) - Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm AdaFL to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z) - A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a data-privacy-preserving machine learning paradigm and realizes collaborative model training by distributed clients.
To accomplish an FL task, the task publisher needs to pay financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients due to the fact that the task is privately trained by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z) - Toward Understanding the Influence of Individual Clients in Federated Learning [52.07734799278535]
Federated learning allows clients to jointly train a global model without sending their private data to a central server.
We define a new notion called Influence, quantify this influence over parameters, and propose an effective and efficient model to estimate this metric.
arXiv Detail & Related papers (2020-12-20T14:34:36Z)
This list is automatically generated from the titles and abstracts of the papers on this site.