BARA: Efficient Incentive Mechanism with Online Reward Budget Allocation
in Cross-Silo Federated Learning
- URL: http://arxiv.org/abs/2305.05221v2
- Date: Tue, 16 May 2023 02:33:51 GMT
- Title: BARA: Efficient Incentive Mechanism with Online Reward Budget Allocation
in Cross-Silo Federated Learning
- Authors: Yunchao Yang, Yipeng Zhou, Miao Hu, Di Wu, Quan Z. Sheng
- Abstract summary: Federated learning (FL) is a prospective distributed machine learning framework that can preserve data privacy.
In cross-silo FL, an incentive mechanism is indispensable for motivating data owners to contribute their models to FL training.
We propose BARA, an online reward budget allocation algorithm based on Bayesian optimization.
- Score: 25.596968764427043
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a prospective distributed machine learning
framework that can preserve data privacy. In particular, cross-silo FL can
complete model training by making isolated data islands of different
organizations collaborate with a parameter server (PS) via exchanging model
parameters for multiple communication rounds. In cross-silo FL, an incentive
mechanism is indispensable for motivating data owners to contribute their
models to FL training. However, how to allocate the reward budget among
different rounds is an essential but complicated problem largely overlooked by
existing works. The challenge lies in the opaque feedback between reward
budget allocation and the resulting model utility improvement in FL, which
makes finding the optimal allocation difficult. To address this problem, we
design an online reward budget allocation algorithm using Bayesian optimization
named BARA (Budget Allocation for Reverse Auction). Specifically, BARA can
model the complicated relationship
between reward budget allocation and final model accuracy in FL based on
historical training records so that the reward budget allocated to each
communication round is dynamically optimized to maximize the final model
utility. We further incorporate the BARA algorithm into reverse auction-based
incentive mechanisms to illustrate its effectiveness. Extensive experiments are
conducted on real datasets to demonstrate that BARA significantly outperforms
competitive baselines by improving model utility with the same amount of reward
budget.
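To make the mechanism concrete, the following is a minimal sketch of the idea, not the authors' implementation: a Gaussian process surrogate is fit on historical (per-round budget, observed utility gain) pairs, and the next round's budget is chosen by an upper-confidence-bound rule. The function observed_utility_gain, the UCB acquisition, and all numeric settings are illustrative assumptions; in BARA itself the feedback would come from the actual FL training records.

```python
# Minimal sketch (not the authors' code) of online reward budget allocation
# via Bayesian optimization: model budget -> utility gain with a Gaussian
# process and pick each round's budget with a UCB acquisition rule.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def observed_utility_gain(budget: float) -> float:
    """Hypothetical stand-in for one FL round: in practice this would be
    the measured model utility improvement after paying `budget`."""
    return float(np.log1p(budget) + rng.normal(scale=0.05))

total_budget, rounds = 100.0, 10
spent, hist_b, hist_u = 0.0, [], []
gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)

for t in range(rounds):
    remaining = total_budget - spent
    # Candidate budgets: up to twice an even split of what is left,
    # but never more than the remaining budget.
    upper = min(remaining, 2.0 * remaining / max(rounds - t, 1))
    candidates = np.linspace(0.01, max(upper, 0.01), 50)
    if len(hist_b) >= 2:
        gp.fit(np.array(hist_b).reshape(-1, 1), np.array(hist_u))
        mu, sigma = gp.predict(candidates.reshape(-1, 1), return_std=True)
        b = candidates[np.argmax(mu + sigma)]  # UCB: exploit + explore
    else:
        b = candidates[rng.integers(len(candidates))]  # cold-start: random
    u = observed_utility_gain(b)
    hist_b.append(b); hist_u.append(u); spent += b
    print(f"round {t}: budget={b:.2f}, utility gain={u:.3f}")
```

The surrogate-plus-acquisition loop is the standard Bayesian optimization pattern the abstract alludes to; the reverse-auction side (how a round's budget is split among bidding clients) is a separate component and is omitted here.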
Related papers
- Enhancing Spectrum Efficiency in 6G Satellite Networks: A GAIL-Powered Policy Learning via Asynchronous Federated Inverse Reinforcement Learning [67.95280175998792]
A novel generative adversarial imitation learning (GAIL)-powered policy learning approach is proposed for optimizing beamforming, spectrum allocation, and remote user equipment (RUE) association in 6G satellite networks.
We employ inverse RL (IRL) to automatically learn reward functions without manual tuning.
We show that the proposed MA-AL method outperforms traditional RL approaches, achieving a 14.6% improvement in convergence and reward value.
arXiv Detail & Related papers (2024-09-27T13:05:02Z)
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Cost-Sensitive Multi-Fidelity Bayesian Optimization with Transfer of Learning Curve Extrapolation [55.75188191403343]
We introduce a utility function, predefined by each user, that describes the trade-off between the cost and performance of BO.
We validate our algorithm on various LC datasets and find that it outperforms all previous multi-fidelity BO and transfer-BO baselines we consider.
arXiv Detail & Related papers (2024-05-28T07:38:39Z)
- FedHPL: Efficient Heterogeneous Federated Learning with Prompt Tuning and Logit Distillation [32.305134875959226]
Federated learning (FL) is a privacy-preserving paradigm that enables distributed clients to collaboratively train models with a central server.
We propose FedHPL, a parameter-efficient unified federated learning framework for heterogeneous settings.
We show that our framework outperforms state-of-the-art FL approaches, with lower overhead and fewer training rounds.
arXiv Detail & Related papers (2024-05-27T15:25:32Z)
- DPBalance: Efficient and Fair Privacy Budget Scheduling for Federated Learning as a Service [15.94482624965024]
Federated learning (FL) has emerged as a prevalent distributed machine learning scheme.
We propose DPBalance, a novel privacy budget scheduling mechanism that jointly optimizes both efficiency and fairness.
We show that DPBalance achieves an average efficiency improvement of 1.44× to 3.49×, and an average fairness improvement of 1.37× to 24.32×.
arXiv Detail & Related papers (2024-02-15T05:19:53Z)
- Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks [21.76836812021954]
This paper proposes learner referral aided federated client selection (LRef-FedCS), together with communications resource scheduling and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach achieves a good balance between high global accuracy and low cost.
arXiv Detail & Related papers (2023-07-19T13:33:43Z)
- FedDM: Iterative Distribution Matching for Communication-Efficient Federated Learning [87.08902493524556]
Federated learning (FL) has recently attracted increasing attention from academia and industry.
We propose FedDM to build the global training objective from multiple local surrogate functions.
In detail, we construct synthetic sets of data on each client to locally match the loss landscape of the original data.
arXiv Detail & Related papers (2022-07-20T04:55:18Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when the client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- A Contract Theory based Incentive Mechanism for Federated Learning [52.24418084256517]
Federated learning (FL) serves as a privacy-preserving machine learning paradigm that realizes collaborative model training by distributed clients.
To accomplish an FL task, the task publisher pays financial incentives to the FL server, and the FL server offloads the task to the contributing FL clients.
It is challenging to design proper incentives for the FL clients because the task is trained privately by the clients.
arXiv Detail & Related papers (2021-08-12T07:30:42Z)
- Collaborative Machine Learning with Incentive-Aware Model Rewards [32.43927226170119]
Collaborative machine learning (ML) is an appealing paradigm to build high-quality ML models by training on the aggregated data from many parties.
These parties are only willing to share their data when given enough incentives, such as a guaranteed fair reward based on their contributions.
This paper proposes to value a party's reward based on the Shapley value and the information gain on model parameters given its data; a minimal illustrative sketch follows this list.
arXiv Detail & Related papers (2020-10-24T06:20:55Z)
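As referenced in the last entry above, here is a minimal sketch of Shapley-value-based reward splitting. It is not the paper's implementation: the three parties and the coalition utility table are hypothetical stand-ins for whatever valuation (for instance, validation accuracy of a model trained on a coalition's pooled data, or the paper's information-gain measure) a real mechanism would use.

```python
# Minimal sketch of Shapley-value reward shares (hypothetical utilities).
from itertools import permutations

def shapley(parties, v):
    """Exact Shapley values: average each party's marginal contribution
    to the coalition utility v(S) over all orderings of the parties.
    Enumerating orderings is only feasible for a handful of parties."""
    shares = {p: 0.0 for p in parties}
    orders = list(permutations(parties))
    for order in orders:
        coalition = set()
        for p in order:
            before = v(frozenset(coalition))
            coalition.add(p)
            shares[p] += v(frozenset(coalition)) - before
    return {p: s / len(orders) for p, s in shares.items()}

# Hypothetical coalition utilities, e.g. accuracy reached on pooled data.
utility = {
    frozenset(): 0.0,
    frozenset("A"): 0.60, frozenset("B"): 0.50, frozenset("C"): 0.40,
    frozenset("AB"): 0.80, frozenset("AC"): 0.70, frozenset("BC"): 0.65,
    frozenset("ABC"): 0.90,
}
print(shapley("ABC", lambda s: utility[s]))  # shares sum to 0.90
```

Because the shares sum to the grand-coalition utility and reward symmetric contributions equally, this valuation offers a fairness-grounded basis for the per-party rewards that such incentive mechanisms build on.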
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.