Incentivized Communication for Federated Bandits
- URL: http://arxiv.org/abs/2309.11702v2
- Date: Mon, 23 Oct 2023 04:49:38 GMT
- Title: Incentivized Communication for Federated Bandits
- Authors: Zhepei Wei, Chuanhao Li, Haifeng Xu, Hongning Wang
- Abstract summary: We introduce an incentivized communication problem for federated bandits, where the server shall motivate clients to share data by providing incentives.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
- Score: 67.4682056391551
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Most existing works on federated bandits take it for granted that all clients
are altruistic about sharing their data with the server for the collective good
whenever needed. Despite their compelling theoretical guarantee on performance
and communication efficiency, this assumption is overly idealistic and
oftentimes violated in practice, especially when the algorithm is operated over
self-interested clients, who are reluctant to share data without explicit
benefits. Neglecting such self-interested behavior can significantly degrade
learning efficiency and even the practical operability of federated bandit
learning. In light of this, we aim to spark new insights into this
under-explored research area by formally introducing an incentivized
communication problem for federated bandits, where the server shall motivate
clients to share data by providing incentives. Without loss of generality, we
instantiate this bandit problem with the contextual linear setting and propose
the first incentivized communication protocol, namely, Inc-FedUCB, that
achieves near-optimal regret with provable communication and incentive cost
guarantees. Extensive experiments on both synthetic and real-world
datasets further validate the effectiveness of the proposed method across
various environments.
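To make the setting concrete, the sketch below implements incentivized data sharing in a federated linear contextual bandit. It is a minimal illustration under assumed names and rules: the Client/Server classes, the flat per-round budget, and the naive take-it-or-leave-it offer are all assumptions, not the paper's Inc-FedUCB protocol, whose incentive design is what yields the stated regret and cost guarantees.

```python
import numpy as np

class Client:
    """A self-interested client holding local linear-bandit sufficient statistics."""

    def __init__(self, d, sharing_cost):
        self.A = np.zeros((d, d))  # sum of x x^T over local observations
        self.b = np.zeros(d)       # sum of r * x over local observations
        self.cost = sharing_cost   # minimum payment this client demands to share

    def observe(self, x, r):
        self.A += np.outer(x, x)
        self.b += r * x

    def share_if_paid(self, incentive):
        # A self-interested client shares its statistics only when compensated.
        if incentive >= self.cost:
            return self.A.copy(), self.b.copy()
        return None

class Server:
    """Aggregates shared statistics and pays incentives (naive offer rule)."""

    def __init__(self, d, lam=1.0, budget_per_round=1.0):
        self.A = lam * np.eye(d)  # ridge-regularized design matrix
        self.b = np.zeros(d)
        self.budget = budget_per_round

    def solicit(self, clients):
        # Offer each client a payment from the remaining budget and collect
        # statistics from those who accept. (A real protocol would instead
        # weigh each payment against the uncertainty reduction it buys.)
        paid = 0.0
        for c in clients:
            offer = min(c.cost, self.budget - paid)
            shared = c.share_if_paid(offer)
            if shared is not None:
                A_i, b_i = shared
                self.A += A_i
                self.b += b_i
                paid += offer
        return paid

    def ucb_action(self, arms, alpha=1.0):
        # Standard LinUCB arm selection on the aggregated statistics.
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        scores = [x @ theta + alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(scores))
```

The sketch exposes the core tension: the server should only pay for data that meaningfully sharpens its estimate, trading incentive cost against learning benefit.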
Related papers
- Federated Linear Contextual Bandits with Heterogeneous Clients [44.20391610280271]
Federated bandit learning is a promising framework for private, efficient, and decentralized online learning.
We introduce a new approach for federated bandits with heterogeneous clients, which clusters clients for collaborative bandit learning in the federated setting.
Our proposed algorithm achieves non-trivial sub-linear regret and communication cost for all clients under the federated communication protocol.
arXiv Detail & Related papers (2024-02-29T20:39:31Z)
- Incentivized Truthful Communication for Federated Bandits [61.759855777522255]
We propose an incentive-compatible (i.e., truthful) communication protocol, named Truth-FedBan.
We show that Truth-FedBan still guarantees sub-linear regret and communication cost without any overhead.
arXiv Detail & Related papers (2024-02-07T00:23:20Z)
- Pure Exploration in Asynchronous Federated Bandits [57.02106627533004]
We study the federated pure exploration problem for multi-armed bandits and linear bandits, where $M$ agents cooperatively identify the best arm by communicating with a central server.
We propose the first asynchronous multi-armed bandit and linear bandit algorithms for pure exploration with fixed confidence.
arXiv Detail & Related papers (2023-10-17T06:04:00Z)
- Momentum Benefits Non-IID Federated Learning Simply and Provably [22.800862422479913]
Federated learning is a powerful paradigm for large-scale machine learning, but non-IID client data poses well-known challenges.
FedAvg and SCAFFOLD are two prominent algorithms that address these challenges.
This paper explores the use of momentum to enhance the performance of FedAvg and SCAFFOLD; a generic server-momentum sketch appears after this list.
arXiv Detail & Related papers (2023-06-28T18:52:27Z)
- Incentivizing Honesty among Competitors in Collaborative Learning and Optimization [5.4619385369457225]
Collaborative learning techniques have the potential to enable machine learning models that are superior to models trained on a single entity's data.
In many cases, potential participants in such collaborative schemes are competitors on a downstream task.
arXiv Detail & Related papers (2023-05-25T17:28:41Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Mechanisms that Incentivize Data Sharing in Federated Learning [90.74337749137432]
We show how a naive scheme leads to catastrophic levels of free-riding where the benefits of data sharing are completely eroded.
We then introduce accuracy-shaping based mechanisms to maximize the amount of data generated by each agent.
arXiv Detail & Related papers (2022-07-10T22:36:52Z)
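As referenced in the momentum entry above, here is a minimal sketch of server-side momentum applied to FedAvg aggregation, in the style of the well-known FedAvgM update. Whether this matches the exact scheme analyzed in that paper is an assumption; the function and parameter names are illustrative.

```python
import numpy as np

def fedavg_with_server_momentum(global_w, client_deltas, velocity, lr=1.0, beta=0.9):
    """One FedAvg aggregation round with server-side momentum (FedAvgM-style).

    global_w:      current global model parameters (1-D array)
    client_deltas: list of local-update vectors (local model minus global)
    velocity:      running momentum buffer, same shape as global_w
    """
    avg_delta = np.mean(client_deltas, axis=0)  # aggregate the clients' pseudo-gradient
    velocity = beta * velocity + avg_delta      # accumulate past aggregated updates
    new_w = global_w + lr * velocity            # apply the smoothed update
    return new_w, velocity
```

The intuition is that the momentum buffer averages updates across rounds, smoothing out round-to-round variation that non-IID client data introduces.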