Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits
- URL: http://arxiv.org/abs/2110.01463v1
- Date: Mon, 4 Oct 2021 14:01:32 GMT
- Title: Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits
- Authors: Chuanhao Li and Hongning Wang
- Abstract summary: We propose a general framework with asynchronous model update and communication for a collection of homogeneous clients and for heterogeneous clients, respectively.
A rigorous theoretical analysis of the regret and communication cost under this distributed learning framework is provided.
- Score: 35.47147821038291
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Linear contextual bandit is a popular online learning problem. It has been
mostly studied in centralized learning settings. With the surging demand for
large-scale decentralized model learning, e.g., federated learning, how to
retain regret minimization while reducing communication cost becomes an open
challenge. In this paper, we study the linear contextual bandit in a federated
learning setting. We propose a general framework with asynchronous model update
and communication for a collection of homogeneous clients and for heterogeneous
clients, respectively. A rigorous theoretical analysis of the regret and
communication cost under this distributed learning framework is provided, and
extensive empirical evaluations demonstrate the effectiveness of our solution.
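To make the framework described above concrete, here is a minimal sketch of a LinUCB-style client with an event-triggered upload rule, which is one common way to realize asynchronous model updates with bounded communication. The class name, the determinant-ratio trigger, and the threshold gamma are illustrative assumptions, not the exact algorithm of this paper.

    # Illustrative sketch only: a LinUCB-style client that keeps local
    # sufficient statistics and uploads a delta when a change test fires.
    # The trigger rule and all names are assumptions for exposition.
    import numpy as np

    class AsyncLinUCBClient:
        def __init__(self, dim, alpha=1.0, lam=1.0, gamma=1.5):
            self.alpha = alpha          # exploration weight
            self.gamma = gamma          # communication trigger threshold (assumed)
            self.A = lam * np.eye(dim)  # statistics last synced with the server
            self.b = np.zeros(dim)
            self.dA = np.zeros((dim, dim))  # local updates not yet uploaded
            self.db = np.zeros(dim)

        def choose(self, arms):
            """Pick the arm with the largest upper confidence bound."""
            A_inv = np.linalg.inv(self.A + self.dA)
            theta = A_inv @ (self.b + self.db)
            scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
            return int(np.argmax(scores))

        def update(self, x, reward):
            """Record one observation locally; return a delta if a sync is due."""
            self.dA += np.outer(x, x)
            self.db += reward * x
            # Assumed trigger: upload once local information has grown enough.
            growth = np.linalg.det(self.A + self.dA) / np.linalg.det(self.A)
            if growth > self.gamma:
                delta_A, self.dA = self.dA, np.zeros_like(self.dA)
                delta_b, self.db = self.db, np.zeros_like(self.db)
                return delta_A, delta_b  # shipped to the server asynchronously
            return None

        def receive(self, A_global, b_global):
            """Overwrite the synced statistics with the server's aggregate."""
            self.A, self.b = A_global.copy(), b_global.copy()

Under this kind of design, a server simply folds received deltas into its global statistics and pushes the aggregate back whenever a client checks in, so no client ever blocks on another, which mirrors the asynchrony emphasized in the abstract.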
Related papers
- Accelerated Stochastic ExtraGradient: Mixing Hessian and Gradient Similarity to Reduce Communication in Distributed and Federated Learning [50.382793324572845]
Distributed computing involves communication between devices, which requires solving two key problems: efficiency and privacy.
In this paper, we analyze a new method that incorporates the ideas of data similarity and client sampling.
To address privacy concerns, we apply the technique of adding noise and analyze its impact on the convergence of the proposed method.
arXiv Detail & Related papers (2024-09-22T00:49:10Z)
- Federated Linear Contextual Bandits with Heterogeneous Clients [44.20391610280271]
Federated bandit learning is a promising framework for private, efficient, and decentralized online learning.
We introduce a new approach for federated bandits for heterogeneous clients, which clusters clients for collaborative bandit learning under the federated learning setting.
Our proposed algorithm achieves non-trivial sub-linear regret and communication cost for all clients under the communication protocol of federated learning.
arXiv Detail & Related papers (2024-02-29T20:39:31Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Communication Efficient Distributed Learning for Kernelized Contextual Bandits [58.78878127799718]
We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting.
We consider non-linear reward mappings by letting agents collaboratively search in a reproducing kernel Hilbert space.
We rigorously prove that our algorithm attains a sub-linear rate in both regret and communication cost.
arXiv Detail & Related papers (2022-06-10T01:39:15Z)
- Communication Efficient Federated Learning for Generalized Linear Bandits [39.1899551748345]
We study generalized linear bandit models under a federated learning setting.
We propose a communication-efficient solution framework that employs online regression for local updates and offline regression for global updates.
Our algorithm attains a sub-linear rate in both regret and communication cost.
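As a rough illustration of that online-local / offline-global split (not the paper's actual algorithm), the sketch below uses logistic regression as the generalized linear model: each client takes cheap online gradient steps between synchronizations, while the server re-fits a batch model over pooled data whenever clients upload. Class names, update rules, and hyperparameters are assumptions for exposition.

    # Illustrative sketch of an online-local / offline-global regression split.
    import numpy as np

    def sigmoid(z):
        return 1.0 / (1.0 + np.exp(-z))

    class Client:
        def __init__(self, dim, lr=0.05):
            self.theta = np.zeros(dim)  # cheap online estimate used between syncs
            self.lr = lr
            self.buffer = []            # observations to ship to the server later

        def online_update(self, x, y):
            # One SGD step on the logistic loss, run on every interaction.
            grad = (sigmoid(x @ self.theta) - y) * x
            self.theta -= self.lr * grad
            self.buffer.append((x, y))

        def flush(self):
            data, self.buffer = self.buffer, []
            return data

    class Server:
        def __init__(self, dim):
            self.theta = np.zeros(dim)
            self.X, self.y = [], []

        def offline_update(self, data, iters=50, lr=0.1):
            # Batch (offline) regression over all pooled data, run only on upload.
            self.X.extend(x for x, _ in data)
            self.y.extend(y for _, y in data)
            X, y = np.array(self.X), np.array(self.y)
            for _ in range(iters):
                grad = X.T @ (sigmoid(X @ self.theta) - y) / len(y)
                self.theta -= lr * grad
            return self.theta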
arXiv Detail & Related papers (2022-02-02T15:31:45Z)
- Finite-Time Consensus Learning for Decentralized Optimization with Nonlinear Gossiping [77.53019031244908]
We present a novel decentralized learning framework based on nonlinear gossiping (NGO), which enjoys an appealing finite-time consensus property that enables better synchronization.
Our analysis of how communication delay and randomized chats affect learning further enables the derivation of practical variants.
arXiv Detail & Related papers (2021-11-04T15:36:25Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
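As a hedged sketch of the residual idea summarized in the entry above (assuming linear models and a ridge fit; not the paper's exact method), each client fits a small personalized model to whatever the shared server-side model leaves unexplained and predicts with the sum of the two components.

    # Illustrative sketch: personalized residual model on top of a shared model.
    import numpy as np

    class ResidualClient:
        def __init__(self, dim, lam=1.0):
            self.w_local = np.zeros(dim)
            self.lam = lam

        def fit_local(self, X, y, w_shared):
            # Fit the local model to what the shared model leaves unexplained.
            residual = y - X @ w_shared
            d = X.shape[1]
            self.w_local = np.linalg.solve(X.T @ X + self.lam * np.eye(d),
                                           X.T @ residual)

        def predict(self, X, w_shared):
            # Joint prediction: shared component plus personalized residual.
            return X @ w_shared + X @ self.w_local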
This list is automatically generated from the titles and abstracts of the papers on this site.