Federated Linear Contextual Bandits with Heterogeneous Clients
- URL: http://arxiv.org/abs/2403.00116v1
- Date: Thu, 29 Feb 2024 20:39:31 GMT
- Title: Federated Linear Contextual Bandits with Heterogeneous Clients
- Authors: Ethan Blaser, Chuanhao Li, Hongning Wang
- Abstract summary: Federated bandit learning is a promising framework for private, efficient, and decentralized online learning.
We introduce a new approach to federated bandits for heterogeneous clients, which clusters clients for collaborative bandit learning under the federated learning setting.
Our proposed algorithm achieves non-trivial sub-linear regret and communication cost for all clients, subject to the communication protocol under federated learning.
- Score: 44.20391610280271
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The demand for collaborative and private bandit learning across multiple
agents is surging due to the growing quantity of data generated from
distributed systems. Federated bandit learning has emerged as a promising
framework for private, efficient, and decentralized online learning. However,
almost all previous works rely on strong assumptions of client homogeneity,
i.e., that all participating clients share the same bandit model; otherwise,
all of them would suffer linear regret. This greatly restricts the application of
federated bandit learning in practice. In this work, we introduce a new
approach to federated bandits for heterogeneous clients, which clusters
clients for collaborative bandit learning under the federated learning setting.
Our proposed algorithm achieves non-trivial sub-linear regret and communication
cost for all clients, subject to the communication protocol under federated
learning that at any time only one model can be shared by the server.
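As a rough illustration of the idea, the following is a minimal sketch, not the authors' algorithm: each client runs LinUCB-style ridge-regression updates, the server greedily groups clients whose parameter estimates are close, and statistics are pooled only within a cluster. The class names, the clustering rule, and the `gap` threshold are all illustrative assumptions.

```python
import numpy as np

class LinearBanditClient:
    """A client running LinUCB-style linear contextual bandit updates."""
    def __init__(self, dim, lam=1.0, alpha=1.0):
        self.lam = lam
        self.A = lam * np.eye(dim)   # regularized Gram matrix
        self.b = np.zeros(dim)       # reward-weighted feature sum
        self.alpha = alpha           # exploration coefficient

    def theta(self):
        return np.linalg.solve(self.A, self.b)

    def choose(self, arms):
        # UCB score: estimated reward plus an exploration bonus.
        A_inv = np.linalg.inv(self.A)
        theta = A_inv @ self.b
        scores = [x @ theta + self.alpha * np.sqrt(x @ A_inv @ x) for x in arms]
        return int(np.argmax(scores))

    def update(self, x, reward):
        self.A += np.outer(x, x)
        self.b += reward * x

def cluster_clients(clients, gap=0.5):
    # Greedy clustering on parameter estimates: clients whose estimates
    # are within `gap` of a cluster representative are grouped together.
    clusters = []
    for c in clients:
        for cluster in clusters:
            if np.linalg.norm(c.theta() - cluster[0].theta()) < gap:
                cluster.append(c)
                break
        else:
            clusters.append([c])
    return clusters

def aggregate(cluster, dim, lam=1.0):
    # Server-side pooling of sufficient statistics within one cluster;
    # only one such aggregated model is shared by the server at a time.
    A, b = lam * np.eye(dim), np.zeros(dim)
    for c in cluster:
        A += c.A - c.lam * np.eye(dim)  # avoid double-counting the prior
        b += c.b
    return np.linalg.solve(A, b)
```

Clients in the same cluster then act on the pooled estimate while clusters stay isolated, which is the mechanism that avoids the linear regret suffered when heterogeneous clients are forced to share one model.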
Related papers
- Incentivized Communication for Federated Bandits [67.4682056391551]
We introduce an incentivized communication problem for federated bandits, where the server must motivate clients to share data by providing incentives.
We propose the first incentivized communication protocol, namely, Inc-FedUCB, that achieves near-optimal regret with provable communication and incentive cost guarantees.
arXiv Detail & Related papers (2023-09-21T00:59:20Z)
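As a hedged illustration of incentivized data sharing, not the actual Inc-FedUCB protocol: a server might greedily pay for the client statistics that most reduce its uncertainty, measured here by a log-determinant gain, until the budget runs out. The value measure, the known per-client costs, and the function names are all assumptions.

```python
import numpy as np

def marginal_value(server_A, client_A):
    # Illustrative value of a client's data: log-determinant increase
    # of the server's Gram matrix if the client shares its statistics.
    _, before = np.linalg.slogdet(server_A)
    _, after = np.linalg.slogdet(server_A + client_A)
    return after - before

def incentivize(server_A, client_As, costs, budget):
    # Greedily buy the most valuable data while the budget allows;
    # costs[i] is the (assumed known) incentive client i demands.
    participants, remaining = [], budget
    available = set(range(len(client_As)))
    while available:
        best = max(available, key=lambda i: marginal_value(server_A, client_As[i]))
        if costs[best] > remaining:
            break
        server_A = server_A + client_As[best]
        remaining -= costs[best]
        participants.append(best)
        available.remove(best)
    return participants, server_A
```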
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
Combining adversarial training with federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmark and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
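For context, here is a generic sketch of federated adversarial training, the baseline setting that SFAT modifies; the slack re-weighting mechanism itself is not reproduced, and the logistic-regression setup and names are assumptions.

```python
import numpy as np

def fgsm_perturb(w, X, y, eps=0.1):
    # FGSM-style perturbation for logistic regression: move each input
    # in the direction that increases the loss (sign of input gradient).
    p = 1.0 / (1.0 + np.exp(-X @ w))
    grad_x = (p - y)[:, None] * w[None, :]   # d(loss)/d(x) per example
    return X + eps * np.sign(grad_x)

def local_adversarial_step(w, X, y, lr=0.1, eps=0.1):
    # One local training step on adversarially perturbed inputs.
    X_adv = fgsm_perturb(w, X, y, eps)
    p = 1.0 / (1.0 + np.exp(-X_adv @ w))
    grad_w = X_adv.T @ (p - y) / len(y)
    return w - lr * grad_w

def federated_round(w_global, client_data, lr=0.1, eps=0.1):
    # Each client adversarially trains from the global model; the server
    # then averages (plain FedAvg; SFAT would re-weight at this point).
    locals_ = [local_adversarial_step(w_global.copy(), X, y, lr, eps)
               for X, y in client_data]
    return np.mean(locals_, axis=0)
```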
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
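A minimal sketch of analog over-the-air aggregation under the standard superposition model, not necessarily this paper's exact scheme: simultaneous client transmissions are physically summed by the channel, so the server receives a noisy aggregate in a single shot instead of decoding each update separately.

```python
import numpy as np

def over_the_air_aggregate(updates, noise_std=0.01, rng=None):
    # All clients transmit their model updates at the same time; the
    # channel superposes them, and the server observes the sum plus
    # receiver noise, then rescales to get a noisy average.
    if rng is None:
        rng = np.random.default_rng()
    superposed = np.sum(updates, axis=0)
    noise = rng.normal(0.0, noise_std, size=superposed.shape)
    return (superposed + noise) / len(updates)

# Example: three clients, each with a 4-dimensional update.
updates = [np.ones(4), 2 * np.ones(4), 3 * np.ones(4)]
print(over_the_air_aggregate(updates))  # approximately [2, 2, 2, 2]
```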
- FedHP: Heterogeneous Federated Learning with Privacy-preserving [0.0]
Federated learning is a distributed machine learning paradigm in which clients complete collaborative training by exchanging only model parameters, never sharing their private data.
We propose a novel federated learning method that uses a pre-trained model as the backbone and fully connected layers as the head.
By sharing class embedding vectors instead of gradient-space parameters, clients can better adapt to their private data, and communication between the server and clients becomes more efficient.
arXiv Detail & Related papers (2023-01-27T13:32:17Z)
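A minimal sketch of the embedding-sharing idea under assumed details, not FedHP's exact method: each client computes a mean embedding per class from its frozen backbone and uploads only those prototypes, which the server averages.

```python
import numpy as np

def class_prototypes(embeddings, labels, num_classes):
    # Per-class mean embedding computed from a frozen backbone's outputs;
    # only these small vectors are shared, not gradients or raw data.
    protos = np.zeros((num_classes, embeddings.shape[1]))
    for c in range(num_classes):
        mask = labels == c
        if mask.any():
            protos[c] = embeddings[mask].mean(axis=0)
    return protos

def server_average(client_protos):
    # The server averages prototypes across clients, class by class.
    return np.mean(client_protos, axis=0)

# Example: two clients, 3 classes, 8-dimensional embeddings.
rng = np.random.default_rng(0)
clients = [class_prototypes(rng.normal(size=(20, 8)),
                            rng.integers(0, 3, size=20), 3)
           for _ in range(2)]
global_protos = server_average(clients)   # shape (3, 8)
```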
- FedClassAvg: Local Representation Learning for Personalized Federated Learning on Heterogeneous Neural Networks [21.613436984547917]
We propose a novel personalized federated learning method called federated classifier averaging (FedClassAvg).
FedClassAvg aggregates classifier weights as an agreement on decision boundaries in feature spaces.
We demonstrate that it outperforms the current state-of-the-art algorithms on heterogeneous personalized federated learning tasks.
arXiv Detail & Related papers (2022-10-25T08:32:08Z)
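To make the aggregation rule concrete, a hedged sketch with assumed layer shapes: clients keep heterogeneous backbones private and share only their classifier heads, which the server averages as an agreement on decision boundaries in the common feature space.

```python
import numpy as np

def average_classifier_heads(client_heads):
    # client_heads maps client id -> (W, b) of the final linear layer.
    # Backbones may differ across clients, but heads share one shape
    # because they all act on a common feature space.
    Ws = [W for W, _ in client_heads.values()]
    bs = [b for _, b in client_heads.values()]
    return np.mean(Ws, axis=0), np.mean(bs, axis=0)

# Example: three clients with a 16-dim feature space and 10 classes.
rng = np.random.default_rng(1)
heads = {i: (rng.normal(size=(10, 16)), rng.normal(size=10)) for i in range(3)}
W_avg, b_avg = average_classifier_heads(heads)  # broadcast back to clients
```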
- On the Convergence of Clustered Federated Learning [57.934295064030636]
In a federated learning system, the clients, e.g., mobile devices and organization participants, usually have different personal preferences or behavior patterns.
This paper proposes a novel weighted client-based clustered FL algorithm that leverages both the client groups and each individual client in a unified optimization framework.
arXiv Detail & Related papers (2022-02-13T02:39:19Z)
- Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits [35.47147821038291]
We propose a general framework with asynchronous model updates and communication for collections of both homogeneous and heterogeneous clients.
A rigorous theoretical analysis of the regret and communication cost under this distributed learning framework is provided.
arXiv Detail & Related papers (2021-10-04T14:01:32Z)
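A hedged sketch of the event-triggered, asynchronous communication typical of such frameworks; the determinant-ratio trigger below is a common device in federated linear bandits, but the exact rule and threshold here are assumptions.

```python
import numpy as np

def should_upload(local_A, synced_A, threshold=1.5):
    # Event-triggered communication: a client uploads its buffered
    # sufficient statistics only when the information accumulated since
    # the last sync is large enough, measured by the determinant ratio
    # of its current vs. last-synced Gram matrix.
    _, logdet_local = np.linalg.slogdet(local_A)
    _, logdet_synced = np.linalg.slogdet(synced_A)
    return (logdet_local - logdet_synced) > np.log(threshold)

# Example: a client accumulates observations until the trigger fires.
dim, lam = 5, 1.0
synced_A = lam * np.eye(dim)
local_A = synced_A.copy()
rng = np.random.default_rng(2)
rounds = 0
while not should_upload(local_A, synced_A):
    x = rng.normal(size=dim)
    local_A += np.outer(x, x)   # local update, no communication
    rounds += 1
print(f"uploaded after {rounds} local rounds")
```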
- Federated Self-Supervised Contrastive Learning via Ensemble Similarity Distillation [42.05438626702343]
This paper investigates the feasibility of learning a good representation space from unlabeled client data in a federated scenario.
We propose a novel self-supervised contrastive learning framework that supports architecture-agnostic local training and communication-efficient global aggregation.
arXiv Detail & Related papers (2021-09-29T02:13:22Z)
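A rough sketch in the spirit of ensemble similarity distillation, not the paper's exact objective: client models act as teachers whose pairwise-similarity matrices over a shared batch are averaged, and a student matches that ensemble. This works across heterogeneous architectures because the similarity matrices share one shape regardless of embedding width.

```python
import numpy as np

def similarity_matrix(embeddings):
    # Cosine-similarity matrix over a batch of embeddings.
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    return normed @ normed.T

def ensemble_similarity(teacher_embeddings):
    # Average the similarity matrices produced by each client (teacher)
    # model on the same batch; only these matrices are communicated.
    return np.mean([similarity_matrix(e) for e in teacher_embeddings], axis=0)

def distillation_loss(student_embeddings, target_similarity):
    # The student matches the ensemble's similarity structure (MSE).
    diff = similarity_matrix(student_embeddings) - target_similarity
    return np.mean(diff ** 2)

# Example: three heterogeneous teachers, batch of 6, various widths.
rng = np.random.default_rng(3)
teachers = [rng.normal(size=(6, d)) for d in (32, 64, 128)]
target = ensemble_similarity(teachers)
print(distillation_loss(rng.normal(size=(6, 16)), target))
```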
- Accurate and Fast Federated Learning via Combinatorial Multi-Armed Bandits [11.972842369911872]
Federated learning faces the challenges of biased model averaging and a lack of prior knowledge in client sampling.
We propose a novel algorithm called FedCM that addresses both challenges by exploiting prior knowledge through multi-armed-bandit-based client sampling.
We show that FedCM significantly outperforms state-of-the-art algorithms, by up to 37.25% in accuracy and 4.17x in convergence rate.
arXiv Detail & Related papers (2020-12-06T14:05:14Z)
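As a hedged sketch of bandit-based client sampling with a generic UCB rule, not FedCM's exact combinatorial formulation: the server treats each client as an arm whose reward is the observed utility of its update, and samples the clients with the highest upper confidence bounds.

```python
import numpy as np

def select_clients(mean_utility, pull_counts, round_idx, k, c=1.0):
    # UCB over clients: favor clients with high observed utility, but
    # keep exploring rarely sampled ones via the confidence bonus.
    bonus = c * np.sqrt(np.log(round_idx + 1) / np.maximum(pull_counts, 1))
    ucb = mean_utility + bonus
    ucb[pull_counts == 0] = np.inf   # sample every client at least once
    return np.argsort(ucb)[-k:]      # top-k arms (clients)

def update_stats(mean_utility, pull_counts, chosen, utilities):
    # Incremental mean update for the sampled clients' utilities.
    for i, u in zip(chosen, utilities):
        pull_counts[i] += 1
        mean_utility[i] += (u - mean_utility[i]) / pull_counts[i]

# Example: 10 clients, sample 3 per round.
n, k = 10, 3
mean_utility, pull_counts = np.zeros(n), np.zeros(n, dtype=int)
rng = np.random.default_rng(4)
for t in range(20):
    chosen = select_clients(mean_utility, pull_counts, t, k)
    update_stats(mean_utility, pull_counts, chosen, rng.random(len(chosen)))
```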
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
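A minimal sketch of the residual idea, assuming linear models for illustration: each client's prediction is the shared model's output plus a local correction fit to the shared model's residual errors. For simplicity the shared model below is fit on pooled data; the actual protocol would train it federatedly without centralizing raw data.

```python
import numpy as np

def fit_ridge(X, y, lam=1.0):
    # Standard ridge regression, used for both shared and local models.
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)

def fit_residual_models(client_data, lam=1.0):
    # Shared model on all clients' data (centralized here only for
    # illustration); each client fits a local model on the residuals.
    X_all = np.vstack([X for X, _ in client_data])
    y_all = np.concatenate([y for _, y in client_data])
    w_shared = fit_ridge(X_all, y_all, lam)
    w_locals = [fit_ridge(X, y - X @ w_shared, lam) for X, y in client_data]
    return w_shared, w_locals

def predict(X, w_shared, w_local):
    # Joint prediction: shared component plus the personalized residual.
    return X @ w_shared + X @ w_local

# Example: two clients with different local shifts on a shared signal.
rng = np.random.default_rng(5)
w_true = rng.normal(size=4)
data = []
for shift in (0.5, -0.5):
    X = rng.normal(size=(50, 4))
    data.append((X, X @ (w_true + shift) + 0.01 * rng.normal(size=50)))
w_shared, w_locals = fit_residual_models(data)
X0, y0 = data[0]
print(np.mean((predict(X0, w_shared, w_locals[0]) - y0) ** 2))
```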