Federated $\mathcal{X}$-armed Bandit with Flexible Personalisation
- URL: http://arxiv.org/abs/2409.07251v1
- Date: Wed, 11 Sep 2024 13:19:41 GMT
- Title: Federated $\mathcal{X}$-armed Bandit with Flexible Personalisation
- Authors: Ali Arabzadeh, James A. Grant, David S. Leslie
- Abstract summary: This paper introduces a novel approach to personalised federated learning within the $\mathcal{X}$-armed bandit framework.
Our method employs a surrogate objective function that combines individual client preferences with aggregated global knowledge, allowing for a flexible trade-off between personalisation and collective learning.
- Score: 3.74142789780782
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: This paper introduces a novel approach to personalised federated learning within the $\mathcal{X}$-armed bandit framework, addressing the challenge of optimising both local and global objectives in a highly heterogeneous environment. Our method employs a surrogate objective function that combines individual client preferences with aggregated global knowledge, allowing for a flexible trade-off between personalisation and collective learning. We propose a phase-based elimination algorithm that achieves sublinear regret with logarithmic communication overhead, making it well-suited for federated settings. Theoretical analysis and empirical evaluations demonstrate the effectiveness of our approach compared to existing methods. Potential applications of this work span various domains, including healthcare, smart home devices, and e-commerce, where balancing personalisation with global insights is crucial.
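The abstract does not spell out the form of the surrogate objective. As a rough illustration only, here is a minimal sketch assuming it is a convex combination of a client's own objective and the average objective across all clients, controlled by a personalisation weight alpha; the function names, the weight alpha, and the grid search over candidate arms are illustrative assumptions rather than the paper's actual construction.

```python
from typing import Callable, List

def surrogate_objective(
    x: float,
    local_f: Callable[[float], float],
    all_fs: List[Callable[[float], float]],
    alpha: float,
) -> float:
    """Hypothetical personalised surrogate: alpha weights the client's own
    objective against the average objective across all clients."""
    global_avg = sum(f(x) for f in all_fs) / len(all_fs)
    return alpha * local_f(x) + (1.0 - alpha) * global_avg

# Toy usage: three clients with shifted optima on X = [0, 1].
clients = [lambda x, c=c: 1.0 - (x - c) ** 2 for c in (0.2, 0.5, 0.8)]
candidates = [i / 100 for i in range(101)]  # coarse discretisation of X, for illustration
alpha = 0.7  # closer to 1 => more personalisation, closer to 0 => more collective learning
best = max(candidates, key=lambda x: surrogate_objective(x, clients[0], clients, alpha))
print(f"client 0's personalised optimum (alpha={alpha}): x = {best:.2f}")
```

In this toy setting, client 0's personalised optimum lands between its own maximiser and the global one, reflecting the flexible trade-off between personalisation and collective learning described in the abstract.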
Related papers
- Pursuing Overall Welfare in Federated Learning through Sequential Decision Making [10.377683220196873]
In traditional federated learning, a single global model cannot perform equally well for all clients.
Our work reveals that existing fairness-aware aggregation strategies can be unified into an online convex optimization framework.
AAggFF achieves a better degree of client-level fairness than existing methods in both practical settings.
arXiv Detail & Related papers (2024-05-31T14:15:44Z)
- Adaptive Global-Local Representation Learning and Selection for Cross-Domain Facial Expression Recognition [54.334773598942775]
Domain shift poses a significant challenge in Cross-Domain Facial Expression Recognition (CD-FER).
We propose an Adaptive Global-Local Representation Learning and Selection framework.
arXiv Detail & Related papers (2024-01-20T02:21:41Z)
- Personalized Federated Learning with Feature Alignment and Classifier Collaboration [13.320381377599245]
Data heterogeneity is one of the most challenging issues in federated learning.
One such approach in deep neural network-based tasks is to employ a shared feature representation and learn a customized classifier head for each client.
In this work, we conduct explicit local-global feature alignment by leveraging global semantic knowledge for learning a better representation.
arXiv Detail & Related papers (2023-06-20T19:58:58Z)
- FedCBO: Reaching Group Consensus in Clustered Federated Learning through Consensus-based Optimization [1.911678487931003]
Federated learning seeks to integrate the training of learning models from multiple users, each user having their own data set, in a way that is sensitive to data privacy and to communication loss constraints.
In this paper, we propose a novel solution to a global, clustered problem of federated learning that is inspired by ideas in consensus-based optimization (CBO).
Our new CBO-type method is based on a system of interacting particles that is oblivious to group memberships.
arXiv Detail & Related papers (2023-05-04T15:02:09Z)
- Combating Exacerbated Heterogeneity for Robust Models in Federated Learning [91.88122934924435]
The combination of adversarial training and federated learning can lead to undesired robustness deterioration.
We propose a novel framework called Slack Federated Adversarial Training (SFAT).
We verify the rationality and effectiveness of SFAT on various benchmarked and real-world datasets.
arXiv Detail & Related papers (2023-03-01T06:16:15Z)
- Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
arXiv Detail & Related papers (2023-02-24T08:41:19Z)
- Straggler-Resilient Personalized Federated Learning [55.54344312542944]
Federated learning allows training models from samples distributed across a large network of clients while respecting privacy and communication restrictions.
We develop a novel algorithmic procedure with theoretical speedup guarantees that simultaneously handles two of these hurdles.
Our method relies on ideas from representation learning theory to find a global common representation using all clients' data and learn a user-specific set of parameters leading to a personalized solution for each client.
arXiv Detail & Related papers (2022-06-05T01:14:46Z)
- DRFLM: Distributionally Robust Federated Learning with Inter-client Noise via Local Mixup [58.894901088797376]
Federated learning has emerged as a promising approach for training a global model using data from multiple organizations without leaking their raw data.
We propose a general framework to solve the above two challenges simultaneously.
We provide comprehensive theoretical analysis including robustness analysis, convergence analysis, and generalization ability.
arXiv Detail & Related papers (2022-04-16T08:08:29Z)
- Decentralized Reinforcement Learning: Global Decision-Making via Local Economic Transactions [80.49176924360499]
We establish a framework for directing a society of simple, specialized, self-interested agents to solve sequential decision problems.
We derive a class of decentralized reinforcement learning algorithms.
We demonstrate the potential advantages of a society's inherent modular structure for more efficient transfer learning.
arXiv Detail & Related papers (2020-07-05T16:41:09Z)
- Adaptive Personalized Federated Learning [20.80073507382737]
Investigation of the degree of personalization in federated learning algorithms has shown that only maximizing the performance of the global model will limit the capacity of the local models to personalize.
arXiv Detail & Related papers (2020-03-30T13:19:37Z)