Federated X-Armed Bandit
- URL: http://arxiv.org/abs/2205.15268v3
- Date: Mon, 15 May 2023 15:24:30 GMT
- Title: Federated X-Armed Bandit
- Authors: Wenjie Li, Qifan Song, Jean Honorio, Guang Lin
- Abstract summary: This work establishes the first framework of federated $\mathcal{X}$-armed bandit, where different clients face heterogeneous local objective functions defined on the same domain.
We propose the first federated algorithm for such problems, named \texttt{Fed-PNE}.
- Score: 36.39045680578951
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This work establishes the first framework of federated $\mathcal{X}$-armed
bandit, where different clients face heterogeneous local objective functions
defined on the same domain and are required to collaboratively figure out the
global optimum. We propose the first federated algorithm for such problems,
named \texttt{Fed-PNE}. By utilizing the topological structure of the global
objective inside the hierarchical partitioning and the weak smoothness
property, our algorithm achieves sublinear cumulative regret with respect to
both the number of clients and the evaluation budget. Meanwhile, it only
requires logarithmic communications between the central server and clients,
protecting the client privacy. Experimental results on synthetic functions and
real datasets validate the advantages of \texttt{Fed-PNE} over various
centralized and federated baseline algorithms.
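The core idea of optimizing over a hierarchical partition with server-side averaging can be illustrated with a minimal sketch. This is not the authors' \texttt{Fed-PNE} algorithm; it is a simplified, hypothetical elimination loop on a binary partition of $[0,1]$, with invented local objectives, in which the server averages the clients' noisy estimates and keeps the more promising cell at each depth.

```python
import random

def noisy_local_f(x, client_shift, rng):
    """Illustrative heterogeneous local objective: a shared bump at 0.6
    plus a per-client offset and evaluation noise."""
    return 1.0 - abs(x - 0.6) + 0.1 * client_shift + rng.gauss(0, 0.05)

def federated_elimination(n_clients=4, depth=6, samples_per_cell=20, seed=0):
    """Refine a binary partition of [0, 1]: at each depth, evaluate both
    child cells at their midpoints, average the client estimates on the
    server, and eliminate the cell with the lower averaged reward."""
    rng = random.Random(seed)
    lo, hi = 0.0, 1.0
    for _ in range(depth):
        mid = (lo + hi) / 2
        cells = [(lo, mid), (mid, hi)]
        means = []
        for a, b in cells:
            x = (a + b) / 2  # evaluate at the cell midpoint
            # one communication round: server averages client estimates
            est = sum(
                sum(noisy_local_f(x, c, rng) for _ in range(samples_per_cell))
                / samples_per_cell
                for c in range(n_clients)
            ) / n_clients
            means.append(est)
        lo, hi = cells[means.index(max(means))]  # discard the worse cell
    return (lo + hi) / 2

print(round(federated_elimination(), 3))  # lands near the global optimum 0.6
```

Because each per-client offset is constant in $x$, the averaged global objective shares the optimum at 0.6, so the surviving cell shrinks around it.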
Related papers
- FedCCL: Federated Dual-Clustered Feature Contrast Under Domain Heterogeneity [43.71967577443732]
Federated learning (FL) facilitates a privacy-preserving neural network training paradigm through collaboration between edge clients and a central server.
Recent research is limited to simply using averaged signals as a form of regularization and focuses on only one aspect of the non-IID challenges.
We propose a dual-clustered feature contrast-based FL framework with dual focuses.
arXiv Detail & Related papers (2024-04-14T13:56:30Z) - Privacy-preserving Federated Primal-dual Learning for Non-convex and Non-smooth Problems with Model Sparsification [51.04894019092156]
Federated learning (FL) has been recognized as a rapidly growing research area, where the model is trained over distributed clients under the orchestration of a parameter server (PS).
In this paper, we propose a novel privacy-preserving federated primal-dual algorithm with model sparsification for non-convex and non-smooth FL problems.
Its unique insightful properties and its analyses are also presented.
arXiv Detail & Related papers (2023-10-30T14:15:47Z) - Personalized Federated X -armed Bandit [44.7483638955294]
We study the personalized federated $\mathcal{X}$-armed bandit problem, where the heterogeneous local objectives of the clients are optimized simultaneously in the federated learning paradigm.
We propose the \texttt{PF-PNE} algorithm with a unique double elimination strategy, which safely eliminates the non-optimal regions while encouraging federated collaboration through biased but effective evaluations of the local objectives.
arXiv Detail & Related papers (2023-10-25T03:11:32Z) - Locally Adaptive Federated Learning [30.19411641685853]
Federated learning is a paradigm of distributed machine learning in which multiple clients coordinate with a central server to learn a model.
Standard federated optimization methods such as Federated Averaging (FedAvg) ensure generalization among the clients.
We propose locally adaptive federated learning algorithms that leverage the local geometric information of each client's objective function.
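For context, the Federated Averaging (FedAvg) baseline named above can be sketched in a few lines. This is an illustrative one-dimensional toy with invented quadratic client objectives, not code from any of the listed papers: each client runs a few local gradient steps from the shared model, and the server averages the resulting local models.

```python
def fedavg(client_grads, w0, rounds=50, local_steps=5, lr=0.1):
    """One-dimensional FedAvg sketch: each client takes a few local SGD
    steps from the current shared model, then the server averages."""
    w = w0
    for _ in range(rounds):
        updates = []
        for grad in client_grads:
            w_local = w
            for _ in range(local_steps):
                w_local -= lr * grad(w_local)  # local training
            updates.append(w_local)
        w = sum(updates) / len(updates)  # server-side averaging
    return w

# Each client minimizes (w - c)^2 for a different target c; the shared
# model converges to the mean of the heterogeneous client optima.
clients = [lambda w, c=c: 2 * (w - c) for c in (1.0, 2.0, 3.0)]
print(fedavg(clients, w0=0.0))  # converges to about 2.0
```

The toy also shows the non-IID pathology the surrounding entries discuss: the averaged model sits at the mean of the client optima, which need not be optimal for any single client.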
arXiv Detail & Related papers (2023-07-12T17:02:32Z) - Dynamic Regularized Sharpness Aware Minimization in Federated Learning: Approaching Global Consistency and Smooth Landscape [59.841889495864386]
In federated learning (FL), a cluster of local clients is trained under the coordination of a global server.
Clients are prone to overfitting to their own local optima, which can deviate substantially from the global objective.
\texttt{FedSMOO} adopts a dynamic regularizer to steer the local optima towards the global objective.
Our theoretical analysis indicates that \texttt{FedSMOO} achieves a fast $\mathcal{O}(1/T)$ convergence rate with a low generalization bound.
arXiv Detail & Related papers (2023-05-19T10:47:44Z) - Mining Latent Relationships among Clients: Peer-to-peer Federated Learning with Adaptive Neighbor Matching [6.959557494221414]
In federated learning (FL), clients may have diverse objectives; merging all clients' knowledge into one global model can cause negative transfer for local performance.
We take advantage of peer-to-peer (P2P) FL, where clients communicate with neighbors without a central server.
We propose an algorithm that enables clients to form an effective communication topology in a decentralized manner without assuming the number of clusters.
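The serverless setting above can be illustrated with a minimal gossip-averaging sketch. This is not the paper's adaptive neighbor matching; it is a simplified, hypothetical example on a fixed ring topology, in which each client repeatedly averages its model with its neighbors' and all models converge to consensus.

```python
def gossip_average(models, neighbors, rounds=100):
    """Each client replaces its model with the mean of its own model and
    its neighbors'; on a connected topology this converges to consensus."""
    for _ in range(rounds):
        models = [
            sum([models[i]] + [models[j] for j in neighbors[i]])
            / (1 + len(neighbors[i]))
            for i in range(len(models))
        ]
    return models

# Ring topology over 4 clients holding different local models; the
# consensus value is the mean of the initial models (here 6.0).
ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
print(gossip_average([0.0, 4.0, 8.0, 12.0], ring))  # all values near 6.0
```

Because the ring's mixing matrix is symmetric and doubly stochastic, the average of the models is preserved each round, which is why consensus lands at the initial mean.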
arXiv Detail & Related papers (2022-03-23T09:10:14Z) - Faster Non-Convex Federated Learning via Global and Local Momentum [57.52663209739171]
\texttt{FedGLOMO} is the first (first-order) FL algorithm to apply momentum at both the global and local update steps.
Our algorithm is provably optimal even with compressed communication between the clients and the server.
arXiv Detail & Related papers (2020-12-07T21:05:31Z) - Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.