Communication-Efficient Federated Non-Linear Bandit Optimization
- URL: http://arxiv.org/abs/2311.01695v1
- Date: Fri, 3 Nov 2023 03:50:31 GMT
- Title: Communication-Efficient Federated Non-Linear Bandit Optimization
- Authors: Chuanhao Li, Chong Liu and Yu-Xiang Wang
- Abstract summary: We propose a new algorithm, named Fed-GO-UCB, for federated bandit optimization with a generic non-linear objective function.
Under some mild conditions, we rigorously prove that Fed-GO-UCB achieves sub-linear rates for both cumulative regret and communication cost.
- Score: 26.23638987873429
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated optimization studies the problem of collaborative function
optimization among multiple clients (e.g., mobile devices or organizations)
under the coordination of a central server. Since the data is collected
separately by each client and always remains decentralized, federated
optimization preserves data privacy and allows for large-scale computing, which
makes it a promising decentralized machine learning paradigm. Though it is
often deployed for tasks that are online in nature, e.g., next-word prediction
on keyboard apps, most works formulate it as an offline problem. The few
exceptions that consider federated bandit optimization are limited to very
simplistic function classes, e.g., linear, generalized linear, or
non-parametric function class with bounded RKHS norm, which severely hinders
its practical usage. In this paper, we propose a new algorithm, named
Fed-GO-UCB, for federated bandit optimization with generic non-linear objective
function. Under some mild conditions, we rigorously prove that Fed-GO-UCB is
able to achieve sub-linear rate for both cumulative regret and communication
cost. At the heart of our theoretical analysis are the distributed regression
oracle and individual confidence set construction, which may be of independent
interest. Empirical evaluations also demonstrate the effectiveness of the
proposed algorithm.
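To make the setting concrete, below is a minimal, hypothetical sketch of a federated UCB-style bandit loop with event-triggered communication. It is not the authors' Fed-GO-UCB: the paper's distributed regression oracle for generic non-linear functions is replaced by a toy linear model with sufficient statistics, and the upload trigger is a simple trace threshold rather than the paper's construction; all names are illustrative.

```python
# Minimal, hypothetical sketch of a federated UCB-style bandit loop with
# event-triggered communication. NOT the authors' Fed-GO-UCB: a toy
# linear model stands in for the paper's distributed regression oracle,
# and the upload trigger is a simple trace threshold.
import numpy as np

rng = np.random.default_rng(0)
D, N_CLIENTS, T = 5, 4, 200            # feature dim, clients, rounds
theta_star = rng.normal(size=D)        # unknown parameter (toy problem)

def reward(x):
    return x @ theta_star + 0.1 * rng.normal()

def ucb_choice(A, b, arms, beta=1.0):
    # Optimistic choice: point estimate plus an exploration bonus.
    A_inv = np.linalg.inv(A)
    theta_hat = A_inv @ b
    bonus = np.sqrt(np.einsum("ij,jk,ik->i", arms, A_inv, arms))
    return arms[np.argmax(arms @ theta_hat + beta * bonus)]

# Per-client sufficient statistics (ridge prior folded into the identity).
A_loc = [np.eye(D) for _ in range(N_CLIENTS)]
b_loc = [np.zeros(D) for _ in range(N_CLIENTS)]
A_srv, b_srv = np.eye(D), np.zeros(D)
comm_events = 0

for t in range(T):
    for i in range(N_CLIENTS):
        arms = rng.normal(size=(10, D))              # candidate actions
        x = ucb_choice(A_loc[i] + A_srv, b_loc[i] + b_srv, arms)
        y = reward(x)
        A_loc[i] += np.outer(x, x)
        b_loc[i] += y * x
        # Event-triggered upload: communicate only when the local data
        # has grown enough since the last synchronization.
        if np.trace(A_loc[i]) > 5 * D:
            A_srv += A_loc[i] - np.eye(D)            # merge, drop prior
            b_srv += b_loc[i]
            A_loc[i], b_loc[i] = np.eye(D), np.zeros(D)
            comm_events += 1

print("rounds:", T * N_CLIENTS, "communication events:", comm_events)
```

The point of the trigger is that clients upload only when their local data has changed enough, which is the general mechanism by which such algorithms keep communication cost sub-linear in the number of rounds.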
Related papers
- Efficient and Robust Regularized Federated Recommendation [52.24782464815489]
The federated recommender system (RSRS) addresses both user preference and privacy concerns.
We propose a novel method that incorporates non-uniform gradient descent to improve communication efficiency.
RFRecF shows superior robustness compared to diverse baselines.
arXiv Detail & Related papers (2024-11-03T12:10:20Z) - Nonconvex Federated Learning on Compact Smooth Submanifolds With Heterogeneous Data [23.661713049508375]
We propose an algorithm that learns over a compact smooth submanifold in the federated setting.
We show that our proposed algorithm converges sub-linearly to a neighborhood of a first-order optimal solution by using a novel analysis.
arXiv Detail & Related papers (2024-06-12T17:53:28Z) - Serverless Federated AUPRC Optimization for Multi-Party Collaborative Imbalanced Data Mining [119.89373423433804]
Area Under the Precision-Recall Curve (AUPRC) was introduced as an effective metric.
Serverless multi-party collaborative training can cut down the communication cost by avoiding the server node bottleneck.
We propose a new ServerLess biAsed sTochastic gradiEnt (SLATE) algorithm to directly optimize the AUPRC.
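As a rough illustration of the serverless idea, the sketch below shows decentralized (gossip) averaging over a ring: clients repeatedly average with their neighbors and converge to the global mean with no server node. This is a generic stand-in, not SLATE itself, whose biased stochastic AUPRC gradients are not modeled here.

```python
# Minimal sketch of serverless training's communication pattern:
# decentralized (gossip) averaging over a ring, no server bottleneck.
import numpy as np

x = np.array([1.0, 4.0, 2.0, 7.0])      # one scalar "model" per client
W = np.array([[0.50, 0.25, 0.00, 0.25], # doubly stochastic ring weights
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])

for _ in range(50):                     # gossip rounds
    x = W @ x                           # each client mixes with neighbors

print("after gossip:", np.round(x, 4), "| global mean:", x.mean())
```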
arXiv Detail & Related papers (2023-08-06T06:51:32Z) - Personalizing Federated Learning with Over-the-Air Computations [84.8089761800994]
Federated edge learning is a promising technology to deploy intelligence at the edge of wireless networks in a privacy-preserving manner.
Under such a setting, multiple clients collaboratively train a global generic model under the coordination of an edge server.
This paper presents a distributed training paradigm that employs analog over-the-air computation to address the communication bottleneck.
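The following is a minimal sketch of the analog over-the-air computation idea: because simultaneous transmissions superpose on the multiple-access channel, the server receives the sum of all client updates in one shot and recovers the average directly. Fading, power control, and the paper's transceiver design are deliberately omitted.

```python
# Minimal sketch of analog over-the-air aggregation: simultaneous client
# transmissions superpose on the multiple-access channel, so the server
# receives the sum of all updates in one shot, plus receiver noise.
import numpy as np

rng = np.random.default_rng(0)
D, N_CLIENTS, NOISE_STD = 4, 8, 0.01

updates = [rng.normal(size=D) for _ in range(N_CLIENTS)]

# The channel itself performs the sum; no per-client uplink is needed.
received = np.sum(updates, axis=0) + NOISE_STD * rng.normal(size=D)
avg_estimate = received / N_CLIENTS

print("true average:", np.round(np.mean(updates, axis=0), 3))
print("OTA estimate:", np.round(avg_estimate, 3))
```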
arXiv Detail & Related papers (2023-02-24T08:41:19Z) - Federated Gradient Matching Pursuit [17.695717854068715]
Traditional machine learning techniques require centralizing all training data on one server or data hub.
In particular, federated learning (FL) provides a solution to learn a shared model while keeping training data at local clients.
We propose a novel algorithmic framework, federated gradient matching pursuit (FedGradMP), to solve the sparsity constrained minimization problem in the FL setting.
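For intuition, here is a minimal single-machine sketch of one flavor of gradient matching pursuit for sparse least squares, the centralized building block that FedGradMP is described as federating: expand the support with the largest gradient coordinates, solve a restricted least-squares problem, then prune back to the sparsity level. The federated version would run such steps at clients and aggregate; this toy is only illustrative.

```python
# Minimal sketch of a gradient matching pursuit iteration for sparse
# least squares: expand support via top gradient coordinates, solve a
# restricted least-squares problem, prune back to sparsity S.
import numpy as np

rng = np.random.default_rng(0)
N, D, S = 60, 100, 5
A = rng.normal(size=(N, D))
x_true = np.zeros(D)
x_true[:S] = rng.normal(size=S)
y = A @ x_true                          # noiseless measurements

x = np.zeros(D)
for _ in range(10):
    grad = A.T @ (A @ x - y)
    # Expand: top-2S gradient coordinates plus the current support.
    support = np.union1d(np.argsort(-np.abs(grad))[:2 * S],
                         np.flatnonzero(x))
    # Solve least squares restricted to the expanded support.
    z = np.zeros(D)
    z[support], *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
    # Prune: keep only the S largest entries.
    x = np.zeros(D)
    keep = np.argsort(-np.abs(z))[:S]
    x[keep] = z[keep]

print("recovery error:", np.linalg.norm(x - x_true))
```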
arXiv Detail & Related papers (2023-02-20T16:26:29Z) - Federated Offline Reinforcement Learning [55.326673977320574]
We propose a multi-site Markov decision process model that allows for both homogeneous and heterogeneous effects across sites.
We design the first federated policy optimization algorithm for offline RL with sample complexity guarantees.
We give a theoretical guarantee for the proposed algorithm, where the suboptimality of the learned policies is comparable to the rate achieved as if the data were not distributed.
arXiv Detail & Related papers (2022-06-11T18:03:26Z) - Communication-Efficient Robust Federated Learning with Noisy Labels [144.31995882209932]
Federated learning (FL) is a promising privacy-preserving machine learning paradigm over distributed data.
We propose a learning-based reweighting approach to mitigate the effect of noisy labels in FL.
Our approach has shown superior performance on several real-world datasets compared to various baselines.
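As a generic stand-in for the reweighting idea (the paper learns the weights rather than using a fixed rule), the sketch below downweights samples whose loss is far above the median before they contribute to an update; such high-loss samples are the ones most likely to carry noisy labels.

```python
# Generic stand-in for loss-based reweighting under label noise: samples
# with loss far above the median are likely mislabeled and get
# downweighted. The paper *learns* weights; this fixed rule illustrates.
import numpy as np

rng = np.random.default_rng(0)
losses = np.abs(rng.normal(size=8))
losses[[3, 7]] += 5.0                   # two suspiciously high losses

weights = np.where(losses > 3 * np.median(losses), 0.1, 1.0)
weights /= weights.sum()                # normalized per-sample weights

print("losses :", np.round(losses, 2))
print("weights:", np.round(weights, 3))
```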
arXiv Detail & Related papers (2022-06-11T16:21:17Z) - Resource-constrained Federated Edge Learning with Heterogeneous Data: Formulation and Analysis [8.863089484787835]
We propose a distributed approximate Newton-type training scheme, namely FedOVA, to solve the statistical challenge brought by heterogeneous data.
FedOVA decomposes a multi-class classification problem into more straightforward binary classification problems and then combines their respective outputs using ensemble learning.
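The one-vs-all decomposition itself is easy to illustrate; below is a minimal, hypothetical sketch in which each binary classifier is trained independently (in the federated setting these could live on different clients) and the ensemble predicts the most confident class.

```python
# Minimal, hypothetical sketch of one-vs-all decomposition plus an
# ensemble combination step (not the FedOVA implementation).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
y = rng.integers(0, 3, size=300)               # 3-class toy labels
X = rng.normal(size=(300, 4)) + y[:, None]     # class-shifted features

# Decompose: one binary problem per class (class k vs. the rest).
binary_models = [LogisticRegression().fit(X, (y == k).astype(int))
                 for k in range(3)]

# Combine: stack the positive-class scores, predict the argmax class.
scores = np.column_stack([m.predict_proba(X)[:, 1] for m in binary_models])
print("train accuracy:", (scores.argmax(axis=1) == y).mean())
```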
arXiv Detail & Related papers (2021-10-14T17:35:24Z) - Communication-Efficient Federated Linear and Deep Generalized Canonical Correlation Analysis [13.04301271535511]
This work puts forth a communication-efficient federated learning framework for both linear and deep GCCA.
Compared to the unquantized version, our empirical study shows that the proposed algorithm enjoys a substantial reduction of communication overheads with virtually no loss in accuracy and convergence speed.
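The communication saving comes from sending quantized updates; the sketch below shows generic uniform 8-bit quantization of an update vector, purely as an illustration of the trade-off the paper measures (its GCCA-specific scheme is more involved).

```python
# Generic sketch of why quantization saves communication: an update is
# sent as 8-bit integers plus one scale instead of 32-bit floats.
import numpy as np

rng = np.random.default_rng(0)

def quantize(v, bits=8):
    scale = np.abs(v).max() / (2 ** (bits - 1) - 1)
    return np.round(v / scale).astype(np.int8), scale

update = rng.normal(size=1000).astype(np.float32)
q, scale = quantize(update)
recovered = q.astype(np.float32) * scale

print("bytes sent:", q.nbytes, "vs unquantized:", update.nbytes)
print("max round-off error:", float(np.abs(recovered - update).max()))
```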
arXiv Detail & Related papers (2021-09-25T16:43:10Z) - FedPD: A Federated Learning Framework with Optimal Rates and Adaptivity to Non-IID Data [59.50904660420082]
Federated Learning (FL) has become a popular paradigm for learning from distributed data.
To effectively utilize data at different devices without moving them to the cloud, algorithms such as Federated Averaging (FedAvg) have adopted a "computation then aggregation" (CTA) model (sketched below).
arXiv Detail & Related papers (2020-05-22T23:07:42Z)
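A minimal sketch of the CTA pattern, assuming a toy least-squares objective and synthetic non-IID clients (all names here are illustrative, not FedPD's code): each client runs a few local gradient steps ("computation"), then the server averages the resulting models ("aggregation").

```python
# Minimal sketch of the "computation then aggregation" (CTA) pattern
# behind FedAvg, on a toy least-squares objective with non-IID clients.
import numpy as np

rng = np.random.default_rng(0)
D, N_CLIENTS, ROUNDS, LOCAL_STEPS, LR = 3, 5, 50, 5, 0.05

# Non-IID data: each client's labels come from a shifted parameter.
clients = []
for _ in range(N_CLIENTS):
    X = rng.normal(size=(40, D))
    theta_i = np.ones(D) + 0.3 * rng.normal(size=D)
    clients.append((X, X @ theta_i))

w = np.zeros(D)                          # global model
for _ in range(ROUNDS):
    local_models = []
    for X, y in clients:                 # "computation": local steps
        w_i = w.copy()
        for _ in range(LOCAL_STEPS):
            w_i -= LR * X.T @ (X @ w_i - y) / len(y)
        local_models.append(w_i)
    w = np.mean(local_models, axis=0)    # "aggregation": server averages

print("global model:", np.round(w, 3))   # near the mean of the theta_i
```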
This list is automatically generated from the titles and abstracts of the papers on this site.