A Communication-Efficient Adaptive Algorithm for Federated Learning
under Cumulative Regret
- URL: http://arxiv.org/abs/2301.08869v2
- Date: Mon, 5 Jun 2023 18:23:06 GMT
- Title: A Communication-Efficient Adaptive Algorithm for Federated Learning
under Cumulative Regret
- Authors: Sudeep Salgia, Qing Zhao, Tamir Gabay, Kobi Cohen
- Abstract summary: We develop a distributed online learning algorithm that achieves order-optimal cumulative regret with low communication cost measured in the total number of bits transmitted over the entire learning horizon.
- Score: 22.80372994021181
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We consider the problem of online stochastic optimization in a distributed
setting with $M$ clients connected through a central server. We develop a
distributed online learning algorithm that achieves order-optimal cumulative
regret with low communication cost measured in the total number of bits
transmitted over the entire learning horizon. This is in contrast to existing
studies which focus on the offline measure of simple regret for learning
efficiency. The holistic measure for communication cost also departs from the
prevailing approach that separately tackles the communication frequency
and the number of bits in each communication round.
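
To make the setting concrete, here is a minimal Python sketch of distributed online stochastic optimization in which the communication cost is tallied holistically in bits; the 1-bit sign compression, the quadratic loss, and all constants are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

# A toy setting sketch (not the authors' algorithm): M clients run online
# SGD on a shared quadratic objective and upload sign-quantized gradients,
# so the total communication cost can be tallied in bits over the horizon.

rng = np.random.default_rng(0)
M, T, d = 4, 200, 5          # clients, horizon, dimension (all illustrative)
x_star = rng.normal(size=d)  # unknown optimum of the common objective
x = np.zeros(d)              # shared model maintained by the server
total_bits = 0               # holistic communication cost in bits
regret = 0.0                 # cumulative suboptimality proxy

for t in range(1, T + 1):
    grads = []
    for _ in range(M):
        noise = rng.normal(scale=0.1, size=d)
        g = (x - x_star) + noise   # stochastic gradient of 0.5*||x - x*||^2
        grads.append(np.sign(g))   # 1 bit per coordinate per client
        total_bits += d
    x -= (1.0 / np.sqrt(t)) * np.mean(grads, axis=0)
    regret += 0.5 * np.linalg.norm(x - x_star) ** 2

print(f"cumulative regret ~ {regret:.2f}, total bits sent = {total_bits}")
```

Counting bits rather than rounds reflects the paper's holistic measure: a scheme can use few communication rounds yet still transmit many bits per round.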
Related papers
- Federated Q-Learning with Reference-Advantage Decomposition: Almost Optimal Regret and Logarithmic Communication Cost [4.895986534376972]
We propose a novel model-free federated Q-learning algorithm, termed FedQ-Advantage.
Our algorithm requires only a logarithmic communication cost and achieves almost optimal regret, reaching the information bound up to a logarithmic factor and attaining a near-linear regret speedup over its single-agent counterpart when the time horizon is sufficiently large.
arXiv Detail & Related papers (2024-05-29T06:26:52Z)
- Federated Q-Learning: Linear Regret Speedup with Low Communication Cost [4.380110270510058]
We propose two federated Q-learning algorithms, termed FedQ-Hoeffding and FedQ-Bernstein.
We show that the corresponding total regrets achieve a linear speedup compared with their single-agent counterparts when the time horizon is sufficiently large.
These results rely on an event-triggered synchronization mechanism between the agents and the server (sketched below).
arXiv Detail & Related papers (2023-12-22T19:14:09Z)
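
As a rough illustration of what an event-triggered synchronization mechanism looks like (the drift-norm trigger, random-walk updates, and merge step below are assumptions of this sketch, not the FedQ algorithms themselves):

```python
import numpy as np

# Event-triggered synchronization sketch: each agent contacts the server
# only when its local estimate has drifted past a threshold since its last
# synchronization, so communication rounds stay rare. The trigger rule and
# random-walk "learning" steps are illustrative assumptions.

rng = np.random.default_rng(1)
M, T, d, threshold = 3, 300, 4, 0.5
server = np.zeros(d)
local = [np.zeros(d) for _ in range(M)]
synced = [np.zeros(d) for _ in range(M)]  # last value each agent pushed
rounds = 0

for t in range(T):
    for m in range(M):
        local[m] += 0.05 * rng.normal(size=d)             # local update step
        if np.linalg.norm(local[m] - synced[m]) > threshold:  # event trigger
            server += (local[m] - synced[m]) / M              # merge drift
            synced[m] = local[m].copy()
            rounds += 1

print(f"synchronization rounds: {rounds} of {M * T} possible")
```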
- Composite federated learning with heterogeneous data [11.40641907024708]
We propose a novel algorithm for solving the composite Federated Learning (FL) problem.
This algorithm manages non-smooth regularization by strategically decoupling the proximal operator and communication, and addresses client drift without any assumptions about data similarity.
We prove that our algorithm converges linearly to a neighborhood of the optimal solution and demonstrate its superiority over state-of-the-art methods in numerical experiments (the decoupling idea is sketched below).
arXiv Detail & Related papers (2023-09-04T20:22:57Z)
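
A minimal sketch of decoupling the proximal operator from communication for a composite objective; the l1 regularizer, least-squares losses, and step sizes are illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

# Decoupling sketch for a composite objective f(x) + lam*||x||_1: clients
# upload gradients of the smooth part f only, and the server alone applies
# the soft-thresholding prox of the l1 term, so the non-smooth part never
# travels over the network.

def soft_threshold(v, tau):
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

rng = np.random.default_rng(2)
M, T, d, lam, step = 4, 100, 6, 0.1, 0.2
A = [rng.normal(size=(8, d)) for _ in range(M)]   # per-client data
b = [rng.normal(size=8) for _ in range(M)]
x = np.zeros(d)

for _ in range(T):
    # clients: gradient of the smooth local least-squares term only
    grads = [Am.T @ (Am @ x - bm) / len(bm) for Am, bm in zip(A, b)]
    # server: average, gradient step, then the (non-communicated) prox
    x = soft_threshold(x - step * np.mean(grads, axis=0), step * lam)

print("sparse solution:", np.round(x, 3))
```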
- Communication-Efficient Decentralized Federated Learning via One-Bit Compressive Sensing [52.402550431781805]
Decentralized federated learning (DFL) has gained popularity due to its practicality across various applications.
Compared to the centralized version, training a shared model among a large number of nodes in DFL is more challenging.
We develop a novel algorithm based on the framework of the inexact alternating direction method (iADM); the one-bit compression primitive it builds on is sketched below.
arXiv Detail & Related papers (2023-08-31T12:22:40Z)
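
A hedged sketch of the one-bit compressive sensing primitive such an algorithm can build on; the matched-filter decoder and all sizes are illustrative assumptions, not the iADM-based method itself:

```python
import numpy as np

# One-bit compressive sensing sketch: a sparse model update u is encoded as
# the signs of a few random projections (one bit per measurement), and the
# receiver recovers the *direction* of u with a simple matched filter. The
# decoder and all dimensions are illustrative assumptions.

rng = np.random.default_rng(3)
d, k, m = 100, 5, 200                  # dimension, sparsity, number of bits
u = np.zeros(d)
u[rng.choice(d, size=k, replace=False)] = rng.normal(size=k)
u /= np.linalg.norm(u)                 # one-bit CS recovers direction only

Phi = rng.normal(size=(m, d))          # random matrix shared via a seed
bits = np.sign(Phi @ u)                # the m bits actually transmitted

u_hat = Phi.T @ bits                   # matched-filter decoder
u_hat /= np.linalg.norm(u_hat)
print(f"cosine similarity {u_hat @ u:.2f} using {m} bits vs {64 * d} "
      f"for raw float64")
```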
- QC-ODKLA: Quantized and Communication-Censored Online Decentralized Kernel Learning via Linearized ADMM [30.795725108364724]
This paper focuses on online kernel learning over a decentralized network.
We propose a novel learning framework named Online Decentralized Kernel Learning via Linearized ADMM; its combination of quantization and communication censoring is sketched below.
arXiv Detail & Related papers (2022-08-04T17:16:27Z)
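
A minimal sketch of communication censoring combined with quantization; the decaying threshold, uniform quantizer, and random-walk updates are assumptions of this illustration, not QC-ODKLA itself:

```python
import numpy as np

# Censoring-plus-quantization sketch: a node broadcasts its parameter only
# when it differs enough from the last broadcast value, and what it sends
# is coarsely quantized, saving both rounds and bits per round.

def quantize(v, step=0.1):
    return step * np.round(v / step)     # uniform quantizer

rng = np.random.default_rng(4)
T, d = 500, 3
theta = np.zeros(d)
last_sent = theta.copy()
transmissions = 0

for t in range(1, T + 1):
    theta += 0.05 * rng.normal(size=d)            # local (kernel) update
    tau = 1.0 / np.sqrt(t)                        # decaying censor threshold
    if np.linalg.norm(theta - last_sent) > tau:   # censoring test
        last_sent = quantize(theta)               # broadcast quantized state
        transmissions += 1

print(f"broadcast on {transmissions}/{T} slots")
```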
- Fundamental Limits of Communication Efficiency for Model Aggregation in Distributed Learning: A Rate-Distortion Approach [54.311495894129585]
We study the fundamental limits of the communication cost of model aggregation in distributed learning from a rate-distortion perspective.
We find that exploiting the correlation between worker nodes yields a significant communication gain for SignSGD.
arXiv Detail & Related papers (2022-06-28T13:10:40Z)
- Communication Efficient Distributed Learning for Kernelized Contextual Bandits [58.78878127799718]
We tackle the communication efficiency challenge of learning kernelized contextual bandits in a distributed setting.
We consider non-linear reward mappings by letting agents collaboratively search in a reproducing kernel Hilbert space.
We rigorously prove that our algorithm attains sub-linear rates in both regret and communication cost.
arXiv Detail & Related papers (2022-06-10T01:39:15Z)
- DisPFL: Towards Communication-Efficient Personalized Federated Learning via Decentralized Sparse Training [84.81043932706375]
We propose Dis-PFL, a novel personalized federated learning framework that operates over a decentralized (peer-to-peer) communication protocol.
Dis-PFL employs personalized sparse masks to customize sparse local models on the edge.
We demonstrate that our method can easily adapt to heterogeneous local clients with varying computation complexities (the sparse-mask idea is sketched below).
arXiv Detail & Related papers (2022-06-01T02:20:57Z)
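
A minimal sketch of personalized sparse masks; the mask density, stand-in gradients, and update rule are illustrative assumptions, not the Dis-PFL training procedure:

```python
import numpy as np

# Sparse-mask sketch: each client owns a binary mask, trains only the
# masked coordinates of its model, and would exchange only those sparse
# entries, cutting both personalization conflicts and communication payload.

rng = np.random.default_rng(5)
M, d, density = 3, 10, 0.4
masks = [(rng.random(d) < density).astype(float) for _ in range(M)]
models = [rng.normal(size=d) * mk for mk in masks]   # sparse local models

for _ in range(100):
    for m in range(M):
        grad = rng.normal(size=d)                    # stand-in local gradient
        models[m] -= 0.01 * grad * masks[m]          # update masked coords only

payload = [int(mk.sum()) for mk in masks]
print(f"entries exchanged per client: {payload} of {d} each")
```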
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), a paradigm of collaborative learning, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Asynchronous Upper Confidence Bound Algorithms for Federated Linear Bandits [35.47147821038291]
We propose a general framework with asynchronous model updates and communication for collections of both homogeneous and heterogeneous clients.
A rigorous theoretical analysis of the regret and communication cost under this distributed learning framework is provided (an asynchronous update rule is sketched below).
arXiv Detail & Related papers (2021-10-04T14:01:32Z)
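
A hedged sketch of an asynchronous LinUCB-style update with delta uploads; the trace-based trigger, arm distribution, and constants are assumptions of this illustration, not the paper's exact algorithms:

```python
import numpy as np

# Asynchronous federated LinUCB sketch: a client accumulates local
# sufficient statistics (A, b) for ridge regression and pushes only the
# *deltas* to the server when a trigger fires, so clients never wait for
# one another between communication rounds.

rng = np.random.default_rng(6)
d, T, alpha, lam = 3, 400, 1.0, 1.0
theta_star = rng.normal(size=d)
A = lam * np.eye(d)                      # server-side statistics
b = np.zeros(d)
dA, db = np.zeros((d, d)), np.zeros(d)   # one client's unsent deltas

for t in range(T):
    arms = rng.normal(size=(5, d))
    theta_hat = np.linalg.solve(A + dA, b + db)
    Ainv = np.linalg.inv(A + dA)
    ucb = arms @ theta_hat + alpha * np.sqrt(
        np.einsum('ij,jk,ik->i', arms, Ainv, arms))
    x = arms[np.argmax(ucb)]             # optimistic arm choice
    r = x @ theta_star + 0.1 * rng.normal()
    dA += np.outer(x, x)
    db += r * x
    if np.trace(dA) > 10.0:              # asynchronous upload trigger
        A += dA
        b += db                          # server merges the deltas
        dA, db = np.zeros((d, d)), np.zeros(d)

err = np.linalg.norm(np.linalg.solve(A + dA, b + db) - theta_star)
print(f"estimation error: {err:.3f}")
```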
- Accelerating Federated Edge Learning via Optimized Probabilistic Device Scheduling [57.271494741212166]
This paper formulates and solves the communication time minimization problem.
The optimized policy is found to gradually shift its priority from suppressing the remaining communication rounds to reducing per-round latency as training evolves.
The effectiveness of the proposed scheme is demonstrated via a use case on collaborative 3D object detection in autonomous driving.
arXiv Detail & Related papers (2021-07-24T11:39:17Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences of its use.