OFedQIT: Communication-Efficient Online Federated Learning via
Quantization and Intermittent Transmission
- URL: http://arxiv.org/abs/2205.06491v1
- Date: Fri, 13 May 2022 07:46:43 GMT
- Title: OFedQIT: Communication-Efficient Online Federated Learning via
Quantization and Intermittent Transmission
- Authors: Jonghwan Park, Dohyeok Kwon, Songnam Hong
- Abstract summary: Online federated learning (OFL) is a promising framework to collaboratively learn a sequence of non-linear functions (or models) from distributed streaming data.
We propose a communication-efficient OFL algorithm (named OFedQIT) by means of stochastic quantization and intermittent transmission.
Our analysis reveals that OFedQIT successfully addresses the drawbacks of OFedAvg while maintaining superior learning accuracy.
- Score: 7.6058140480517356
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Online federated learning (OFL) is a promising framework to collaboratively
learn a sequence of non-linear functions (or models) from distributed streaming
data arriving at multiple clients while keeping the privacy of their local
data. In this framework, we first construct a vanilla method (named OFedAvg) by
incorporating online gradient descent (OGD) into the de facto aggregation
method (named FedAvg). Despite its optimal asymptotic performance, OFedAvg
suffers from heavy communication overhead and long learning delay. To tackle
these shortcomings, we propose a communication-efficient OFL algorithm (named
OFedQIT) by means of stochastic quantization and intermittent
transmission. Our major contribution is to theoretically prove that OFedQIT
over $T$ time slots can achieve an optimal sublinear regret bound
$\mathcal{O}(\sqrt{T})$ for any real data (including non-IID data) while
significantly reducing the communication overhead. Furthermore, this optimality
is still guaranteed even when only a small fraction of clients (those with
faster processing times and high-quality communication channels) in the
network participate at any one time. Our analysis reveals that OFedQIT successfully addresses
the drawbacks of OFedAvg while maintaining superior learning accuracy.
Experiments with real datasets demonstrate the effectiveness of our algorithm
on various online classification and regression tasks.
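The abstract identifies the key mechanics of OFedQIT: each client runs online gradient descent (OGD) on its streaming samples, updates are aggregated FedAvg-style, uploads are stochastically quantized, and communication happens only intermittently and only for a subset of clients. The sketch below is a minimal illustration of that pipeline on a toy streaming regression task, assuming a QSGD-style unbiased quantizer, a fixed transmission period H, and random half-participation; these choices and all hyperparameters are illustrative, not the authors' exact algorithm.

```python
# Minimal sketch (illustrative, not the paper's exact OFedQIT): online
# federated averaging with stochastic quantization and intermittent
# transmission, on a toy streaming linear-regression task.
import numpy as np

rng = np.random.default_rng(0)

def stochastic_quantize(v, levels=16):
    """QSGD-style unbiased quantizer: each coordinate is randomly rounded
    to one of `levels` uniform levels of |v_i| / ||v||, so E[q(v)] = v."""
    norm = np.linalg.norm(v)
    if norm == 0.0:
        return v
    scaled = np.abs(v) / norm * levels
    lower = np.floor(scaled)
    round_up = rng.random(v.shape) < (scaled - lower)
    return np.sign(v) * (lower + round_up) * norm / levels

def local_ogd_step(w, x, y, lr=0.05):
    """One online-gradient-descent step on a single streaming sample
    (squared loss, linear model)."""
    grad = (w @ x - y) * x
    return w - lr * grad

# Toy setup: K clients, d-dimensional model, T time slots, period H.
K, d, T, H = 10, 5, 200, 5
w_true = rng.normal(size=d)          # hypothetical ground-truth model
w_global = np.zeros(d)
w_local = np.tile(w_global, (K, 1))

for t in range(1, T + 1):
    # Each client sees one fresh sample and takes a local OGD step.
    for k in range(K):
        x = rng.normal(size=d)
        y = w_true @ x + 0.1 * rng.normal()
        w_local[k] = local_ogd_step(w_local[k], x, y)

    # Intermittent transmission: upload only every H slots, and only a
    # random subset of clients participates; uploads are quantized.
    if t % H == 0:
        participants = rng.choice(K, size=K // 2, replace=False)
        updates = [stochastic_quantize(w_local[k] - w_global) for k in participants]
        w_global = w_global + np.mean(updates, axis=0)   # FedAvg-style averaging
        w_local[:] = w_global                            # broadcast new global model

print("final model error:", np.linalg.norm(w_global - w_true))
```

In this sketch the upload frequency is cut roughly by a factor of H and each upload is further compressed by quantization, which is the trade-off the paper's regret analysis shows can be made while still achieving the $\mathcal{O}(\sqrt{T})$ bound.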
Related papers
- Hyperdimensional Computing Empowered Federated Foundation Model over Wireless Networks for Metaverse [56.384390765357004]
We propose an integrated federated split learning and hyperdimensional computing framework for emerging foundation models.
This novel approach reduces communication costs, computation load, and privacy risks, making it suitable for resource-constrained edge devices in the Metaverse.
arXiv Detail & Related papers (2024-08-26T17:03:14Z) - Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [0.0]
Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients").
Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients, and a decline in model accuracy under non-IID local data distributions ("client drift").
We propose and analyze Asynchronous Exact Averaging (AREA), a new (sub)gradient algorithm that utilizes communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies.
arXiv Detail & Related papers (2024-05-16T14:22:49Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Analysis and Optimization of Wireless Federated Learning with Data
Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate the loss function minimization problem, under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of the learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z) - Gradient Sparsification for Efficient Wireless Federated Learning with
Differential Privacy [25.763777765222358]
Federated learning (FL) enables distributed clients to collaboratively train a machine learning model without sharing raw data with each other.
As the model size grows, training latency increases due to the limited transmission bandwidth, and model accuracy degrades when differential privacy (DP) protection is applied.
We propose a sparsification-empowered FL framework over wireless channels to improve training efficiency without sacrificing convergence performance.
arXiv Detail & Related papers (2023-04-09T05:21:15Z) - FLSTRA: Federated Learning in Stratosphere [22.313423693397556]
A high altitude platform station enables a number of terrestrial clients to collaboratively learn a global model without sharing their training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
Second, we propose a communication and resource-aware algorithm to achieve the target FL accuracy while deriving an upper bound for its convergence.
arXiv Detail & Related papers (2023-02-01T00:52:55Z) - TCT: Convexifying Federated Learning using Bootstrapped Neural Tangent
Kernels [141.29156234353133]
State-of-the-art federated learning methods can perform far worse than their centralized counterparts when clients have dissimilar data distributions.
We show this disparity can largely be attributed to challenges presented by non-convexity.
We propose a Train-Convexify neural network (TCT) procedure to sidestep this issue.
arXiv Detail & Related papers (2022-07-13T16:58:22Z) - Straggler-Resilient Federated Learning: Leveraging the Interplay Between
Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z) - CosSGD: Nonlinear Quantization for Communication-efficient Federated
Learning [62.65937719264881]
Federated learning facilitates learning across clients without transferring local data on these clients to a central server.
We propose a nonlinear quantization for compressed gradient descent, which can be easily utilized in federated learning.
Our system significantly reduces the communication cost by up to three orders of magnitude, while maintaining convergence and accuracy of the training process.
arXiv Detail & Related papers (2020-12-15T12:20:28Z) - Coded Computing for Federated Learning at the Edge [3.385874614913973]
Federated Learning (FL) enables training a global model from data generated locally at the client nodes, without moving client data to a centralized server.
Recent work proposes to mitigate stragglers and speed up training for linear regression tasks by assigning redundant computations at the MEC server.
We develop CodedFedL that addresses the difficult task of extending CFL to distributed non-linear regression and classification problems with multioutput labels.
arXiv Detail & Related papers (2020-07-07T08:20:47Z)