Asynchronous Federated Learning with Bidirectional Quantized
Communications and Buffered Aggregation
- URL: http://arxiv.org/abs/2308.00263v1
- Date: Tue, 1 Aug 2023 03:50:58 GMT
- Title: Asynchronous Federated Learning with Bidirectional Quantized
Communications and Buffered Aggregation
- Authors: Tomas Ortega, Hamid Jafarkhani
- Abstract summary: Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a state-of-the-art algorithm known for its efficiency and high scalability.
We present a new algorithm (QAFeL) with a quantization scheme that establishes a shared "hidden" state between the server and clients to avoid the error propagation caused by direct quantization.
- Score: 39.057968279167966
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Asynchronous Federated Learning with Buffered Aggregation (FedBuff) is a
state-of-the-art algorithm known for its efficiency and high scalability.
However, it has a high communication cost, which has not been examined with
quantized communications. To tackle this problem, we present a new algorithm
(QAFeL), with a quantization scheme that establishes a shared "hidden" state
between the server and clients to avoid the error propagation caused by direct
quantization. This approach allows for high precision while significantly
reducing the data transmitted during client-server interactions. We provide
theoretical convergence guarantees for QAFeL and corroborate our analysis with
experiments on a standard benchmark.
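To make the hidden-state idea concrete, here is a minimal sketch (not the paper's exact quantizer or protocol): both endpoints keep an identical hidden state h, only the quantized residual Q(x - h) crosses the wire, and both sides apply the same update to h, so quantization error cannot compound across rounds.

```python
import numpy as np

def quantize(v, bits=4):
    """Uniform stochastic quantizer (illustrative; not the paper's exact Q)."""
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2 ** (bits - 1)
    y = v / scale * levels
    low = np.floor(y)
    y = low + (np.random.rand(*v.shape) < (y - low))  # stochastic rounding
    return np.clip(y, -levels, levels) / levels * scale

class HiddenStateCodec:
    """Endpoints keep identical hidden states; only Q(x - h) is transmitted."""
    def __init__(self, dim, bits=4):
        self.h = np.zeros(dim)
        self.bits = bits

    def encode(self, x):
        q = quantize(x - self.h, self.bits)  # quantize the residual, not x itself
        self.h += q                          # sender-side hidden-state update
        return q

    def decode(self, q):
        self.h += q                          # receiver applies the identical update
        return self.h                        # current estimate of x

sender, receiver = HiddenStateCodec(8), HiddenStateCodec(8)
x = np.random.randn(8)
for _ in range(10):                          # the residual shrinks round after round
    estimate = receiver.decode(sender.encode(x))
print(np.linalg.norm(estimate - x))          # small: error did not propagate
```

Because encode and decode apply identical updates, the two copies of the hidden state never diverge, and repeated rounds drive the estimate toward x even with a coarse quantizer.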
Related papers
- Towards Resource-Efficient Federated Learning in Industrial IoT for Multivariate Time Series Analysis [50.18156030818883]
Anomalies and missing data constitute a thorny problem in industrial applications.
Deep-learning-enabled anomaly detection has emerged as a critical direction.
The data collected on edge devices contain private user information.
arXiv Detail & Related papers (2024-11-06T15:38:31Z)
- Boosting the Performance of Decentralized Federated Learning via Catalyst Acceleration [66.43954501171292]
We introduce Catalyst Acceleration and propose an accelerated decentralized federated learning algorithm called DFedCata.
DFedCata consists of two main components: the Moreau envelope function, which addresses parameter inconsistencies, and Nesterov's extrapolation step, which accelerates the aggregation phase.
Empirically, we demonstrate the advantages of the proposed algorithm in both convergence speed and generalization performance on CIFAR10/100 with various non-iid data distributions.
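As a hedged illustration of the Nesterov extrapolation step mentioned above (its exact placement inside DFedCata and the constants are assumptions), the iterate is pushed along its recent direction of travel before the next gradient step:

```python
import numpy as np

def nesterov_extrapolate(x_new, x_prev, beta=0.9):
    """Push the iterate along its recent direction of travel before the next step."""
    return x_new + beta * (x_new - x_prev)

# Toy objective f(x) = 0.5 * ||x||^2, whose gradient is simply x.
x_prev = np.ones(4)
y = x_prev.copy()
lr = 0.1
for _ in range(50):
    x_new = y - lr * y                       # gradient step taken at the extrapolated point
    y = nesterov_extrapolate(x_new, x_prev)  # extrapolation (the "acceleration" step)
    x_prev = x_new
print(np.linalg.norm(x_prev))                # approaches 0
```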
arXiv Detail & Related papers (2024-10-09T06:17:16Z)
- Quantized and Asynchronous Federated Learning [22.40154714677385]
We develop a novel scheme, Quantized Asynchronous Federated Learning (QAFeL), to deal with the communication bottleneck.
We prove that QAFeL achieves an $\mathcal{O}(1/\sqrt{T})$ ergodic convergence rate without requiring uniform client arrivals.
We validate our theoretical findings by using standard benchmarks.
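The buffered-aggregation side of this line of work (FedBuff and QAFeL) can be sketched as follows; the buffer size K and the plain averaging rule are simplifying assumptions:

```python
import numpy as np

class BufferedAggregator:
    """FedBuff-style server: apply buffered client updates once the buffer holds K of them."""
    def __init__(self, dim, buffer_size=2, server_lr=1.0):
        self.model = np.zeros(dim)
        self.buffer = []
        self.K = buffer_size
        self.lr = server_lr

    def receive(self, client_update):
        self.buffer.append(client_update)       # updates arrive asynchronously, in any order
        if len(self.buffer) >= self.K:
            avg = np.mean(self.buffer, axis=0)  # aggregate the buffered updates
            self.model += self.lr * avg
            self.buffer.clear()
        return self.model

server = BufferedAggregator(dim=3, buffer_size=2)
for upd in (np.ones(3), 2 * np.ones(3), -np.ones(3)):
    print(server.receive(upd))                  # model changes only when the buffer fills
```

Buffering decouples aggregation from client arrival order, which is what gives FedBuff its scalability and keeps it compatible with techniques such as secure aggregation.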
arXiv Detail & Related papers (2024-09-30T21:22:41Z)
- Rate-Constrained Quantization for Communication-Efficient Federated Learning [5.632231145349047]
We develop a novel quantized FL framework, called rate-constrained federated learning (RC-FED).
We formulate this scheme as a joint optimization in which the quantization distortion is minimized while the rate of encoded gradients is kept below a target threshold.
We analyze the convergence behavior of RC-FED, and show its superior performance against baseline quantized FL schemes on several datasets.
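A toy version of this rate-constrained formulation (not RC-FED's actual optimization, which is stated over encoded gradient distributions): scan candidate bit-widths, discard those exceeding the rate budget, and keep the one with the lowest empirical distortion.

```python
import numpy as np

def quantize_uniform(v, bits):
    """Deterministic uniform quantizer used only to measure distortion."""
    scale = np.max(np.abs(v)) + 1e-12
    levels = 2 ** (bits - 1)
    return np.round(v / scale * levels) / levels * scale

def pick_bitwidth(grad, rate_budget_bits, candidates=(2, 4, 6, 8)):
    """Choose the bit-width minimizing distortion while the encoded size stays in budget."""
    best_bits, best_err = None, np.inf
    for b in candidates:
        if b * grad.size > rate_budget_bits:   # rate constraint: total encoded bits <= budget
            continue
        err = np.linalg.norm(grad - quantize_uniform(grad, b))
        if err < best_err:
            best_bits, best_err = b, err
    return best_bits, best_err

g = np.random.randn(1000)
print(pick_bitwidth(g, rate_budget_bits=6000))  # budget allows at most 6 bits per coordinate
```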
arXiv Detail & Related papers (2024-09-10T08:22:01Z)
- Semantic Communication for Cooperative Perception using HARQ [51.148203799109304]
We leverage an importance map to distill critical semantic information, introducing a cooperative perception semantic communication framework.
To counter the challenges posed by time-varying multipath fading, our approach incorporates orthogonal frequency-division multiplexing (OFDM) along with channel estimation and equalization strategies.
We introduce a novel semantic error detection method that is integrated with our semantic communication framework in the spirit of hybrid automatic repeat request (HARQ).
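For intuition, a minimal stop-and-wait retransmission loop with CRC-based error detection; real HARQ additionally combines retransmissions with forward error correction, and the paper's detector operates on semantics rather than checksums.

```python
import random
import zlib

def send_with_harq(payload: bytes, channel, max_retries=6):
    """Stop-and-wait retransmission: resend until the receiver's error check passes."""
    frame = payload + zlib.crc32(payload).to_bytes(4, "big")  # append CRC-32
    for attempt in range(1, max_retries + 1):
        received = channel(frame)
        data, crc = received[:-4], received[-4:]
        if zlib.crc32(data).to_bytes(4, "big") == crc:        # error detection
            return data, attempt                              # ACK: deliver payload
    raise RuntimeError("gave up after max retries")           # retry budget exhausted

def noisy_channel(frame: bytes, flip_prob=0.3):
    """Toy channel: occasionally corrupts one byte."""
    frame = bytearray(frame)
    if random.random() < flip_prob:
        i = random.randrange(len(frame))
        frame[i] ^= 0xFF
    return bytes(frame)

print(send_with_harq(b"cooperative perception features", noisy_channel))
```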
arXiv Detail & Related papers (2024-08-29T08:53:26Z)
- Asynchronous Federated Stochastic Optimization for Heterogeneous Objectives Under Arbitrary Delays [0.0]
Federated learning (FL) was recently proposed to securely train models with data held over multiple locations ("clients").
Two major challenges hindering the performance of FL algorithms are long training times caused by straggling clients and a decline in model accuracy under non-iid local data distributions ("client drift").
We propose and analyze Asynchronous Exact Averaging (AREA), a new (sub)gradient algorithm that utilizes communication to speed up convergence and enhance scalability, and employs client memory to correct the client drift caused by variations in client update frequencies.
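One way to picture this frequency correction (an assumption; AREA's actual memory-based rule differs in detail): reweight each client's update inversely to how often that client reports, so fast clients do not dominate the aggregate.

```python
import numpy as np
from collections import defaultdict

class FrequencyCorrectedServer:
    """Reweight asynchronous client updates inversely to each client's update frequency
    (illustrative sketch, not AREA's exact correction)."""
    def __init__(self, dim, lr=0.5):
        self.model = np.zeros(dim)
        self.counts = defaultdict(int)
        self.lr = lr

    def receive(self, client_id, update):
        self.counts[client_id] += 1
        total = sum(self.counts.values())
        # weight = (uniform share per client) / (this client's share of arrivals)
        weight = (1 / len(self.counts)) / (self.counts[client_id] / total)
        self.model += self.lr * weight * update
        return self.model

srv = FrequencyCorrectedServer(dim=2)
for cid in ["fast", "fast", "fast", "slow"]:   # "fast" reports three times as often
    print(cid, srv.receive(cid, np.ones(2)))   # its per-arrival weight shrinks accordingly
```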
arXiv Detail & Related papers (2024-05-16T14:22:49Z)
- Communication-Efficient Federated Learning With Data and Client Heterogeneity [22.432529149142976]
Federated Learning (FL) enables large-scale distributed training of machine learning models.
However, executing FL at scale comes with inherent practical challenges.
We present the first variant of the classic federated averaging (FedAvg) algorithm.
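For reference, here is the classic FedAvg aggregation step that the proposed variant builds on (the variant's own modifications are not spelled out in this summary):

```python
import numpy as np

def fedavg(client_models, client_sizes):
    """Classic FedAvg aggregation: average client models weighted by local dataset size."""
    weights = np.asarray(client_sizes, dtype=float)
    weights /= weights.sum()
    return sum(w * m for w, m in zip(weights, client_models))

models = [np.random.randn(5) for _ in range(3)]
print(fedavg(models, client_sizes=[100, 300, 600]))  # larger datasets count for more
```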
arXiv Detail & Related papers (2022-06-20T22:39:39Z)
- Wireless Quantized Federated Learning: A Joint Computation and Communication Design [36.35684767732552]
In this paper, we aim to minimize the total convergence time of FL by quantizing the local model parameters prior to uplink transmission.
We jointly optimize the computing and communication resources and the number of quantization bits in order to guarantee a minimized convergence time across all global rounds.
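A toy model of the trade-off being optimized (all constants invented for illustration): fewer quantization bits shrink each upload but inflate gradient noise, so more rounds are needed; the sweet spot minimizes their product.

```python
def total_time(bits, dim=10**6, bandwidth_bps=5e6, compute_s=0.5):
    """Toy model of total FL convergence time as a function of quantization bit-width.
    Fewer bits mean shorter uploads but more rounds (illustrative variance penalty)."""
    upload_s = bits * dim / bandwidth_bps
    rounds = 100 * (1 + 4.0 / (2 ** bits - 1) ** 2)
    return rounds * (compute_s + upload_s)

best = min(range(1, 17), key=total_time)
print(best, total_time(best))  # an intermediate bit-width wins, not the extremes
```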
arXiv Detail & Related papers (2022-03-11T12:30:08Z)
- Adaptive Quantization of Model Updates for Communication-Efficient Federated Learning [75.45968495410047]
Communication of model updates between client nodes and the central aggregating server is a major bottleneck in federated learning.
Gradient quantization is an effective way of reducing the number of bits required to communicate each model update.
We propose an adaptive quantization strategy called AdaFL that aims to achieve communication efficiency as well as a low error floor.
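A minimal sketch of an adaptive bit-width schedule in this spirit (the linear shape is an assumption, not AdaFL's actual rule): spend few bits early, more bits late, to get cheap early rounds and a low final error floor.

```python
def bits_for_round(t, total_rounds, b_min=2, b_max=8):
    """Linear bit-width schedule: coarse early (cheap), fine late (low error floor)."""
    frac = t / max(total_rounds - 1, 1)
    return round(b_min + frac * (b_max - b_min))

print([bits_for_round(t, 10) for t in range(10)])  # [2, 3, 3, 4, 5, 5, 6, 7, 7, 8]
```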
arXiv Detail & Related papers (2021-02-08T19:14:21Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
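An illustrative version of speed-aware client selection (the paper couples the cohort size to the statistical accuracy of the current model; the linear growth schedule here is an assumption): start with the fastest clients and widen the participating set as training progresses.

```python
import numpy as np

def select_clients(speeds, round_idx, total_rounds):
    """Pick the fastest clients first and grow the cohort over time (illustrative only)."""
    n = len(speeds)
    k = max(1, int(np.ceil(n * (round_idx + 1) / total_rounds)))  # cohort grows each round
    return np.argsort(speeds)[::-1][:k]      # indices of the k fastest clients

speeds = np.array([3.0, 1.0, 5.0, 2.0, 4.0])  # higher = faster
for t in range(5):
    print(t, select_clients(speeds, t, 5))    # stragglers join only in later rounds
```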
arXiv Detail & Related papers (2020-12-28T19:21:14Z)