A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
- URL: http://arxiv.org/abs/2210.17474v1
- Date: Mon, 31 Oct 2022 16:59:58 GMT
- Title: A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
- Authors: Afsaneh Mahmoudi, José Mairton Barros Da Silva Júnior, Hossein S. Ghadikolaei, Carlo Fischione
- Abstract summary: Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients.
In FL, to reduce the communication overhead of data between clients and the server, each client communicates the local FL parameters instead of the local data.
This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), which significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations.
- Score: 11.990047476303252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) plays a prominent role in solving machine learning
problems with data distributed across clients. In FL, to reduce the
communication overhead of data between clients and the server, each client
communicates the local FL parameters instead of the local data. However, when a
wireless network connects clients and the server, the communication resource
limitations of the clients may prevent completing the training of the FL
iterations. Therefore, communication-efficient variants of FL have been widely
investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the
promising communication-efficient approaches to lower resource usage in FL.
However, LAQ assigns a fixed number of bits for all iterations, which may be
communication-inefficient when the number of iterations is medium to high or
when training is approaching convergence. This paper proposes Adaptive Lazily
Aggregated Quantized Gradient (A-LAQ), a method that extends LAQ by assigning
an adaptive number of communication bits during the FL iterations. We train FL
under an energy-constrained condition and present a convergence analysis for
A-LAQ. The experimental results highlight that A-LAQ outperforms LAQ by up to
a 50% reduction in spent communication energy and an 11% increase in test
accuracy.
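The abstract describes A-LAQ only at a high level: clients quantize their gradients, skip uploads whose change since the last transmission is small (lazy aggregation), and, unlike LAQ, vary the number of quantization bits across iterations. The following is a minimal sketch of that idea; the bit schedule, the skipping threshold, and all function names are illustrative assumptions, not the paper's actual rules.

```python
# Illustrative sketch of lazily aggregated, adaptively quantized gradient
# uploads in the spirit of LAQ / A-LAQ. The bit schedule, skipping threshold,
# and helper names are hypothetical placeholders, not the authors' design.
import numpy as np

def quantize(vec, num_bits):
    """Uniform quantization of `vec` onto a grid with 2**num_bits levels."""
    scale = np.max(np.abs(vec)) + 1e-12
    levels = 2 ** num_bits - 1
    q = np.round((vec / scale + 1.0) / 2.0 * levels)  # map [-scale, scale] to {0, ..., levels}
    return (q / levels * 2.0 - 1.0) * scale           # dequantized representative

def bit_budget(t, total_iters, b_max=8, b_min=2):
    """Illustrative adaptive rule: spend more bits early, fewer bits later."""
    frac = t / max(total_iters - 1, 1)
    return int(round(b_max - frac * (b_max - b_min)))

def a_laq_round(client_grads, last_sent, t, total_iters, threshold=1e-3):
    """One round: a client uploads a freshly quantized gradient only if it
    differs enough from the one it sent last; otherwise the server reuses
    the stale value it already holds."""
    num_bits = bit_budget(t, total_iters)
    bits_spent = 0
    for i, grad in enumerate(client_grads):
        candidate = quantize(grad, num_bits)
        if np.linalg.norm(candidate - last_sent[i]) ** 2 >= threshold:
            last_sent[i] = candidate                  # upload happens this round
            bits_spent += num_bits * grad.size        # communication cost actually paid
    return np.mean(last_sent, axis=0), bits_spent     # server-side aggregation
```

Under the paper's energy-constrained setting, the per-iteration bit budget would be tied to each client's remaining communication-energy budget rather than to the iteration index alone, but the interface above would stay the same.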
Related papers
- FedScalar: A Communication efficient Federated Learning [0.0]
Federated learning (FL) has gained considerable popularity for distributed machine learning.
FedScalar enables agents to communicate updates using a single scalar.
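The summary above only states that a single scalar is exchanged per update. The sketch below shows one generic way such a scheme can work, projecting the local update onto a pseudo-random direction shared through a common seed; this is an assumption for illustration, not necessarily FedScalar's actual mechanism.

```python
# Generic single-scalar update exchange via a shared random projection.
# Hypothetical illustration only, not FedScalar's published algorithm.
import numpy as np

def client_scalar(update, seed):
    rng = np.random.default_rng(seed)        # same seed is known to the server
    v = rng.standard_normal(update.shape)    # shared random direction, E[v v^T] = I
    return float(update @ v)                 # the single scalar sent uplink

def server_reconstruct(scalars, seed, dim):
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(dim)
    # Unbiased estimate of the mean client update, in expectation over v.
    return np.mean(scalars) * v
```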
arXiv Detail & Related papers (2024-10-03T07:06:49Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring far fewer communication and computing resources than sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks in terms of HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has gained great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experimental results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43%, respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without model pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- FLSTRA: Federated Learning in Stratosphere [22.313423693397556]
A high-altitude platform station enables a number of terrestrial clients to collaboratively learn a global model without sharing their training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
We also propose a communication- and resource-aware algorithm to achieve the target FL accuracy while deriving an upper bound for its convergence.
arXiv Detail & Related papers (2023-02-01T00:52:55Z)
- Resource Allocation for Compression-aided Federated Learning with High Distortion Rate [3.7530276852356645]
We formulate an optimization problem for compression-aided FL that balances the distortion rate, the number of participating IoT devices, and the convergence rate.
By actively controlling participating IoT devices, we can avoid the training divergence of compression-aided FL while maintaining the communication efficiency.
arXiv Detail & Related papers (2022-06-02T05:00:37Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has attracted increasing research attention.
It is of interest to investigate fast-responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
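The entry above mentions a bisection search without stating the underlying optimization problem, so the sketch below only illustrates the generic search pattern over a scalar variable (e.g., a delay target) under a monotone feasibility check; the feasibility function is a hypothetical placeholder.

```python
# Generic bisection over a scalar quantity with a monotone feasibility check.
# The concrete resource-allocation feasibility test is not given in this
# summary, so `feasible` here is a stand-in.
def bisection_min(feasible, lo, hi, tol=1e-6):
    """Smallest value in [lo, hi] for which `feasible` holds, assuming the
    check is monotone: infeasible below some point, feasible above it."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid   # mid works, try smaller values
        else:
            lo = mid   # mid fails, must increase
    return hi

# Toy usage with a hypothetical feasibility condition:
optimal_delay = bisection_min(lambda d: d >= 0.37, lo=0.0, hi=1.0)
```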
arXiv Detail & Related papers (2020-07-05T19:00:07Z)
- Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features desirable properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)