A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
- URL: http://arxiv.org/abs/2210.17474v1
- Date: Mon, 31 Oct 2022 16:59:58 GMT
- Title: A-LAQ: Adaptive Lazily Aggregated Quantized Gradient
- Authors: Afsaneh Mahmoudi, José Mairton Barros Da Silva Júnior, Hossein S.
Ghadikolaei, Carlo Fischione
- Abstract summary: Federated Learning (FL) plays a prominent role in solving machine learning problems with data distributed across clients.
In FL, to reduce the communication overhead of data between clients and the server, each client communicates the local FL parameters instead of the local data.
This paper proposes Adaptive Lazily Aggregated Quantized Gradient (A-LAQ), which significantly extends LAQ by assigning an adaptive number of communication bits during the FL iterations.
- Score: 11.990047476303252
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) plays a prominent role in solving machine learning
problems with data distributed across clients. In FL, to reduce the
communication overhead of data between clients and the server, each client
communicates the local FL parameters instead of the local data. However, when a
wireless network connects clients and the server, the communication resource
limitations of the clients may prevent completing the training of the FL
iterations. Therefore, communication-efficient variants of FL have been widely
investigated. Lazily Aggregated Quantized Gradient (LAQ) is one of the
promising communication-efficient approaches to lower resource usage in FL.
However, LAQ assigns a fixed number of bits for all iterations, which may be
communication-inefficient when the number of iterations is medium to high or
convergence is approaching. This paper proposes Adaptive Lazily Aggregated
Quantized Gradient (A-LAQ), a method that extends LAQ by assigning an adaptive
number of communication bits during the FL iterations. We train FL under an
energy-constrained condition and provide a convergence analysis for A-LAQ. The
experimental results show that A-LAQ outperforms LAQ, with up to a 50% reduction
in spent communication energy and an 11% increase in test accuracy.
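To make the mechanism in the abstract concrete, below is a minimal Python sketch of the general idea: a client quantizes the change in its gradient, skips an upload when that change is small (the lazy aggregation of LAQ), and varies the per-iteration bit budget instead of fixing it (the adaptive element of A-LAQ). The uniform quantizer, the skipping tolerance, and the linearly decaying bit schedule are illustrative assumptions, not the exact rules derived in the paper.

```python
# Hypothetical sketch of the A-LAQ idea: lazily aggregated, uniformly quantized
# gradients with an adaptive per-iteration bit budget. The quantizer, skipping
# rule, and bit schedule are illustrative assumptions, not the paper's rules.
import numpy as np


def quantize(grad, ref, bits):
    """Uniformly quantize the innovation (grad - ref) with `bits` bits per entry."""
    levels = 2 ** bits - 1
    innovation = grad - ref
    radius = np.max(np.abs(innovation)) + 1e-12
    step = 2 * radius / levels
    q = np.round((innovation + radius) / step) * step - radius
    return ref + q  # dequantized gradient the server would reconstruct


def bit_schedule(t, total_rounds, b_max=8, b_min=2):
    """Illustrative adaptive schedule: more bits early, fewer near convergence."""
    frac = t / max(total_rounds - 1, 1)
    return int(round(b_max - frac * (b_max - b_min)))


def client_update(grad, last_sent, bits, skip_tol=1e-3):
    """LAQ-style lazy rule: upload only if the quantized innovation is large enough."""
    q_grad = quantize(grad, last_sent, bits)
    if np.linalg.norm(q_grad - last_sent) ** 2 < skip_tol:
        return last_sent, False, 0          # skip: server reuses the stale gradient
    return q_grad, True, bits * grad.size   # upload: pay `bits` per coordinate


# Toy run with a single client minimizing f(w) = 0.5 * ||w||^2 (gradient = w).
rng = np.random.default_rng(0)
w = rng.normal(size=10)
last_sent = np.zeros_like(w)
total_rounds, lr, spent_bits = 50, 0.1, 0

for t in range(total_rounds):
    bits = bit_schedule(t, total_rounds)
    last_sent, uploaded, cost = client_update(w, last_sent, bits)
    spent_bits += cost
    w -= lr * last_sent                      # server step with the (possibly stale) gradient

print(f"final ||w|| = {np.linalg.norm(w):.4f}, total bits sent = {spent_bits}")
```

In this toy setting, the total bit count drops both because uploads are skipped once the gradient stops changing much and because fewer bits are allocated in later rounds, which is the qualitative trade-off A-LAQ targets under an energy constraint.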
Related papers
- Over-the-Air Fair Federated Learning via Multi-Objective Optimization [52.295563400314094]
We propose an over-the-air fair federated learning algorithm (OTA-FFL) to train fair FL models.
Experiments demonstrate the superiority of OTA-FFL in achieving fairness and robust performance.
arXiv Detail & Related papers (2025-01-06T21:16:51Z)
- Accelerating Energy-Efficient Federated Learning in Cell-Free Networks with Adaptive Quantization [45.99908087352264]
Federated Learning (FL) enables clients to share learning parameters instead of local data, reducing communication overhead.
Traditional wireless networks face latency challenges with FL.
We propose an energy-efficient, low-latency FL framework featuring optimized uplink power allocation for seamless client-server collaboration.
arXiv Detail & Related papers (2024-12-30T08:10:21Z)
- Adaptive Quantization Resolution and Power Control for Federated Learning over Cell-free Networks [41.23236059700041]
Federated learning (FL) is a distributed learning framework where users train a global model by exchanging local model updates with a server instead of raw datasets.
Cell-free massive multiple-input multiple-output (CFmMIMO) is a promising solution to serve numerous users on the same time/frequency resource with similar rates.
In this paper, we co-optimize the physical layer with the FL application to mitigate the straggler effect.
arXiv Detail & Related papers (2024-12-14T16:08:05Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
To optimize the pruning process itself, only thresholds are communicated between a server and clients instead of parameters.
Global thresholds are used to update model parameters by extracting aggregated parameter importance.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- FLrce: Resource-Efficient Federated Learning with Early-Stopping Strategy [7.963276533979389]
Federated Learning (FL) has achieved great popularity in the Internet of Things (IoT).
We present FLrce, an efficient FL framework with a relationship-based client selection and early-stopping strategy.
Experiment results show that, compared with existing efficient FL frameworks, FLrce improves the computation and communication efficiency by at least 30% and 43% respectively.
arXiv Detail & Related papers (2023-10-15T10:13:44Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical FL (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing the communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Joint Age-based Client Selection and Resource Allocation for Communication-Efficient Federated Learning over NOMA Networks [8.030674576024952]
In federated learning (FL), distributed clients can collaboratively train a shared global model while retaining their own training data locally.
In this paper, a joint optimization problem of client selection and resource allocation is formulated, aiming to minimize the total time consumption of each round in FL over a non-orthogonal multiple access (NOMA) enabled wireless network.
In addition, a server-side artificial neural network (ANN) is proposed to predict the FL models of clients who are not selected at each round to further improve FL performance.
arXiv Detail & Related papers (2023-04-18T13:58:16Z)
- FLSTRA: Federated Learning in Stratosphere [22.313423693397556]
A high-altitude platform station enables a number of terrestrial clients to collaboratively learn a global model without sharing their training data.
We develop a joint client selection and resource allocation algorithm for uplink and downlink to minimize the FL delay.
We also propose a communication- and resource-aware algorithm to achieve the target FL accuracy while deriving an upper bound for its convergence.
arXiv Detail & Related papers (2023-02-01T00:52:55Z)
- Federated Learning over Wireless IoT Networks with Optimized Communication and Resources [98.18365881575805]
Federated learning (FL), as a paradigm of collaborative learning techniques, has obtained increasing research attention.
It is of interest to investigate fast responding and accurate FL schemes over wireless systems.
We show that the proposed communication-efficient federated learning framework converges at a strong linear rate.
arXiv Detail & Related papers (2021-10-22T13:25:57Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
arXiv Detail & Related papers (2020-07-05T19:00:07Z)