AQUILA: Communication Efficient Federated Learning with Adaptive
Quantization in Device Selection Strategy
- URL: http://arxiv.org/abs/2308.00258v2
- Date: Wed, 4 Oct 2023 13:27:37 GMT
- Title: AQUILA: Communication Efficient Federated Learning with Adaptive
Quantization in Device Selection Strategy
- Authors: Zihao Zhao, Yuzhu Mao, Zhenpeng Shi, Yang Liu, Tian Lan, Wenbo Ding,
and Xiao-Ping Zhang
- Abstract summary: This paper introduces AQUILA (adaptive quantization in device selection strategy), a novel adaptive framework devised to handle the communication overhead and partial-participation issues of existing adaptive quantization methods.
AQUILA integrates a sophisticated device selection method that prioritizes the quality and usefulness of device updates.
Our experiments demonstrate that AQUILA significantly decreases communication costs compared to existing methods.
- Score: 27.443439653087662
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The widespread adoption of Federated Learning (FL), a privacy-preserving
distributed learning methodology, has been impeded by the challenge of high
communication overheads, typically arising from the transmission of large-scale
models. Existing adaptive quantization methods, designed to mitigate these
overheads, operate under the impractical assumption of uniform device
participation in every training round. Additionally, these methods are limited
in their adaptability due to the necessity of manual quantization level
selection and often overlook biases inherent in local devices' data, thereby
affecting the robustness of the global model. In response, this paper
introduces AQUILA (adaptive quantization in device selection strategy), a novel
adaptive framework devised to effectively handle these issues, enhancing the
efficiency and robustness of FL. AQUILA integrates a sophisticated device
selection method that prioritizes the quality and usefulness of device updates.
Utilizing the exact global model stored by devices, it enables a more precise
device selection criterion, reduces model deviation, and limits the need for
hyperparameter adjustments. Furthermore, AQUILA presents an innovative
quantization criterion, optimized to improve communication efficiency while
assuring model convergence. Our experiments demonstrate that AQUILA
significantly decreases communication costs compared to existing methods, while
maintaining comparable model performance across diverse non-homogeneous FL
settings, such as Non-IID data and heterogeneous model architectures.
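The abstract names two ingredients, a deviation-based device selection rule and an adaptive quantization criterion, without reproducing the exact formulas. The sketch below is only a minimal illustration in that spirit, not the authors' actual criteria: it selects the devices whose updates deviate most from the stored global model and compresses the chosen updates with an unbiased QSGD-style stochastic quantizer at an assumed fixed level.

```python
import numpy as np

def stochastic_quantize(v, num_levels, rng):
    """Unbiased QSGD-style stochastic uniform quantization of a vector."""
    norm = np.linalg.norm(v)
    if norm == 0:
        return np.zeros_like(v)
    scaled = np.abs(v) * num_levels / norm       # in [0, num_levels]
    lower = np.floor(scaled)
    prob = scaled - lower                        # round up with this probability
    levels = lower + (rng.random(v.shape) < prob)
    return np.sign(v) * levels * norm / num_levels

def aquila_like_round(global_model, local_models, frac=0.5, num_levels=16, seed=0):
    """One illustrative round: keep the devices whose updates deviate most
    from the stored global model, quantize their updates, then average."""
    rng = np.random.default_rng(seed)
    updates = [m - global_model for m in local_models]
    scores = [np.linalg.norm(u) for u in updates]    # assumed selection score
    k = max(1, int(frac * len(updates)))
    chosen = np.argsort(scores)[-k:]
    quantized = [stochastic_quantize(updates[i], num_levels, rng) for i in chosen]
    return global_model + np.mean(quantized, axis=0)
```

In AQUILA itself, both the selection criterion and the quantization level are derived from a convergence analysis rather than fixed by hand as above.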
Related papers
- Client Contribution Normalization for Enhanced Federated Learning [4.726250115737579]
Mobile devices, including smartphones and laptops, generate decentralized and heterogeneous data.
Federated Learning (FL) offers a promising alternative by enabling collaborative training of a global model across decentralized devices without data sharing.
This paper focuses on data-dependent heterogeneity in FL and proposes a novel approach leveraging mean latent representations extracted from locally trained models.
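The summary only says that mean latent representations drive the normalization. One plausible reading, sketched below purely for illustration (the cosine-similarity weighting is an assumption, not the paper's rule), weights each client's update by how closely its mean latent representation agrees with the population mean:

```python
import numpy as np

def normalized_aggregate(updates, client_latents):
    """Hypothetical aggregation: weight each client's model update by the
    cosine similarity between its mean latent representation and the mean
    representation across all clients."""
    global_latent = np.mean(client_latents, axis=0)
    def cos(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
    sims = np.array([max(cos(z, global_latent), 0.0) for z in client_latents])
    weights = sims / (sims.sum() + 1e-12)
    return sum(w * u for w, u in zip(weights, updates))
```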
arXiv Detail & Related papers (2024-11-10T04:03:09Z)
- Prioritizing Modalities: Flexible Importance Scheduling in Federated Multimodal Learning [5.421492821020181]
Federated Learning (FL) is a distributed machine learning approach that enables devices to collaboratively train models without sharing their local data.
Applying FL to real-world data presents challenges, particularly as most existing FL research focuses on unimodal data.
We propose FlexMod, a novel approach to enhance computational efficiency in MFL by adaptively allocating training resources for each modality encoder.
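As a rough picture of what allocating training resources per modality encoder could look like, the helper below splits a per-round epoch budget in proportion to an importance score; both the score (e.g., recent validation-loss drop per modality) and the proportional rule are assumptions, not FlexMod's actual scheduler:

```python
def allocate_modality_budget(importance, total_epochs):
    """Split a per-round epoch budget across modality encoders in
    proportion to an assumed importance score."""
    total = sum(importance.values())
    return {mod: max(1, round(total_epochs * score / total))
            for mod, score in importance.items()}

# allocate_modality_budget({"audio": 0.2, "video": 0.5, "text": 0.3}, 10)
# -> {"audio": 2, "video": 5, "text": 3}
```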
arXiv Detail & Related papers (2024-08-13T01:14:27Z)
- Stragglers-Aware Low-Latency Synchronous Federated Learning via Layer-Wise Model Updates [71.81037644563217]
Synchronous federated learning (FL) is a popular paradigm for collaborative edge learning.
As some of the devices may have limited computational resources and varying availability, FL latency is highly sensitive to stragglers.
We propose straggler-aware layer-wise federated learning (SALF) that leverages the optimization procedure of NNs via backpropagation to update the global model in a layer-wise fashion.
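The trick is that backpropagation produces gradients from the output layer backwards, so a straggler that is cut off early still holds valid gradients for the deepest layers. A minimal layer-wise aggregation in that spirit (the fixed learning rate and dict-of-gradients interface are assumptions, not SALF's exact update):

```python
import numpy as np

def layerwise_aggregate(global_layers, client_layer_grads, lr=0.1):
    """Each layer is averaged over the clients that backpropagated that far;
    layers no client reached keep their current global values.
    client_layer_grads: one dict {layer_index: gradient_array} per client."""
    new_layers = []
    for idx, layer in enumerate(global_layers):
        grads = [g[idx] for g in client_layer_grads if idx in g]
        new_layers.append(layer - lr * np.mean(grads, axis=0) if grads else layer)
    return new_layers
```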
arXiv Detail & Related papers (2024-03-27T09:14:36Z)
- Efficient Language Model Architectures for Differentially Private Federated Learning [21.280600854272716]
Cross-device federated learning (FL) is a technique that trains a model on data distributed across typically millions of edge devices without data leaving the devices.
In centralized training of language models, adaptive optimizers are preferred as they offer improved stability and performance.
We propose a scale-invariant Coupled Input Forget Gate (SI CIFG) recurrent network by modifying the sigmoid and tanh activations in the recurrent cell.
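For context, a CIFG cell couples the input gate to the forget gate as i = 1 - f, which removes one gate's parameters relative to a standard LSTM. The sketch below is a plain CIFG step; the paper's scale-invariant variant further swaps the sigmoid and tanh for modified activations, which is not reproduced here:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def cifg_cell(x, h, c, W, U, b):
    """One step of a standard CIFG recurrent cell. W: (3n, d), U: (3n, n),
    b: (3n,), stacking the forget, output, and candidate blocks."""
    n = h.shape[0]
    z = W @ x + U @ h + b
    f = sigmoid(z[:n])             # forget gate
    o = sigmoid(z[n:2 * n])        # output gate
    g = np.tanh(z[2 * n:])         # candidate cell state
    c_new = f * c + (1.0 - f) * g  # coupled input gate: i = 1 - f
    h_new = o * np.tanh(c_new)
    return h_new, c_new
```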
arXiv Detail & Related papers (2024-03-12T22:21:48Z)
- AdaptiveFL: Adaptive Heterogeneous Federated Learning for Resource-Constrained AIoT Systems [25.0282475069725]
Federated Learning (FL) is a promising way to enable collaborative learning among Artificial Intelligence of Things (AIoT) devices.
This paper introduces an effective FL approach named AdaptiveFL based on a novel fine-grained width-wise model pruning strategy.
We show that AdaptiveFL can achieve up to 16.83% inference improvements for both IID and non-IID scenarios.
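A toy version of width-wise pruning for a plain MLP, in the style of slimmable networks: each device keeps only a leading fraction of units per hidden layer. AdaptiveFL's fine-grained strategy is more elaborate than this slicing rule:

```python
import numpy as np

def width_prune(weights, ratio):
    """Extract a width-scaled submodel from a list of (out, in) weight
    matrices by keeping the first `ratio` fraction of hidden units; the
    input and output dimensions stay fixed."""
    pruned, prev_keep = [], weights[0].shape[1]
    for li, W in enumerate(weights):
        last = li == len(weights) - 1
        keep = W.shape[0] if last else max(1, int(W.shape[0] * ratio))
        pruned.append(W[:keep, :prev_keep].copy())
        prev_keep = keep
    return pruned
```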
arXiv Detail & Related papers (2023-11-22T05:17:42Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome the resource constraints of heterogeneous edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
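The split itself is simple to picture: only the shared part is averaged at the server, while each device keeps its personalized parameters local. A minimal sketch assuming models are dicts mapping parameter names to arrays:

```python
import numpy as np

def aggregate_shared_part(client_models, shared_keys):
    """Average only the shared (global) parameters across clients and push
    the result back; personalized parameters are left untouched."""
    shared = {k: np.mean([m[k] for m in client_models], axis=0)
              for k in shared_keys}
    for m in client_models:
        for k in shared_keys:
            m[k] = shared[k].copy()
    return shared
```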
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
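In vertical FL, each party holds a different feature slice of the same samples, so aggregation sums partial results, and the fronthaul constraint means those uploads are quantized. The sketch below pairs that aggregation with a plain uniform quantizer as a stand-in for the paper's jointly optimized transceiver and quantization design:

```python
import numpy as np

def uniform_quantize(x, bits):
    """Uniform quantizer over the observed dynamic range (a simple stand-in
    for an optimized fronthaul quantizer)."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    step = (hi - lo) / levels if hi > lo else 1.0
    return lo + np.round((x - lo) / step) * step

def vertical_fl_forward(feature_parts, weight_parts, bits=8):
    """Each party computes partial predictions from its own feature slice,
    quantizes them for the uplink, and the server sums the contributions."""
    partial = [uniform_quantize(X @ w, bits)
               for X, w in zip(feature_parts, weight_parts)]
    return np.sum(partial, axis=0)
```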
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Faster Adaptive Federated Learning [84.38913517122619]
Federated learning has attracted increasing attention with the emergence of distributed data.
In this paper, we propose an efficient adaptive algorithm (i.e., FAFED) based on a momentum-based variance-reduction technique in cross-silo FL.
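A momentum-based variance-reduced update of the kind referenced here typically evaluates the same minibatch at both the current and previous iterates, as in STORM. The step below is that generic estimator rather than FAFED's full cross-silo algorithm; grad_fn(x, seed) is an assumed stochastic-gradient callback:

```python
import numpy as np

def vr_momentum_step(x, x_prev, d_prev, grad_fn, a=0.1, lr=0.01, rng=None):
    """One STORM-style step: d = g(x) + (1 - a) * (d_prev - g(x_prev)),
    with both gradients taken on the same sample (shared seed)."""
    seed = int(rng.integers(0, 2**31)) if rng is not None else 0
    d = grad_fn(x, seed) + (1 - a) * (d_prev - grad_fn(x_prev, seed))
    return x - lr * d, d
```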
arXiv Detail & Related papers (2022-12-02T05:07:50Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
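To make the MDP view concrete, here is a toy tabular Q-learner over bitwidth actions. The paper itself uses model-based RL; env_step, an assumed callback mapping (state, bitwidth) to (next_state, reward), would encode the accuracy-versus-communication trade-off:

```python
import numpy as np

def bitwidth_q_learning(env_step, bitwidths=(2, 4, 8), rounds=100,
                        eps=0.1, lr=0.5, gamma=0.9, seed=0):
    """Epsilon-greedy tabular Q-learning over quantization bitwidths."""
    rng = np.random.default_rng(seed)
    Q, state = {}, 0
    for _ in range(rounds):
        q = Q.setdefault(state, np.zeros(len(bitwidths)))
        a = rng.integers(len(bitwidths)) if rng.random() < eps else int(q.argmax())
        nxt, reward = env_step(state, bitwidths[a])
        q_next = Q.setdefault(nxt, np.zeros(len(bitwidths)))
        q[a] += lr * (reward + gamma * q_next.max() - q[a])
        state = nxt
    return Q
```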
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- FedCAT: Towards Accurate Federated Learning via Device Concatenation [4.416919766772866]
Federated Learning (FL) enables all the involved devices to train a global model collaboratively without exposing their local data privacy.
For non-IID scenarios, the classification accuracy of FL models decreases drastically due to the weight divergence caused by data heterogeneity.
We introduce a novel FL approach named FedCAT that can achieve high model accuracy based on our proposed device selection strategy and device concatenation-based local training method.
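The concatenation idea can be sketched as passing one model sequentially through a chain of devices, so a single training pass effectively sees their combined data, before averaging across chains. The grouping policy and local_train interface below are assumptions; the paper's device selection strategy decides the actual chains:

```python
import numpy as np

def concat_round(global_w, device_chains, local_train):
    """Train the model device-to-device along each chain, then average the
    chain results. local_train(w, data) -> updated weight vector."""
    chain_models = []
    for chain in device_chains:
        w = global_w.copy()
        for data in chain:            # sequential pass over the chain's data
            w = local_train(w, data)
        chain_models.append(w)
    return np.mean(chain_models, axis=0)
```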
arXiv Detail & Related papers (2022-02-23T10:08:43Z)
- Optimization-driven Machine Learning for Intelligent Reflecting Surfaces Assisted Wireless Networks [82.33619654835348]
Intelligent reflecting surface (IRS) has been employed to reshape wireless channels by controlling individual scattering elements' phase shifts.
Due to the large number of scattering elements, passive beamforming is typically challenged by high computational complexity.
In this article, we focus on machine learning (ML) approaches for performance optimization in IRS-assisted wireless networks.
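For a single-antenna link with N passive elements, the effective channel is h_d + sum_n g_n * exp(j*theta_n) * h_r,n, and the ML approaches surveyed learn the phase configuration theta. As a naive baseline to compare against, a random search over phases (for this simple case the optimum is actually closed-form: align every reflected term's phase with the direct path):

```python
import numpy as np

def irs_random_search(h_d, h_r, g, trials=1000, seed=0):
    """Random search over IRS phase shifts for a single-antenna link,
    maximizing the effective gain |h_d + sum(g * e^{j theta} * h_r)|^2."""
    rng = np.random.default_rng(seed)
    best_gain, best_theta = -np.inf, None
    for _ in range(trials):
        theta = rng.uniform(0.0, 2 * np.pi, size=len(h_r))
        gain = abs(h_d + np.sum(g * np.exp(1j * theta) * h_r)) ** 2
        if gain > best_gain:
            best_gain, best_theta = gain, theta
    return best_theta, best_gain
```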
arXiv Detail & Related papers (2020-08-29T08:39:43Z)