FedAVO: Improving Communication Efficiency in Federated Learning with
African Vultures Optimizer
- URL: http://arxiv.org/abs/2305.01154v3
- Date: Sat, 9 Dec 2023 04:08:42 GMT
- Title: FedAVO: Improving Communication Efficiency in Federated Learning with
African Vultures Optimizer
- Authors: Md Zarif Hossain, Ahmed Imteaj
- Abstract summary: Federated Learning (FL) is a distributed machine learning technique.
In this paper, we introduce FedAVO, a novel FL algorithm that enhances communication effectiveness.
We show that FedAVO achieves significant improvement in terms of model accuracy and communication rounds.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL), a distributed machine learning technique, has
recently experienced tremendous growth in popularity due to its emphasis on
user data privacy. However, the distributed computations of FL can result in
constrained communication and drawn-out learning processes, necessitating
optimization of the client-server communication cost. The fraction of selected
clients and the number of local training passes are two hyperparameters that
have a significant impact on FL performance. Because training preferences differ
across applications, it can be difficult for FL practitioners to select such
hyperparameters manually. In this paper, we introduce FedAVO, a novel FL
algorithm that enhances communication efficiency by selecting the best
hyperparameters with the African Vulture Optimizer (AVO). Our research
demonstrates that the communication costs associated with FL operations can be
substantially reduced by adopting AVO for FL hyperparameter tuning. Through
extensive evaluations of FedAVO on benchmark datasets, we show that FedAVO
achieves significant improvements in model accuracy and the number of
communication rounds, particularly in realistic cases of Non-IID datasets. Our
extensive evaluation of the FedAVO algorithm identifies the optimal
hyperparameters that fit the benchmark datasets, increasing global model
accuracy by 6% in comparison to state-of-the-art FL algorithms (such as FedAvg,
FedProx, and FedPSO).
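The core idea of the abstract — a population-based metaheuristic searching over the client fraction and the number of local epochs — can be sketched as follows. This is a minimal, hedged illustration of population-based hyperparameter search, not the actual AVO update rules (which use vulture-inspired exploration and exploitation phases); the bounds, the toy cost function, and all function names here are illustrative assumptions.

```python
import random

def tune_fl_hyperparams(objective, pop_size=10, iterations=20, seed=0):
    """Population-based search over (client_fraction, local_epochs).

    A simplified stand-in for AVO: each candidate moves toward the
    current best solution with random perturbation, and a slot is
    replaced only when the perturbed candidate scores better.
    """
    rng = random.Random(seed)
    # Initialize the population within illustrative hyperparameter bounds.
    pop = [(rng.uniform(0.1, 1.0), rng.randint(1, 10)) for _ in range(pop_size)]
    best = min(pop, key=objective)
    for _ in range(iterations):
        new_pop = []
        for frac, epochs in pop:
            # Perturb around the current best, clipped to the bounds.
            new_frac = min(1.0, max(0.1, best[0] + rng.gauss(0, 0.1)))
            new_epochs = min(10, max(1, best[1] + rng.choice([-1, 0, 1])))
            cand = (new_frac, new_epochs)
            # Greedy replacement: keep whichever scores lower.
            new_pop.append(min(cand, (frac, epochs), key=objective))
        pop = new_pop
        best = min(pop + [best], key=objective)  # best never worsens
    return best

# Toy objective standing in for "communication cost at a target accuracy",
# minimized at client_fraction = 0.5 and local_epochs = 5.
def toy_cost(hp):
    frac, epochs = hp
    return (frac - 0.5) ** 2 + 0.01 * (epochs - 5) ** 2

best = tune_fl_hyperparams(toy_cost)
```

In a real FL pipeline, `objective` would run a few federated rounds with the candidate hyperparameters and return a cost such as rounds-to-target-accuracy, which is what makes the search expensive and the choice of optimizer matter.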
Related papers
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z) - An Aggregation-Free Federated Learning for Tackling Data Heterogeneity [50.44021981013037]
Federated Learning (FL) relies on the effectiveness of utilizing knowledge from distributed datasets.
Traditional FL methods adopt an aggregate-then-adapt framework, where clients update local models based on a global model aggregated by the server from the previous training round.
We introduce FedAF, a novel aggregation-free FL algorithm.
arXiv Detail & Related papers (2024-04-29T05:55:23Z) - Federated Learning of Large Language Models with Parameter-Efficient
Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally incurs the update of significant parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
arXiv Detail & Related papers (2023-10-23T16:37:59Z) - FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup
for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specific auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
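The per-client adaptive step size described above can be sketched in an AMSGrad-style form: each client keeps its own running statistics of squared gradients and scales a shared base rate by them. This is a hedged illustration of the general mechanism, not FedLALR's exact update; the class and parameter names are assumptions.

```python
import math

class ClientState:
    """Per-client AMSGrad-style state (illustrative sketch only)."""

    def __init__(self, beta2=0.999):
        self.beta2 = beta2
        self.v = 0.0      # exponential average of squared gradients
        self.v_hat = 0.0  # running maximum of v (the AMSGrad trick)

    def step_lr(self, grad, base_lr=0.1, eps=1e-8):
        """Return this client's learning rate for the current step."""
        self.v = self.beta2 * self.v + (1 - self.beta2) * grad ** 2
        self.v_hat = max(self.v_hat, self.v)
        # Clients with larger gradient magnitudes take smaller steps.
        return base_lr / (math.sqrt(self.v_hat) + eps)

# Two clients with different gradient scales end up with different rates.
calm_client, noisy_client = ClientState(), ClientState()
lr_calm = calm_client.step_lr(0.1)
lr_noisy = noisy_client.step_lr(10.0)
```

Because `v_hat` is non-decreasing, each client's rate can only shrink over time, which is what makes the convergence analysis tractable in AMSGrad-style methods.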
arXiv Detail & Related papers (2023-09-18T12:35:05Z) - Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation
and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z) - A Multi-agent Reinforcement Learning Approach for Efficient Client
Selection in Federated Learning [17.55163940659976]
Federated learning (FL) is a training technique that enables client devices to jointly learn a shared model.
We design an efficient FL framework that jointly optimizes model accuracy, processing latency, and communication efficiency.
Experiments show that FedMarl can significantly improve model accuracy with much lower processing latency and communication cost.
arXiv Detail & Related papers (2022-01-09T05:55:17Z) - Local Learning Matters: Rethinking Data Heterogeneity in Federated
Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z) - Evaluation of Hyperparameter-Optimization Approaches in an Industrial
Federated Learning System [0.2609784101826761]
Federated Learning (FL) decouples model training from the need for direct access to the data.
In this work, we investigated the impact of different hyperparameter optimization approaches in an FL system.
We implemented these approaches based on grid search and Bayesian optimization and evaluated the algorithms on the MNIST data set and on the Internet of Things (IoT) sensor based industrial data set.
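The grid-search baseline mentioned above can be sketched generically as an exhaustive sweep over hyperparameter combinations. The grid values and toy objective below are illustrative assumptions, not the paper's actual search space.

```python
from itertools import product

def grid_search(objective, grid):
    """Exhaustively evaluate every combination in `grid`
    (a dict mapping hyperparameter name -> list of candidate values)
    and return the best setting with its objective value."""
    best_hp, best_val = None, float("inf")
    for combo in product(*grid.values()):
        hp = dict(zip(grid.keys(), combo))
        val = objective(hp)
        if val < best_val:
            best_hp, best_val = hp, val
    return best_hp, best_val

# Illustrative objective: lower is better, minimized at (0.5, 5).
def cost(hp):
    return (hp["client_fraction"] - 0.5) ** 2 + 0.01 * (hp["local_epochs"] - 5) ** 2

grid = {"client_fraction": [0.1, 0.3, 0.5, 1.0], "local_epochs": [1, 5, 10]}
best_hp, best_val = grid_search(cost, grid)
```

Grid search is exhaustive and embarrassingly parallel but scales exponentially with the number of hyperparameters, which is why the paper also evaluates Bayesian optimization.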
arXiv Detail & Related papers (2021-10-15T17:01:40Z) - Accelerating Federated Learning with a Global Biased Optimiser [16.69005478209394]
Federated Learning (FL) is a recent development in the field of machine learning that collaboratively trains models without the training data leaving client devices.
We propose a novel, generalised approach for applying adaptive optimisation techniques to FL with the Federated Global Biased Optimiser (FedGBO) algorithm.
FedGBO accelerates FL by applying a set of global biased optimiser values during the local training phase of FL, which helps to reduce 'client-drift' from non-IID data.
arXiv Detail & Related papers (2021-08-20T12:08:44Z) - Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.