Automatic Tuning of Federated Learning Hyper-Parameters from System
Perspective
- URL: http://arxiv.org/abs/2110.03061v1
- Date: Wed, 6 Oct 2021 20:43:25 GMT
- Title: Automatic Tuning of Federated Learning Hyper-Parameters from System
Perspective
- Authors: Huanle Zhang and Mi Zhang and Xin Liu and Prasant Mohapatra and
Michael DeLucia
- Abstract summary: Federated learning (FL) is a distributed model training paradigm that preserves clients' data privacy.
We propose FedTuning, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements of FL training.
FedTuning is lightweight and flexible, achieving an average of 41% improvement for different training preferences on time, computation, and communication.
- Score: 15.108050457914516
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed model training paradigm that
preserves clients' data privacy. FL hyper-parameters significantly affect the
training overheads in terms of time, computation, and communication. However,
the current practice of manually selecting FL hyper-parameters puts a high
burden on FL practitioners since various applications prefer different training
preferences. In this paper, we propose FedTuning, an automatic FL
hyper-parameter tuning algorithm tailored to applications' diverse system
requirements of FL training. FedTuning is lightweight and flexible, achieving
an average of 41% improvement for different training preferences on time,
computation, and communication compared to fixed FL hyper-parameters. FedTuning
is available at https://github.com/dtczhl/FedTuning.
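As a concrete illustration of the paper's setting, the sketch below mimics a FedTuning-style search over two common FL hyper-parameters (clients per round M and local epochs E) against a weighted preference on time, computation, and communication. The cost model and the greedy neighbor search are assumptions for illustration, not the paper's actual algorithm.

```python
# Illustrative sketch: tune (M, E) toward an application's weighted
# preference over time, computation, and communication costs.
# The cost model below is a stand-in assumption, not measured values.
import random

def round_costs(m, e):
    """Hypothetical per-round costs; a real system would measure these."""
    time_cost = e * 10.0 / m + random.uniform(0, 1)  # more clients -> faster rounds
    comp_cost = m * e * 1.0                          # work grows with M and E
    comm_cost = m * 2.0                              # traffic grows with M
    return time_cost, comp_cost, comm_cost

def preference_score(costs, weights):
    """Lower weighted cost is better; weights encode the app's preference."""
    return sum(w * c for w, c in zip(weights, costs))

def fedtuning_sketch(weights=(0.6, 0.2, 0.2), rounds=50):
    m, e = 10, 5                                     # initial hyper-parameters
    best = preference_score(round_costs(m, e), weights)
    for _ in range(rounds):
        # Try neighboring settings each round and keep improvements.
        for dm, de in [(1, 0), (-1, 0), (0, 1), (0, -1)]:
            cand_m, cand_e = max(1, m + dm), max(1, e + de)
            score = preference_score(round_costs(cand_m, cand_e), weights)
            if score < best:
                best, m, e = score, cand_m, cand_e
    return m, e

print(fedtuning_sketch())
```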
Related papers
- Hyper-parameter Optimization for Federated Learning with Step-wise Adaptive Mechanism [0.48342038441006796]
Federated Learning (FL) is a decentralized learning approach that protects sensitive information by utilizing local model parameters rather than sharing clients' raw datasets.
This paper investigates the deployment and integration of two lightweight Hyper-parameter Optimization (HPO) tools, Ray Tune and Optuna, within the context of FL settings.
To this end, both local and global feedback mechanisms are integrated to limit the search space and expedite the HPO process.
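As a hedged sketch of how one of these tools plugs into an FL loop, the snippet below uses Optuna's standard study/trial API to search typical FL hyper-parameters; the `simulate_fl` function is a hypothetical stand-in for a real federated training run.

```python
# Minimal Optuna-driven HPO loop for FL hyper-parameters.
import math
import optuna

def simulate_fl(client_lr, local_epochs, clients_per_round):
    """Toy stand-in: returns a fake validation accuracy for a config."""
    return (1.0 - abs(math.log10(client_lr) + 2) * 0.1) \
        - abs(local_epochs - 3) * 0.02 + clients_per_round * 0.005

def objective(trial):
    client_lr = trial.suggest_float("client_lr", 1e-4, 1e-1, log=True)
    local_epochs = trial.suggest_int("local_epochs", 1, 8)
    clients_per_round = trial.suggest_int("clients_per_round", 2, 20)
    return simulate_fl(client_lr, local_epochs, clients_per_round)

study = optuna.create_study(direction="maximize")
study.optimize(objective, n_trials=30)
print(study.best_params)
```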
arXiv Detail & Related papers (2024-11-19T05:49:00Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring much less communication and computing resources compared to sparse baselines.
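A rough NumPy illustration of the idea as summarized above: dense weights stay local, per-neuron thresholds define the sparse structure, and only the small thresholds travel between server and clients. The threshold update rule here is an assumption for illustration, not SpaFL's actual optimization.

```python
# Clients hold dense weights; only per-neuron thresholds are exchanged.
import numpy as np

rng = np.random.default_rng(0)
n_clients, n_neurons, n_inputs = 4, 8, 16

client_weights = [rng.normal(size=(n_neurons, n_inputs)) for _ in range(n_clients)]
client_thresholds = [np.full(n_neurons, 0.5) for _ in range(n_clients)]

def sparse_view(w, thr):
    """Apply per-neuron thresholds: zero out weights below the threshold."""
    return w * (np.abs(w) >= thr[:, None])

for rnd in range(3):
    # Server aggregates only thresholds (communication-efficient).
    global_thr = np.mean(client_thresholds, axis=0)
    for k in range(n_clients):
        client_thresholds[k] = global_thr.copy()
        # Local "training": nudge thresholds (stand-in for a gradient step).
        client_thresholds[k] += 0.01 * rng.normal(size=n_neurons)
    sparsity = np.mean([np.mean(sparse_view(w, t) == 0)
                        for w, t in zip(client_weights, client_thresholds)])
    print(f"round {rnd}: sparsity = {sparsity:.2f}")
```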
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FMs), the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z)
- Federated Learning of Large Language Models with Parameter-Efficient Prompt Tuning and Adaptive Optimization [71.87335804334616]
Federated learning (FL) is a promising paradigm to enable collaborative model training with decentralized data.
The training process of Large Language Models (LLMs) generally requires updating a large number of parameters.
This paper proposes an efficient partial prompt tuning approach to improve performance and efficiency simultaneously.
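A minimal PyTorch sketch of the underlying prompt-tuning mechanism: the backbone is frozen, and only soft-prompt embeddings are trained (and, in FL, exchanged). Which prompt tokens the paper's partial variant updates is not stated in the summary, so the subset chosen below is an assumption.

```python
# Frozen backbone + trainable soft prompts; "partial" = a token subset.
import torch
import torch.nn as nn

d_model, n_prompt, trainable_prompt = 32, 8, 4   # train 4 of 8 prompt tokens

backbone = nn.TransformerEncoderLayer(d_model=d_model, nhead=4, batch_first=True)
for p in backbone.parameters():
    p.requires_grad = False                      # frozen LLM stand-in

prompt = nn.Parameter(torch.randn(n_prompt, d_model) * 0.02)

def forward(x_embed):
    # Prepend soft prompts to the token embeddings.
    batch = x_embed.shape[0]
    prompts = prompt.unsqueeze(0).expand(batch, -1, -1)
    return backbone(torch.cat([prompts, x_embed], dim=1))

opt = torch.optim.SGD([prompt], lr=0.1)
x = torch.randn(2, 10, d_model)                  # fake token embeddings
loss = forward(x).pow(2).mean()                  # placeholder loss
loss.backward()
prompt.grad[trainable_prompt:] = 0               # "partial": freeze the rest
opt.step()
# In FL, only prompt[:trainable_prompt] would be uploaded to the server.
print(prompt.shape)
```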
arXiv Detail & Related papers (2023-10-23T16:37:59Z)
- FedAVO: Improving Communication Efficiency in Federated Learning with African Vultures Optimizer [0.0]
Federated Learning (FL) is a distributed machine learning technique.
In this paper, we introduce FedAVO, a novel FL algorithm that enhances communication efficiency.
We show that FedAVO achieves significant improvement in terms of model accuracy and communication rounds.
arXiv Detail & Related papers (2023-05-02T02:04:19Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recent work has interpreted FL within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
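A first-order sketch of the MAML view of FL on toy quadratic client objectives; the Reptile-style meta-update below is a simplification assumed for illustration, not the paper's edge-network algorithm.

```python
# Each client adapts the global model with inner SGD steps; the server
# meta-updates toward the adapted models (first-order approximation).
import numpy as np

rng = np.random.default_rng(1)
client_optima = rng.normal(size=(5, 3))          # heterogeneous clients
w_global = np.zeros(3)
alpha, beta = 0.1, 0.5                           # inner / meta step sizes

def grad(w, opt):                                # gradient of ||w - opt||^2 / 2
    return w - opt

for rnd in range(100):
    adapted = []
    for opt_k in client_optima:
        w_inner = w_global - alpha * grad(w_global, opt_k)    # fast adaptation
        adapted.append(w_inner - alpha * grad(w_inner, opt_k))
    w_global += beta * (np.mean(adapted, axis=0) - w_global)  # meta-update

print(w_global, client_optima.mean(axis=0))
```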
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
- Federated Learning Hyper-Parameter Tuning from a System Perspective [23.516484538620745]
Federated learning (FL) is a distributed model training paradigm that preserves clients' data privacy.
Current practice of manually selecting FL hyper-parameters imposes a heavy burden on FL practitioners.
We propose FedTune, an automatic FL hyper-parameter tuning algorithm tailored to applications' diverse system requirements.
arXiv Detail & Related papers (2022-11-24T15:15:28Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
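A hedged sketch of the upload path described above: clients uniformly quantize local model parameters to b bits, and the server dequantizes and aggregates. The paper's RL-based bitwidth selection is replaced here by a fixed bitwidth for brevity.

```python
# Uniform b-bit quantization of model updates before upload.
import numpy as np

def quantize(w, bits):
    """Uniformly quantize a weight vector to 2**bits levels."""
    lo, hi = w.min(), w.max()
    levels = 2 ** bits - 1
    q = np.round((w - lo) / (hi - lo) * levels).astype(np.uint32)
    return q, lo, hi

def dequantize(q, lo, hi, bits):
    levels = 2 ** bits - 1
    return lo + q.astype(np.float64) / levels * (hi - lo)

rng = np.random.default_rng(0)
bits = 4
client_models = [rng.normal(size=1000) for _ in range(8)]

# Each client uploads (q, lo, hi): roughly bits/32 of full-precision traffic.
uploads = [quantize(w, bits) for w in client_models]
global_model = np.mean([dequantize(*u, bits) for u in uploads], axis=0)

err = np.linalg.norm(global_model - np.mean(client_models, axis=0))
print(f"aggregation error at {bits} bits: {err:.4f}")
```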
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
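The summary does not spell out FedReg's mechanism, so the sketch below uses a generic proximal penalty that discourages the local model from drifting from (and thus forgetting) the global model during local training; it illustrates the problem being addressed, not FedReg itself.

```python
# Local SGD on a least-squares loss plus a proximal anti-drift term.
import numpy as np

def local_train(w_global, data, lr=0.1, mu=0.5, steps=20):
    """Local training with a penalty pulling back toward the global model."""
    X, y = data
    w = w_global.copy()
    for _ in range(steps):
        grad_fit = X.T @ (X @ w - y) / len(y)     # task gradient
        grad_prox = mu * (w - w_global)           # anti-forgetting pull
        w -= lr * (grad_fit + grad_prox)
    return w

rng = np.random.default_rng(2)
X = rng.normal(size=(64, 5))
y = X @ rng.normal(size=5)
w0 = np.zeros(5)
print(np.round(local_train(w0, (X, y)), 3))
```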
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Fast Server Learning Rate Tuning for Coded Federated Dropout [3.9653673778225946]
Federated Dropout (FD) is a technique that improves the communication efficiency of an FL session.
We leverage coding theory to enhance FD by allowing a different sub-model to be used at each client.
For the EMNIST dataset, our mechanism achieves 99.6% of the final accuracy of the no-dropout case.
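A small sketch of the per-client sub-model idea: each client trains a different slice of a layer, and the server recombines the slices. The round-robin assignment below is a stand-in assumption for the paper's coding-theoretic construction.

```python
# Per-client sub-models: disjoint-ish unit slices, recombined at the server.
import numpy as np

rng = np.random.default_rng(3)
n_units, n_clients, keep = 12, 4, 6              # dropout keeps 6 of 12 units

global_layer = rng.normal(size=n_units)

# Round-robin masks so every unit is covered by some client.
masks = [np.array([(i + j * keep) % n_units for i in range(keep)])
         for j in range(n_clients)]

updates = np.zeros(n_units)
counts = np.zeros(n_units)
for idx in masks:
    sub = global_layer[idx].copy()
    sub -= 0.1 * rng.normal(size=keep)           # stand-in for local training
    updates[idx] += sub
    counts[idx] += 1

global_layer = updates / np.maximum(counts, 1)   # average per-unit updates
print(counts)                                    # every unit trained somewhere
```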
arXiv Detail & Related papers (2022-01-26T16:19:04Z)
- User Scheduling for Federated Learning Through Over-the-Air Computation [22.853678584121862]
A new machine learning technique termed federated learning (FL) aims to preserve data at the edge devices and to only exchange ML model parameters in the learning process.
FL not only reduces communication needs but also helps to protect local privacy.
Over-the-air computation (AirComp) is capable of computing while transmitting data, by allowing multiple devices to send data simultaneously using analog modulation.
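A toy simulation of AirComp as described above: devices transmit pre-scaled analog updates simultaneously, the wireless channel superimposes them, and the server reads off a noisy sum in one shot. Channel gains and the noise level are assumed values.

```python
# Over-the-air aggregation: the channel itself computes the sum.
import numpy as np

rng = np.random.default_rng(4)
n_devices, dim, noise_std = 10, 100, 0.05

updates = rng.normal(size=(n_devices, dim))      # local model updates
gains = rng.uniform(0.5, 1.0, size=n_devices)    # channel gains h_k

# Each device pre-scales by 1/h_k so the superposition yields the raw sum.
tx = updates / gains[:, None]
rx = (gains[:, None] * tx).sum(axis=0) + noise_std * rng.normal(size=dim)

aircomp_avg = rx / n_devices
true_avg = updates.mean(axis=0)
print("error:", np.linalg.norm(aircomp_avg - true_avg))
```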
arXiv Detail & Related papers (2021-08-05T23:58:15Z)