Federated Dropout -- A Simple Approach for Enabling Federated Learning
on Resource Constrained Devices
- URL: http://arxiv.org/abs/2109.15258v1
- Date: Thu, 30 Sep 2021 16:52:13 GMT
- Title: Federated Dropout -- A Simple Approach for Enabling Federated Learning
on Resource Constrained Devices
- Authors: Dingzhu Wen, Ki-Jun Jeon, and Kaibin Huang
- Abstract summary: Federated learning (FL) is a popular framework for training an AI model using distributed mobile data in a wireless network.
One main challenge confronting practical FL is that resource-constrained devices struggle with the computation-intensive task of updating a deep neural network model.
To tackle the challenge, in this paper, a federated dropout (FedDrop) scheme is proposed building on the classic dropout scheme for random model pruning.
- Score: 40.69663094185572
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) is a popular framework for training an AI model using
distributed mobile data in a wireless network. It features data parallelism by
distributing the learning task to multiple edge devices while attempting to
preserve their local-data privacy. One main challenge confronting practical FL
is that resource-constrained devices struggle with the computation-intensive
task of updating a deep neural network model. To tackle the challenge, in
this paper, a federated dropout (FedDrop) scheme is proposed building on the
classic dropout scheme for random model pruning. Specifically, in each
iteration of the FL algorithm, several subnets are independently generated from
the global model at the server using dropout but with heterogeneous dropout
rates (i.e., parameter-pruning probabilities), each of which is adapted to the
state of an assigned channel. The subnets are downloaded to associated devices
for updating. Thereby, FedDrop reduces both the communication overhead and
devices' computation loads compared with conventional FL, while
outperforming the latter in the case of overfitting and also outperforming the FL
scheme with uniform dropout (i.e., identical subnets).
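To make the round structure in the abstract concrete, the following is a minimal sketch (not the paper's implementation) of a FedDrop-style iteration in Python/NumPy: the server draws one subnet per device using a device-specific dropout rate, each device updates only its subnet, and the server averages every retained parameter over the devices that updated it. The helper names (make_subnet, aggregate) and the dropout rates are illustrative assumptions; for simplicity the sketch prunes only the output units of each layer and omits the matching input-side pruning and the channel-state adaptation described in the paper.

    import numpy as np

    def make_subnet(global_weights, dropout_rate, rng):
        # Randomly drop a fraction of output units per layer (classic dropout-style pruning).
        masks, subnet = {}, {}
        for name, W in global_weights.items():
            keep = rng.random(W.shape[1]) >= dropout_rate  # keep each unit with prob. 1 - dropout_rate
            masks[name] = keep
            subnet[name] = W[:, keep]                      # smaller matrix -> less compute and traffic
        return subnet, masks

    def aggregate(global_weights, device_updates):
        # Average each retained column over the devices that actually updated it.
        new_weights = {name: W.copy() for name, W in global_weights.items()}
        for name, W in new_weights.items():
            acc = np.zeros_like(W)
            cnt = np.zeros(W.shape[1])
            for subnet, masks in device_updates:
                acc[:, masks[name]] += subnet[name]
                cnt[masks[name]] += 1
            touched = cnt > 0
            W[:, touched] = acc[:, touched] / cnt[touched]
        return new_weights

    # One illustrative round with three devices and made-up dropout rates.
    rng = np.random.default_rng(0)
    global_weights = {"layer1": rng.normal(size=(16, 32)), "layer2": rng.normal(size=(32, 8))}
    dropout_rates = [0.2, 0.5, 0.7]  # placeholder for rates adapted to each device's channel state
    updates = []
    for rate in dropout_rates:
        subnet, masks = make_subnet(global_weights, rate, rng)
        local = {k: v + 0.01 for k, v in subnet.items()}  # stand-in for the device's local training update
        updates.append((local, masks))
    global_weights = aggregate(global_weights, updates)

Under this scheme, a device on a weak channel would be assigned a larger dropout rate and hence a smaller subnet, which is what lowers its download, computation, and upload costs relative to uniform dropout with identical subnets.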
Related papers
- FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks [20.048146776405005]
Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents FLARE, a new federated learning framework with adjustable learning rates.
Experiments show that FLARE consistently outperforms the baselines.
arXiv Detail & Related papers (2024-04-23T07:48:17Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- FLuID: Mitigating Stragglers in Federated Learning using Invariant Dropout [1.8262547855491458]
Federated Learning allows machine learning models to train locally on individual mobile devices, synchronizing model updates via a shared server.
Because device performance varies, straggler devices with lower performance often dictate the overall training time in FL.
We introduce Invariant Dropout, a method that extracts a sub-model based on the weight update threshold.
We develop an adaptive training framework, Federated Learning using Invariant Dropout (FLuID).
arXiv Detail & Related papers (2023-07-05T19:53:38Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Scheduling and Aggregation Design for Asynchronous Federated Learning over Wireless Networks [56.91063444859008]
Federated Learning (FL) is a collaborative machine learning framework that combines on-device training and server-based aggregation.
We propose an asynchronous FL design with periodic aggregation to tackle the straggler issue in FL systems.
We show that an "age-aware" aggregation weighting design can significantly improve the learning performance in an asynchronous FL setting.
arXiv Detail & Related papers (2022-12-14T17:33:01Z)
- Resource-Efficient and Delay-Aware Federated Learning Design under Edge Heterogeneity [10.702853653891902]
Federated learning (FL) has emerged as a popular methodology for distributing machine learning across wireless edge devices.
In this work, we consider optimizing the tradeoff between model performance and resource utilization in FL.
Our proposed StoFedDelAv incorporates a local-global model combiner into the FL computation step.
arXiv Detail & Related papers (2021-12-27T22:30:15Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a unit-modulus wireless FL framework, which simultaneously uploads local model parameters and computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
- Adaptive Dynamic Pruning for Non-IID Federated Learning [3.8666113275834335]
Federated Learning(FL) has emerged as a new paradigm of training machine learning models without sacrificing data security and privacy.
We present an adaptive pruning scheme for edge devices in an FL system, which applies dataset-aware dynamic pruning for inference acceleration on Non-IID datasets.
arXiv Detail & Related papers (2021-06-13T05:27:43Z)
- UVeQFed: Universal Vector Quantization for Federated Learning [179.06583469293386]
Federated learning (FL) is an emerging approach to train machine learning models without requiring the users to share their possibly private labeled data.
In FL, each user trains its copy of the learning model locally. The server then collects the individual updates and aggregates them into a global model.
We show that combining universal vector quantization methods with FL yields a decentralized training system in which the compression of the trained models induces only a minimum distortion.
arXiv Detail & Related papers (2020-06-05T07:10:22Z)
This list is automatically generated from the titles and abstracts of the papers in this site.