FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks
- URL: http://arxiv.org/abs/2404.14811v1
- Date: Tue, 23 Apr 2024 07:48:17 GMT
- Title: FLARE: A New Federated Learning Framework with Adjustable Learning Rates over Resource-Constrained Wireless Networks
- Authors: Bingnan Xiao, Jingjing Zhang, Wei Ni, Xin Wang
- Abstract summary: Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions.
This paper presents a new Federated Learning with Adjusted leaRning ratE (FLARE) framework.
Experiments demonstrate that FLARE consistently outperforms the baselines.
- Score: 20.048146776405005
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Wireless federated learning (WFL) suffers from heterogeneity prevailing in the data distributions, computing powers, and channel conditions of participating devices. This paper presents a new Federated Learning with Adjusted leaRning ratE (FLARE) framework to mitigate the impact of the heterogeneity. The key idea is to allow the participating devices to adjust their individual learning rates and local training iterations, adapting to their instantaneous computing powers. The convergence upper bound of FLARE is established rigorously under a general setting with non-convex models in the presence of non-i.i.d. datasets and imbalanced computing powers. By minimizing the upper bound, we further optimize the scheduling of FLARE to exploit the channel heterogeneity. A nested problem structure is revealed to facilitate iteratively allocating the bandwidth with binary search and selecting devices with a new greedy method. A linear problem structure is also identified and a low-complexity linear programming scheduling policy is designed when training models have large Lipschitz constants. Experiments demonstrate that FLARE consistently outperforms the baselines in test accuracy, and converges much faster with the proposed scheduling policy.
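The abstract does not spell out the update rule, but the key mechanism (each device scaling its learning rate and local iteration count to its instantaneous computing power before a FedAvg-style aggregation) can be sketched as below. All function names, the gradient placeholder, and the inverse-scaling heuristic between iterations and learning rate are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def stochastic_gradient(w, data):
    # Placeholder local gradient: least squares on the device's batch.
    X, y = data
    return 2.0 * X.T @ (X @ w - y) / len(y)

def local_update(weights, data, base_lr, compute_budget, base_iters=10):
    """One device's local pass with FLARE-style adjustments (illustrative).

    A slower device (compute_budget < 1) runs fewer local iterations and
    compensates with a larger learning rate; the inverse-scaling rule here
    is an assumed heuristic, not the paper's derived schedule.
    """
    iters = max(1, int(base_iters * compute_budget))
    lr = base_lr * (base_iters / iters)
    w = weights.copy()
    for _ in range(iters):
        w -= lr * stochastic_gradient(w, data)
    return w

def server_round(global_w, devices):
    """FedAvg-style aggregation over the scheduled devices."""
    locals_ = [local_update(global_w, d["data"], d["lr"], d["budget"])
               for d in devices]
    return np.mean(locals_, axis=0)
```

Minimizing the convergence upper bound then decides which devices enter `server_round` and how bandwidth is split among them, which the paper handles with binary search plus a greedy selection method.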
Related papers
- Over-the-Air Federated Learning and Optimization [52.5188988624998]
We focus on federated learning (FL) via over-the-air computation (AirComp).
We describe the convergence of AirComp-based FedAvg (AirFedAvg) algorithms under both convex and non-convex settings.
For different types of local updates that can be transmitted by edge devices (i.e., model, gradient, model difference), we reveal that transmitting them in AirFedAvg may cause an aggregation error.
In addition, we consider more practical signal processing schemes to improve the communication efficiency and extend the convergence analysis to different forms of model aggregation error caused by these signal processing schemes.
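As a toy illustration of where the aggregation error comes from, the sketch below superposes local updates through a fading channel with receiver noise. The unit-mean fading model, noise level, and equal power scaling are simplifying assumptions, not the signal processing schemes analyzed in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def aircomp_aggregate(updates, noise_std=0.05):
    """Toy over-the-air aggregation: updates superpose in the channel.

    Assumed channel model: unit-mean real fading per device plus additive
    Gaussian receiver noise. Real AirComp systems use power control and
    precoding precisely to suppress the error this introduces.
    """
    k = len(updates)
    gains = rng.normal(1.0, 0.1, size=k)              # per-device fading
    rx = sum(h * u for h, u in zip(gains, updates))   # analog superposition
    rx += rng.normal(0.0, noise_std, size=updates[0].shape)
    return rx / k                                     # noisy estimate of the mean

updates = [rng.normal(size=8) for _ in range(5)]
err = np.linalg.norm(aircomp_aggregate(updates) - np.mean(updates, axis=0))
print(f"aggregation error: {err:.4f}")
```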
arXiv Detail & Related papers (2023-10-16T05:49:28Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome the resource and data heterogeneity challenges of edge devices.
This framework splits the learning model into a global part with model pruning shared with all devices to learn data representations and a personalized part to be fine-tuned for a specific device.
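A minimal sketch of that split, with hypothetical dimensions and method names: only the global part travels to the server, while the personalized part stays on the device for local fine-tuning.

```python
import numpy as np

class SplitModel:
    """Illustrative split: a global part shared via the server and a
    personalized part that never leaves the device."""

    def __init__(self, dim_global=64, dim_personal=16):
        self.global_w = np.zeros(dim_global)       # learns shared representations
        self.personal_w = np.zeros(dim_personal)   # fine-tuned per device

    def upload(self):
        # Only the (possibly pruned) global part is communicated.
        return self.global_w

    def download(self, new_global_w):
        # Server broadcast overwrites the shared part; personal part untouched.
        self.global_w = new_global_w.copy()
```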
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Analysis and Optimization of Wireless Federated Learning with Data Heterogeneity [72.85248553787538]
This paper focuses on performance analysis and optimization for wireless FL, considering data heterogeneity, combined with wireless resource allocation.
We formulate a loss function minimization problem under constraints on long-term energy consumption and latency, and jointly optimize client scheduling, resource allocation, and the number of local training epochs (collectively, CRE).
Experiments on real-world datasets demonstrate that the proposed algorithm outperforms other benchmarks in terms of learning accuracy and energy consumption.
arXiv Detail & Related papers (2023-08-04T04:18:01Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for hierarchical federated learning (HFL) in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to that of HFL without pruning while reducing communication cost by about 50 percent.
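The pruning criterion is not given in this summary; a common choice, shown below as an assumption, is magnitude pruning, which directly yields the communication saving: with keep_ratio=0.5, roughly half the parameters are uploaded.

```python
import numpy as np

def prune_for_upload(weights, keep_ratio=0.5):
    """Magnitude pruning before upload (assumed criterion).

    With keep_ratio=0.5 only half the entries (plus their indices) are
    transmitted, which conveys the intuition behind the roughly 50 percent
    communication saving; the paper's exact pruning rule may differ.
    """
    k = max(1, int(keep_ratio * weights.size))
    idx = np.argpartition(np.abs(weights), -k)[-k:]   # k largest magnitudes
    return idx, weights[idx]                          # sparse payload

w = np.random.default_rng(0).standard_normal(1000)
idx, vals = prune_for_upload(w)
print(f"upload {vals.size} of {w.size} parameters")
```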
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- Communication-Efficient Device Scheduling for Federated Learning Using Stochastic Optimization [26.559267845906746]
Federated learning (FL) is a useful tool in distributed machine learning that utilizes users' local datasets in a privacy-preserving manner.
In this paper, we provide a novel convergence analysis that yields a tractable convergence bound for communication-efficient FL with device scheduling.
We also develop a new selection and power allocation algorithm that minimizes a function of the convergence bound and the average communication time under a power constraint.
arXiv Detail & Related papers (2022-01-19T23:25:24Z)
- Unit-Modulus Wireless Federated Learning Via Penalty Alternating Minimization [64.76619508293966]
Wireless federated learning (FL) is an emerging machine learning paradigm that trains a global parametric model from distributed datasets via wireless communications.
This paper proposes a unit-modulus wireless FL framework, in which devices upload local model parameters and the server computes global model parameters via wireless communications.
arXiv Detail & Related papers (2021-08-31T08:19:54Z)
- Fast Federated Learning in the Presence of Arbitrary Device Unavailability [26.368873771739715]
Federated Learning (FL) coordinates heterogeneous devices to collaboratively train a shared model while preserving user privacy.
One challenge arises when devices drop out of the training process beyond the control of the central server.
We propose Memory-augmented Impatient Federated Averaging (MIFA) to solve this problem.
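A minimal sketch of the memory-aided averaging idea as we read it: the server caches each device's latest update and reuses the cache for devices unavailable in the current round, so training never stalls on dropped devices. The class layout and the plain averaging step are illustrative simplifications, not MIFA's exact update.

```python
import numpy as np

class MemoryAugmentedServer:
    """Sketch of memory-aided averaging in the spirit of MIFA.

    The server caches each device's most recent update and reuses the
    cached update for devices unavailable this round, so no round waits
    for dropped devices. Learning-rate handling and variance-reduction
    details are simplified away.
    """

    def __init__(self, num_devices, dim):
        self.memory = np.zeros((num_devices, dim))  # last update per device

    def round(self, global_w, fresh_updates):
        # fresh_updates: {device_id: update_vector} for available devices
        for dev, upd in fresh_updates.items():
            self.memory[dev] = upd
        # Average over ALL devices, cached updates standing in for absentees.
        return global_w + self.memory.mean(axis=0)
```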
arXiv Detail & Related papers (2021-06-08T07:46:31Z)
- Fast-Convergent Federated Learning [82.32029953209542]
Federated learning is a promising solution for distributing machine learning tasks through modern networks of mobile devices.
We propose a fast-convergent federated learning algorithm, called FOLB, which performs intelligent sampling of devices in each round of model training.
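FOLB's sampling criterion is not detailed in this summary; the sketch below substitutes a simple gradient-norm-based importance sampler to convey the idea of preferring devices whose updates are expected to contribute more. The proxy criterion and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_devices(grad_norms, num_selected):
    """Hypothetical importance sampler standing in for FOLB's criterion.

    Devices whose local updates look more informative (here, proxied by
    gradient norm) are drawn with higher probability; FOLB's actual
    selection uses gradient information to estimate each device's
    expected contribution.
    """
    probs = np.asarray(grad_norms, dtype=float)
    probs /= probs.sum()
    return rng.choice(len(grad_norms), size=num_selected, replace=False, p=probs)

print(sample_devices([0.1, 2.0, 0.5, 1.2], num_selected=2))
```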
arXiv Detail & Related papers (2020-07-26T14:37:51Z)
- Joint Device Scheduling and Resource Allocation for Latency Constrained Wireless Federated Learning [26.813145949399427]
In federated learning (FL), devices upload their local model updates via wireless channels.
We propose a joint device scheduling and resource allocation policy to maximize the model accuracy.
Experiments show that the proposed policy outperforms state-of-the-art scheduling policies.
arXiv Detail & Related papers (2020-07-14T16:46:47Z)