Federated Learning with Flexible Control
- URL: http://arxiv.org/abs/2212.08496v1
- Date: Fri, 16 Dec 2022 14:21:29 GMT
- Title: Federated Learning with Flexible Control
- Authors: Shiqiang Wang, Jake Perazzone, Mingyue Ji, Kevin S. Chan
- Abstract summary: Federated learning (FL) enables distributed model training from local data collected by users.
In distributed systems with constrained resources and potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is an important problem.
We propose FlexFL - an FL algorithm with multiple options that can be adjusted flexibly.
- Score: 30.65854375019346
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated learning (FL) enables distributed model training from local data
collected by users. In distributed systems with constrained resources and
potentially high dynamics, e.g., mobile edge networks, the efficiency of FL is
an important problem. Existing works have separately considered different
configurations to make FL more efficient, such as infrequent transmission of
model updates, client subsampling, and compression of update vectors. However,
an important open problem is how to jointly apply and tune these control knobs
in a single FL algorithm, to achieve the best performance by allowing a high
degree of freedom in control decisions. In this paper, we address this problem
and propose FlexFL - an FL algorithm with multiple options that can be adjusted
flexibly. Our FlexFL algorithm allows both arbitrary rates of local computation
at clients and arbitrary amounts of communication between clients and the
server, making both the computation and communication resource consumption
adjustable. We prove a convergence upper bound of this algorithm. Based on this
result, we further propose a stochastic optimization formulation and algorithm
to determine the control decisions that (approximately) minimize the
convergence bound, while conforming to constraints related to resource
consumption. The advantage of our approach is also verified using experiments.
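To make the role of these control knobs concrete, here is a minimal sketch (not the paper's actual FlexFL algorithm or notation): one federated averaging round in which the client sampling fraction, the amount of local computation, and a top-k compression ratio are exposed as tunable parameters. All function names and the toy least-squares model below are illustrative assumptions.

```python
import numpy as np

def topk_compress(vec, ratio):
    """Keep only the largest-magnitude entries of an update (illustrative sparsification)."""
    if ratio >= 1.0:
        return vec
    k = max(1, int(ratio * vec.size))
    out = np.zeros_like(vec)
    idx = np.argpartition(np.abs(vec), -k)[-k:]
    out[idx] = vec[idx]
    return out

def local_update(w, data, local_steps, lr=0.1):
    """A few SGD steps on a least-squares objective, standing in for real local training."""
    X, y = data
    for _ in range(local_steps):
        grad = X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

def flexible_fl_round(w_global, clients, sample_frac, local_steps, compress_ratio, rng):
    """One FL round with three adjustable knobs: client sampling rate,
    amount of local computation, and update compression ratio."""
    m = max(1, int(sample_frac * len(clients)))
    chosen = rng.choice(len(clients), size=m, replace=False)
    deltas = [topk_compress(local_update(w_global.copy(), clients[c], local_steps) - w_global,
                            compress_ratio)
              for c in chosen]
    return w_global + np.mean(deltas, axis=0)

# Toy run on synthetic linear-regression clients; in practice the knob values would be
# chosen per round by an optimization routine rather than fixed by hand.
rng = np.random.default_rng(0)
w_true = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
clients = []
for _ in range(10):
    X = rng.normal(size=(20, w_true.size))
    clients.append((X, X @ w_true + 0.01 * rng.normal(size=20)))
w = np.zeros(w_true.size)
for _ in range(50):
    w = flexible_fl_round(w, clients, sample_frac=0.3, local_steps=2, compress_ratio=0.5, rng=rng)
```

In the paper, these knob values are not fixed by hand but are chosen by a stochastic optimization procedure that approximately minimizes the derived convergence bound subject to resource-consumption constraints.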
Related papers
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks [21.76836812021954]
This paper jointly considers learner-referral-aided federated client selection (LRef-FedCS), communication resource scheduling, and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach can achieve a good balance between high global accuracy and reduced cost.
arXiv Detail & Related papers (2023-07-19T13:33:43Z)
- Adaptive Federated Pruning in Hierarchical Wireless Networks [69.6417645730093]
Federated Learning (FL) is a privacy-preserving distributed learning framework where a server aggregates models updated by multiple devices without accessing their private datasets.
In this paper, we introduce model pruning for HFL in wireless networks to reduce the neural network scale.
We show that our proposed HFL with model pruning achieves learning accuracy similar to HFL without model pruning while reducing communication cost by about 50 percent.
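As a rough illustration of how pruning before transmission can cut communication (this is a generic magnitude-pruning sketch, not the scheme from that paper), the snippet below zeroes all but the largest-magnitude fraction of the weights a device would upload.

```python
import numpy as np

def magnitude_prune(weights: np.ndarray, keep_frac: float) -> np.ndarray:
    """Zero all but the largest-magnitude fraction of weights (unstructured pruning)."""
    k = max(1, int(keep_frac * weights.size))
    threshold = np.partition(np.abs(weights).ravel(), -k)[-k]
    return np.where(np.abs(weights) >= threshold, weights, 0.0)

# Keeping ~50% of the entries halves the number of values a device has to upload;
# the realized communication saving depends on how the sparse update is encoded.
w = np.random.default_rng(1).normal(size=(4, 8))
w_pruned = magnitude_prune(w, keep_frac=0.5)
```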
arXiv Detail & Related papers (2023-05-15T22:04:49Z)
- User-Centric Federated Learning: Trading off Wireless Resources for Personalization [18.38078866145659]
In Federated Learning (FL) systems, statistical heterogeneity increases the algorithm convergence time and reduces the generalization performance.
To tackle the above problems without violating the privacy constraints that FL imposes, personalized FL methods have to couple statistically similar clients without directly accessing their data.
In this work, we design user-centric aggregation rules that are based on readily available gradient information and are capable of producing personalized models for each FL client.
Our algorithm outperforms popular personalized FL baselines in terms of average accuracy, worst node performance, and training communication overhead.
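One way to picture "aggregation rules based on readily available gradient information" is mixing weights derived from gradient similarity between clients; the sketch below uses an assumed cosine-similarity rule and is illustrative rather than the rule proposed in that paper.

```python
import numpy as np

def personalized_aggregate(grads, temperature=1.0):
    """Produce one personalized update per client by mixing all clients' gradients
    with weights derived from pairwise cosine similarity (illustrative rule)."""
    G = np.stack(grads)                                    # (num_clients, dim)
    Gn = G / (np.linalg.norm(G, axis=1, keepdims=True) + 1e-12)
    sim = Gn @ Gn.T                                        # pairwise cosine similarities
    weights = np.exp(sim / temperature)
    weights /= weights.sum(axis=1, keepdims=True)          # row i: mixing weights for client i
    return weights @ G                                     # row i: personalized update for client i

rng = np.random.default_rng(2)
updates = personalized_aggregate([rng.normal(size=16) for _ in range(5)])
```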
arXiv Detail & Related papers (2023-04-25T15:45:37Z)
- Adaptive Control of Client Selection and Gradient Compression for Efficient Federated Learning [28.185096784982544]
Federated learning (FL) allows multiple clients to cooperatively train models without disclosing their local data.
We propose a heterogeneous-aware FL framework, called FedCG, with adaptive client selection and gradient compression.
Experiments on both real-world prototypes and simulations show that FedCG can provide up to 5.3× speedup compared to other methods.
arXiv Detail & Related papers (2022-12-19T14:19:07Z)
- Performance Optimization for Variable Bitwidth Federated Learning in Wireless Networks [103.22651843174471]
This paper considers improving wireless communication and computation efficiency in federated learning (FL) via model quantization.
In the proposed bitwidth FL scheme, edge devices train and transmit quantized versions of their local FL model parameters to a coordinating server, which aggregates them into a quantized global model and synchronizes the devices.
We show that the FL training process can be described as a Markov decision process and propose a model-based reinforcement learning (RL) method to optimize action selection over iterations.
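The "quantized versions of their local FL model parameters" can be pictured with a simple uniform quantizer of adjustable bitwidth; the helper below is an assumed stand-in, not the paper's quantization scheme, and the bitwidth is the kind of per-round decision such an RL controller would select.

```python
import numpy as np

def uniform_quantize(params: np.ndarray, num_bits: int):
    """Map parameters to num_bits-wide integer codes plus the (scale, offset)
    needed for server-side reconstruction (illustrative uniform quantizer)."""
    levels = 2 ** num_bits - 1
    lo, hi = float(params.min()), float(params.max())
    scale = (hi - lo) / levels if hi > lo else 1.0
    codes = np.round((params - lo) / scale).astype(np.uint16)
    return codes, scale, lo

def dequantize(codes, scale, lo):
    return codes.astype(np.float64) * scale + lo

w = np.random.default_rng(3).normal(size=1000)
codes, scale, lo = uniform_quantize(w, num_bits=4)   # 4 bits per parameter instead of 32
w_hat = dequantize(codes, scale, lo)                 # reconstruction error shrinks as bitwidth grows
```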
arXiv Detail & Related papers (2022-09-21T08:52:51Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Boosting Federated Learning in Resource-Constrained Networks [1.7010199949406575]
Federated learning (FL) enables a set of client devices to collaboratively train a model without sharing raw data.
We propose GeL, the guess and learn algorithm.
We show that GeL can boost empirical convergence by up to 40% in resource-constrained networks.
arXiv Detail & Related papers (2021-10-21T21:23:04Z)
- Dynamic Attention-based Communication-Efficient Federated Learning [85.18941440826309]
Federated learning (FL) offers a solution to train a global machine learning model.
FL suffers performance degradation when client data distribution is non-IID.
We propose a new adaptive training algorithm, AdaFL, to combat this degradation.
arXiv Detail & Related papers (2021-08-12T14:18:05Z)
- Joint Optimization of Communications and Federated Learning Over the Air [32.14738452396869]
Federated learning (FL) is an attractive paradigm for making use of rich distributed data while protecting data privacy.
In this paper, we study joint optimization of communications and FL based on analog aggregation transmission in realistic wireless networks.
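Analog aggregation relies on simultaneously transmitted signals superposing in the wireless channel, so the server directly receives a noisy sum of the clients' updates; the toy model below (assumed unit channel gains and Gaussian receiver noise) only illustrates that effect, not the paper's joint optimization.

```python
import numpy as np

rng = np.random.default_rng(4)
client_updates = [rng.normal(size=8) for _ in range(10)]

# Clients transmit simultaneously, so the channel itself adds their signals; the server
# observes the superposition plus receiver noise and rescales it to get the averaged update.
received = np.sum(client_updates, axis=0) + 0.05 * rng.normal(size=8)
averaged_update = received / len(client_updates)
```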
arXiv Detail & Related papers (2021-04-08T03:38:31Z)
- Delay Minimization for Federated Learning Over Wireless Communication Networks [172.42768672943365]
The problem of delay minimization for federated learning (FL) over wireless communication networks is investigated.
A bisection search algorithm is proposed to obtain the optimal solution.
Simulation results show that the proposed algorithm can reduce delay by up to 27.3% compared to conventional FL methods.
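Bisection search applies whenever feasibility is monotone in the quantity being minimized; the snippet below shows the generic pattern with a hypothetical feasibility predicate, not the paper's exact delay model.

```python
def bisection_min(feasible, lo, hi, tol=1e-6):
    """Smallest t in [lo, hi] with feasible(t) True, assuming feasibility is
    monotone in t (generic pattern, not the paper's exact formulation)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if feasible(mid):
            hi = mid          # mid is feasible; try to shrink further
        else:
            lo = mid          # mid is infeasible; the answer lies above
    return hi

# Toy usage: the smallest budget t with t*t >= 2, i.e. t ~= sqrt(2).
t_star = bisection_min(lambda t: t * t >= 2.0, lo=0.0, hi=10.0)
```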
arXiv Detail & Related papers (2020-07-05T19:00:07Z)