Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks
- URL: http://arxiv.org/abs/2307.09977v1
- Date: Wed, 19 Jul 2023 13:33:43 GMT
- Title: Learner Referral for Cost-Effective Federated Learning Over Hierarchical IoT Networks
- Authors: Yulan Gao, Ziqiang Ye, Yue Xiao, and Wei Xiang
- Abstract summary: This paper proposes learner referral aided federated client selection (LRef-FedCS), joint communications and computing resource scheduling, and local model accuracy optimization (LMAO) methods.
Our proposed LRef-FedCS approach achieves a good balance between pursuing high global accuracy and reducing cost.
- Score: 21.76836812021954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The paradigm of federated learning (FL) to address data privacy concerns by
locally training parameters on resource-constrained clients in a distributed
manner has garnered significant attention. Nonetheless, FL is not applicable
when not all clients within the coverage of the FL server are registered with
the FL network. To bridge this gap, this paper proposes joint learner referral
aided federated client selection (LRef-FedCS), along with communications and
computing resource scheduling, and local model accuracy optimization (LMAO)
methods. These methods are designed to minimize the cost incurred by the
worst-case participant and ensure the long-term fairness of FL in hierarchical
Internet of Things (HieIoT) networks. Utilizing the Lyapunov optimization
technique, we reformulate the original problem into a stepwise joint
optimization problem (JOP). Subsequently, to tackle the mixed-integer
non-convex JOP, we separately and iteratively address LRef-FedCS and LMAO
through the centralized method and self-adaptive global best harmony search
(SGHS) algorithm, respectively. To enhance scalability, we further propose a
distributed LRef-FedCS approach based on a matching game to replace the
centralized method described above. Numerical simulations and experimental
results on the MNIST/CIFAR-10 datasets demonstrate that our proposed LRef-FedCS
approach could achieve a good balance between pursuing high global accuracy and
reducing cost.
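The distributed LRef-FedCS step above is formulated as a matching game. As a rough, non-authoritative illustration of that idea, here is a minimal deferred-acceptance (Gale-Shapley-style) matching sketch, assuming unregistered learners propose to registered referrer clients according to preference lists built from hypothetical referral costs; the function, capacity rule, and toy data are all illustrative, not the paper's exact formulation.

```python
# Minimal deferred-acceptance matching between unregistered learners and
# registered referrer clients. Preference lists are assumed to come from
# per-pair referral costs (lower cost = more preferred); the actual
# LRef-FedCS utilities are defined in the paper, not here.

def deferred_acceptance(learner_prefs, referrer_prefs, capacity=1):
    """learner_prefs[l]  : list of referrers, most preferred first.
       referrer_prefs[r] : dict learner -> rank (lower rank = preferred).
       Returns a stable assignment {referrer: [learners]}."""
    matched = {r: [] for r in referrer_prefs}    # tentative matches per referrer
    next_choice = {l: 0 for l in learner_prefs}  # next referrer each learner proposes to
    free = list(learner_prefs)                   # learners still unmatched
    while free:
        l = free.pop()
        if next_choice[l] >= len(learner_prefs[l]):
            continue                             # l has exhausted its preference list
        r = learner_prefs[l][next_choice[l]]
        next_choice[l] += 1
        matched[r].append(l)
        if len(matched[r]) > capacity:           # referrer keeps only its best proposers
            matched[r].sort(key=lambda x: referrer_prefs[r][x])
            free.append(matched[r].pop())        # worst-ranked learner re-proposes later
    return matched

# Toy example: three learners competing for two referrers.
learners = {"l1": ["r1", "r2"], "l2": ["r1", "r2"], "l3": ["r2", "r1"]}
referrers = {"r1": {"l1": 0, "l2": 1, "l3": 2},
             "r2": {"l2": 0, "l3": 1, "l1": 2}}
print(deferred_acceptance(learners, referrers))  # {'r1': ['l1'], 'r2': ['l2']}
```

With capacity 1, each referrer tentatively keeps its best proposer and rejects the rest, so the loop terminates with a stable assignment; the paper's actual cost-based utilities would replace the toy preference lists.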
Related papers
- Online Client Scheduling and Resource Allocation for Efficient Federated Edge Learning [9.451084740123198]
Federated learning (FL) enables edge devices to collaboratively train a machine learning model without sharing their raw data.
However, deploying FL over mobile edge networks with constrained resources such as power and bandwidth suffers from high training latency and low model accuracy.
This paper investigates the optimal client scheduling and resource allocation for FL over mobile edge networks under resource constraints and uncertainty.
arXiv Detail & Related papers (2024-09-29T01:56:45Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Asymmetrically Decentralized Federated Learning [22.21977974314497]
Decentralized Federated Learning (DFL) has emerged, discarding the central server in favor of a peer-to-peer (P2P) communication framework.
This paper proposes the DFedSGPSM algorithm, which is based on asymmetric topologies and utilizes the Push-Sum protocol; a generic sketch of Push-Sum averaging is given below.
arXiv Detail & Related papers (2023-10-08T09:46:26Z)
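Push-Sum, referenced in the entry above, is the standard gossip primitive for averaging over directed (possibly asymmetric) topologies: each node pushes weighted shares of a value and a companion weight to its out-neighbors, and the per-node ratio converges to the network average under a column-stochastic mixing rule. A minimal averaging sketch, assuming a directed ring with self-loops; this is the generic primitive, not the DFedSGPSM algorithm itself.

```python
import numpy as np

# Minimal Push-Sum averaging on a directed 3-node ring: node i pushes equal
# shares of (x_i, w_i) to itself and its single out-neighbor. Column-
# stochastic mixing lets x_i / w_i converge to the average of the initial x.
n = 3
out_neighbors = {0: [0, 1], 1: [1, 2], 2: [2, 0]}  # self-loop + directed ring
x = np.array([1.0, 5.0, 9.0])                      # values to average (mean = 5)
w = np.ones(n)                                     # Push-Sum weights

for _ in range(100):
    new_x, new_w = np.zeros(n), np.zeros(n)
    for i in range(n):
        share = 1.0 / len(out_neighbors[i])        # equal split over out-edges
        for j in out_neighbors[i]:
            new_x[j] += share * x[i]
            new_w[j] += share * w[i]
    x, w = new_x, new_w

print(x / w)  # ~ [5., 5., 5.]
```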
- Vertical Federated Learning over Cloud-RAN: Convergence Analysis and System Optimization [82.12796238714589]
We propose a novel cloud radio access network (Cloud-RAN) based vertical FL system to enable fast and accurate model aggregation.
We characterize the convergence behavior of the vertical FL algorithm considering both uplink and downlink transmissions.
We establish a system optimization framework by joint transceiver and fronthaul quantization design, for which successive convex approximation and alternate convex search based system optimization algorithms are developed.
arXiv Detail & Related papers (2023-05-04T09:26:03Z)
- Automated Federated Learning in Mobile Edge Networks -- Fast Adaptation and Convergence [83.58839320635956]
Federated Learning (FL) can be used in mobile edge networks to train machine learning models in a distributed manner.
Recently, FL has been interpreted within a Model-Agnostic Meta-Learning (MAML) framework, which brings FL significant advantages in fast adaptation and convergence over heterogeneous datasets; a minimal MAML sketch follows this entry.
This paper addresses how much benefit MAML brings to FL and how to maximize such benefit over mobile edge networks.
arXiv Detail & Related papers (2023-03-23T02:42:10Z)
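In the MAML reading of FL mentioned above, each client's local update is the inner adaptation step and the server's aggregate is the outer meta-update. A minimal first-order MAML sketch on toy quadratic client losses, with made-up centers and step sizes; this illustrates the generic pattern, not the paper's algorithm.

```python
import numpy as np

# First-order MAML over "clients" with quadratic losses f_i(w) = 0.5*(w - c_i)^2.
# Inner step: each client adapts the global model locally; outer step: the
# server averages the gradients evaluated at the adapted points.
centers = np.array([-1.0, 0.5, 3.0])  # per-client optima (toy heterogeneity)
alpha, beta = 0.1, 0.05               # inner / outer learning rates
w = 0.0                               # global model (scalar for simplicity)

for _ in range(500):
    outer_grads = []
    for c in centers:
        g_inner = w - c                     # grad of f_i at w
        w_adapted = w - alpha * g_inner     # client's one-step adaptation
        outer_grads.append(w_adapted - c)   # grad of f_i at the adapted point (FOMAML)
    w -= beta * np.mean(outer_grads)        # server aggregates and meta-updates

print(w)  # a point from which one local step adapts well to each client
```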
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Disentangled Federated Learning for Tackling Attributes Skew via Invariant Aggregation and Diversity Transferring [104.19414150171472]
Attribute skew hinders current federated learning (FL) frameworks from maintaining consistent optimization directions among the clients.
We propose disentangled federated learning (DFL) to disentangle the domain-specific and cross-invariant attributes into two complementary branches.
Experiments verify that DFL facilitates FL with higher performance, better interpretability, and faster convergence rate, compared with SOTA FL methods.
arXiv Detail & Related papers (2022-06-14T13:12:12Z)
- Generalized Federated Learning via Sharpness Aware Minimization [22.294290071999736]
We propose a general, effective algorithm, FedSAM, based on a Sharpness Aware Minimization (SAM) local optimizer, and develop a momentum FL algorithm to bridge local and global models; a minimal SAM step sketch follows this entry.
Empirically, our proposed algorithms substantially outperform existing FL studies and significantly decrease the learning deviation.
arXiv Detail & Related papers (2022-06-06T13:54:41Z)
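SAM, as used in the entry above, perturbs the weights toward the locally worst-case direction within a small ball before taking the descent step, which biases training toward flat minima. A minimal single-worker SAM loop on a toy quadratic loss, with illustrative rho and learning rate; this is not the FedSAM/momentum variant from the paper.

```python
import numpy as np

def grad(w, target):
    # Gradient of the toy loss L(w) = 0.5 * ||w - target||^2.
    return w - target

# One SAM step: ascend to the worst-case point within an L2 ball of radius
# rho, then apply the gradient computed there to the original weights.
rho, lr = 0.05, 0.1
target = np.array([1.0, -2.0])
w = np.zeros(2)

for _ in range(200):
    g = grad(w, target)
    eps = rho * g / (np.linalg.norm(g) + 1e-12)  # ascent direction, scaled to rho
    g_sharp = grad(w + eps, target)              # gradient at the perturbed point
    w -= lr * g_sharp                            # descend using the SAM gradient

print(w)  # ~ target (SAM hovers within about lr*rho of the minimum)
```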
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices).
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- Communication-Efficient Hierarchical Federated Learning for IoT Heterogeneous Systems with Imbalanced Data [42.26599494940002]
Federated learning (FL) is a distributed learning methodology that allows multiple nodes to cooperatively train a deep learning model.
This paper studies the potential of hierarchical FL in IoT heterogeneous systems.
It proposes an optimized solution for user assignment and resource allocation on multiple edge nodes; a two-level aggregation sketch follows this entry.
arXiv Detail & Related papers (2021-07-14T08:32:39Z)
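Hierarchical FL, as in the entry above and the main paper's HieIoT setting, aggregates client models at edge nodes first and only then at the cloud, cutting wide-area communication. A minimal two-level weighted-averaging sketch with toy models and sample counts; the user-assignment and resource-allocation optimization is the papers' contribution and is not reproduced here.

```python
import numpy as np

# Two-level (client -> edge -> cloud) weighted model averaging. Each edge node
# averages its assigned clients' models by sample count, then the cloud
# averages the edge models by total samples per edge.
def fedavg(models, weights):
    weights = np.asarray(weights, dtype=float)
    return sum(w * m for w, m in zip(weights / weights.sum(), models))

# Toy setup: 4 clients (model = parameter vector), assigned to 2 edge nodes.
client_models  = [np.array([1.0, 0.0]), np.array([3.0, 2.0]),
                  np.array([0.0, 4.0]), np.array([2.0, 2.0])]
client_samples = [10, 30, 20, 40]
assignment     = {0: [0, 1], 1: [2, 3]}  # edge node -> client indices

edge_models, edge_samples = [], []
for edge, clients in assignment.items():
    edge_models.append(fedavg([client_models[c] for c in clients],
                              [client_samples[c] for c in clients]))
    edge_samples.append(sum(client_samples[c] for c in clients))

global_model = fedavg(edge_models, edge_samples)  # cloud-level aggregation
print(global_model)  # [1.8 2.2], equal to the flat weighted average over all clients
```

Because both levels weight by sample count, the two-level result equals the flat weighted average over all clients; the savings come from sending only one model per edge node over the wide-area link.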
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences arising from its use.