Context-Aware Online Client Selection for Hierarchical Federated
Learning
- URL: http://arxiv.org/abs/2112.00925v2
- Date: Fri, 3 Dec 2021 16:15:21 GMT
- Title: Context-Aware Online Client Selection for Hierarchical Federated
Learning
- Authors: Zhe Qu, Rui Duan, Lixing Chen, Jie Xu, Zhuo Lu and Yao Liu
- Abstract summary: Federated Learning (FL) has been considered as an appealing framework to tackle data privacy issues.
- Score: 33.205640790962505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has been considered an appealing framework
for tackling the data privacy issues of mobile devices, compared to
conventional Machine Learning (ML). Using Edge Servers (ESs) as intermediaries
to perform model aggregation in proximity can reduce transmission overhead and
shows great potential for low-latency FL, which is why the hierarchical FL
(HFL) architecture has attracted increasing attention. Designing a proper
client selection policy can significantly improve training performance and has
been studied extensively in conventional FL. However, to the best of our
knowledge, no existing study focuses on client selection for HFL. Moreover,
client selection for HFL faces more challenges than in conventional FL, e.g.,
the time-varying connections of client-ES pairs and the limited budget of the
Network Operator (NO). In this paper, we investigate a client selection
problem for HFL, where the NO learns which clients are likely to participate
successfully so as to improve training performance (i.e., to select as many
clients as possible in each round) while staying within the limited budget of
each ES. An online policy, called Context-aware Online Client Selection
(COCS), is developed based on the Contextual Combinatorial Multi-Armed Bandit
(CC-MAB) framework. COCS observes side information (context) about the local
computing and transmission conditions of client-ES pairs and makes client
selection decisions that maximize the NO's utility under the limited budget.
Theoretically, COCS achieves sublinear regret compared to an Oracle policy for
both strongly convex and non-convex HFL. Simulation results on real-world
datasets also demonstrate the efficiency of the proposed COCS policy.
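The abstract outlines, but does not spell out, the CC-MAB machinery behind COCS. As a rough illustration only, the sketch below shows the generic CC-MAB pattern (context-space partitioning, forced exploration of under-sampled cells, top-budget exploitation); the class name, the exploration schedule, and the utility model are all assumptions, not taken from the paper.

```python
import math
from collections import defaultdict

class COCSSketch:
    """Generic CC-MAB pattern behind COCS (hypothetical API, not the paper's
    code): partition the context space into hypercubes, force exploration of
    under-sampled cubes, then exploit the highest estimated utilities."""

    def __init__(self, n_bins=4):
        self.n_bins = n_bins              # bins per context dimension
        self.counts = defaultdict(int)    # cube -> number of observations
        self.means = defaultdict(float)   # cube -> mean observed utility

    def _cube(self, context):
        # Map a context vector in [0, 1]^dim to its hypercube index.
        return tuple(min(int(x * self.n_bins), self.n_bins - 1) for x in context)

    def select(self, contexts, budget, round_t):
        """Pick up to `budget` client-ES pairs from {pair_id: context}."""
        threshold = math.log(round_t + 1)  # assumed exploration schedule
        scores = {}
        for pair, ctx in contexts.items():
            cube = self._cube(ctx)
            # Under-explored cubes get infinite score (forced exploration).
            scores[pair] = (float("inf") if self.counts[cube] < threshold
                            else self.means[cube])
        return sorted(scores, key=scores.get, reverse=True)[:budget]

    def update(self, contexts, selected, utilities):
        # Record realized utility, e.g., 1 if the pair's update arrived in time.
        for pair in selected:
            cube = self._cube(contexts[pair])
            self.counts[cube] += 1
            self.means[cube] += (utilities[pair] - self.means[cube]) / self.counts[cube]
```

A faithful implementation would further enforce per-ES budgets and use the confidence terms from the paper's regret analysis rather than this crude log threshold.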
Related papers
- Client-Centric Federated Adaptive Optimization [78.30827455292827]
Federated Learning (FL) is a distributed learning paradigm where clients collaboratively train a model while keeping their own data private.
We propose Client-Centric Federated Adaptive Optimization, a class of novel federated optimization approaches.
arXiv Detail & Related papers (2025-01-17T04:00:50Z)
- How Can Incentives and Cut Layer Selection Influence Data Contribution in Split Federated Learning? [49.16923922018379]
Split Federated Learning (SFL) has emerged as a promising approach by combining the advantages of federated and split learning.
We model the problem using a hierarchical decision-making approach, formulated as a single-leader multi-follower Stackelberg game.
Our findings show that the Stackelberg equilibrium solution maximizes the utility for both the clients and the SFL model owner.
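For intuition only, here is a toy single-leader multi-follower Stackelberg computation; the quadratic client costs, the leader's log valuation, and the grid search are illustrative assumptions, not the paper's model.

```python
import numpy as np

# Toy single-leader multi-follower Stackelberg game (illustrative only).
# The leader posts a per-unit reward r; follower i contributes x to maximize
# r * x - c_i * x**2, giving the closed-form best response x_i(r) = r / (2 c_i).
costs = np.array([0.5, 1.0, 2.0])   # assumed per-client cost coefficients

def follower_response(r):
    return r / (2 * costs)

def leader_utility(r, value=1.0):
    x = follower_response(r)
    # Assumed concave valuation of total contribution minus payments.
    return value * np.log1p(x.sum()) - r * x.sum()

# The leader anticipates the followers' responses (backward induction)
# and grid-searches for the reward that maximizes its own utility.
best_r = max(np.linspace(0.01, 2.0, 200), key=leader_utility)
print(best_r, follower_response(best_r))
```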
arXiv Detail & Related papers (2024-12-10T06:24:08Z)
- Adaptive Client Selection with Personalization for Communication Efficient Federated Learning [2.8484833657472644]
Federated Learning (FL) is a distributed approach to collaboratively training machine learning models.
This article introduces ACSP-FL, a solution to reduce the overall communication and computation costs for training a model in FL environments.
arXiv Detail & Related papers (2024-11-26T19:20:59Z)
- HierSFL: Local Differential Privacy-aided Split Federated Learning in Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
Split Federated Learning is utilized, where clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates resource-constrained clients' participation in model training but also increases the training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that amalgamates models at the edge and cloud phases.
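As a generic illustration of edge-plus-cloud model amalgamation (not HierSFL's actual algorithm, which also involves split layers and local differential privacy), a two-level weighted-averaging sketch under assumed client counts and data sizes:

```python
import numpy as np

def fedavg(updates, weights):
    # Weighted average of parameter vectors (plain FedAvg-style aggregation).
    return np.average(np.stack(updates), axis=0, weights=np.asarray(weights, float))

# Hypothetical two-level round: clients aggregate at their edge server,
# then edge models aggregate at the cloud, weighted by data volume.
rng = np.random.default_rng(0)
client_models = {es: [rng.normal(size=4) for _ in range(3)] for es in ("es0", "es1")}
client_sizes = {es: [10, 20, 30] for es in ("es0", "es1")}

edge_models = {es: fedavg(client_models[es], client_sizes[es]) for es in client_models}
edge_sizes = {es: sum(client_sizes[es]) for es in client_sizes}
global_model = fedavg(list(edge_models.values()), list(edge_sizes.values()))
```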
arXiv Detail & Related papers (2024-01-16T09:34:10Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
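To make the round structure concrete, here is a toy sketch of the broadcast, mine, and aggregate flow described above; the scalar "models", the random mining winner, and the plain mean are stand-ins, not BLADE-FL's actual training or consensus logic.

```python
import random
import statistics

clients = ["c1", "c2", "c3", "c4"]

def local_train(model):
    # Stand-in for local SGD: perturb the scalar model.
    return model + random.uniform(-0.1, 0.1)

def blade_fl_round(models):
    # 1. Every client trains locally and broadcasts its model to the others.
    broadcast = {c: local_train(models[c]) for c in clients}
    # 2. Clients compete to generate a block from the received models;
    #    a uniformly random winner stands in for the mining competition.
    block = {"miner": random.choice(clients), "models": broadcast}
    # 3. Each client aggregates the models recorded in the block before
    #    starting the next round of local training.
    aggregated = statistics.fmean(block["models"].values())
    return {c: aggregated for c in clients}

models = {c: 0.0 for c in clients}
for _ in range(3):
    models = blade_fl_round(models)
```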
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL approach to find the best candidate clients, along with an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
- Hybrid Federated and Centralized Learning [25.592568132720157]
Federated learning (FL) allows clients to send only model updates to the parameter server (PS) instead of the whole dataset.
In this way, FL brings learning to the edge, where powerful computational resources are required on the client side.
We address this through a novel hybrid federated and centralized learning (HFCL) framework to effectively train a learning model.
arXiv Detail & Related papers (2020-11-13T13:11:04Z)
- Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) offers appealing properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
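As a minimal illustration of the bandit framing (one arm per client, reward observed only for scheduled clients), here is a UCB1-style sketch; the reward definition and this particular index are assumptions, since the paper designs its own scheduling algorithms.

```python
import math

class UCBScheduler:
    """UCB1-style client scheduler: one arm per client, reward observed only
    when a client is scheduled (e.g., a timely, useful model update)."""

    def __init__(self, n_clients):
        self.counts = [0] * n_clients
        self.means = [0.0] * n_clients
        self.t = 0

    def pick(self):
        self.t += 1
        for i, n in enumerate(self.counts):
            if n == 0:
                return i  # initialization: schedule every client once
        ucb = [m + math.sqrt(2 * math.log(self.t) / n)
               for m, n in zip(self.means, self.counts)]
        return max(range(len(ucb)), key=ucb.__getitem__)

    def record(self, i, reward):
        # Incremental mean update for the scheduled client's observed reward.
        self.counts[i] += 1
        self.means[i] += (reward - self.means[i]) / self.counts[i]
```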
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.