Context-Aware Online Client Selection for Hierarchical Federated
Learning
- URL: http://arxiv.org/abs/2112.00925v2
- Date: Fri, 3 Dec 2021 16:15:21 GMT
- Title: Context-Aware Online Client Selection for Hierarchical Federated
Learning
- Authors: Zhe Qu, Rui Duan, Lixing Chen, Jie Xu, Zhuo Lu and Yao Liu
- Abstract summary: Federated Learning (FL) has been considered an appealing framework for tackling data privacy issues.
- Score: 33.205640790962505
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) has been considered an appealing framework for
tackling the data privacy issues of mobile devices, compared to conventional Machine
Learning (ML). Using Edge Servers (ESs) as intermediaries to perform model
aggregation in proximity reduces the transmission overhead and offers great
potential for low-latency FL, so the hierarchical architecture of FL (HFL) has
attracted growing attention. Designing a proper client selection policy can
significantly improve training performance, and it has been studied extensively
in conventional FL. However, to the best of our knowledge, no existing work
addresses client selection for HFL. Moreover, client selection for HFL faces
more challenges than conventional FL, e.g., the time-varying connections of
client-ES pairs and the limited budget of the Network Operator (NO). In this
paper, we investigate a client selection problem for HFL, where the NO learns
the number of successfully participating clients to improve training
performance (i.e., selecting as many clients as possible in each round) while
staying within the limited budget of each ES. An online policy, called
Context-aware Online Client Selection (COCS), is developed based on the
Contextual Combinatorial Multi-Armed Bandit (CC-MAB). COCS observes the side
information (context) on the local computing and transmission of client-ES
pairs and makes client selection decisions that maximize the NO's utility given
a limited budget. Theoretically, COCS achieves sublinear regret compared to an
Oracle policy for both strongly convex and non-convex HFL. Simulation results
on real-world datasets also support the efficiency of the proposed COCS policy.
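Since the abstract frames client selection as a budgeted contextual combinatorial bandit, a minimal sketch of that general pattern may help make it concrete. The Python code below illustrates the CC-MAB recipe the abstract describes (discretize contexts into hypercubes, keep per-arm UCB estimates, and select greedily per ES under a budget); it is not the authors' actual COCS algorithm, and the class name COCSSketch and parameters such as n_cells and alpha are assumptions for illustration.

```python
import math
from collections import defaultdict

class COCSSketch:
    """Illustrative CC-MAB-style client selection (a sketch, not the paper's COCS).

    Each (client, ES) pair is an "arm". Its context (e.g., local computing
    speed, channel quality) in [0, 1]^d is discretized into hypercubes, and an
    empirical utility estimate is kept per (arm, hypercube). Selection is a
    greedy knapsack per ES under its budget, using UCB-inflated values.
    """

    def __init__(self, n_cells=4, alpha=1.0):
        self.n_cells = n_cells            # bins per context dimension (assumed)
        self.alpha = alpha                # exploration strength (assumed)
        self.counts = defaultdict(int)    # (arm, cell) -> number of observations
        self.means = defaultdict(float)   # (arm, cell) -> mean observed utility
        self.t = 0                        # round counter

    def _cell(self, ctx):
        # Map a context vector in [0, 1]^d to its hypercube index.
        return tuple(min(int(x * self.n_cells), self.n_cells - 1) for x in ctx)

    def _ucb(self, arm, cell):
        n = self.counts[(arm, cell)]
        if n == 0:
            return float("inf")           # unexplored cells are tried first
        return self.means[(arm, cell)] + math.sqrt(
            self.alpha * math.log(self.t + 1) / n)

    def select(self, contexts, costs, budgets):
        """Pick client-ES pairs greedily by UCB value per unit cost.

        contexts: {(client, es): context vector}
        costs:    {(client, es): price the NO pays, assumed > 0}
        budgets:  {es: per-round budget}
        """
        self.t += 1
        ranked = sorted(
            contexts,
            key=lambda a: self._ucb(a, self._cell(contexts[a])) / costs[a],
            reverse=True)
        chosen, spent = [], defaultdict(float)
        for arm in ranked:
            _, es = arm
            if spent[es] + costs[arm] <= budgets[es]:
                chosen.append(arm)
                spent[es] += costs[arm]
        return chosen

    def update(self, arm, ctx, reward):
        # reward in [0, 1], e.g., 1 if the client's update arrived in time.
        key = (arm, self._cell(ctx))
        self.counts[key] += 1
        self.means[key] += (reward - self.means[key]) / self.counts[key]

# Toy round: two clients competing for one ES whose budget admits only one.
policy = COCSSketch()
ctx = {("c1", "es1"): [0.8, 0.3], ("c2", "es1"): [0.2, 0.9]}
cost = {("c1", "es1"): 1.0, ("c2", "es1"): 1.0}
for arm in policy.select(ctx, cost, budgets={"es1": 1.5}):
    policy.update(arm, ctx[arm], reward=1.0)
```

Under this kind of policy, the abstract's guarantee says the cumulative utility gap to an Oracle that knows the true participation qualities grows sublinearly in the number of rounds.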
Related papers
- Novel clustered federated learning based on local loss [14.380553970274242]
This paper proposes LCFL, a novel metric for evaluating clients' data distributions in federated learning.
It aligns with learning requirements, accurately addresses privacy concerns, and provides more accurate classification.
arXiv Detail & Related papers (2024-07-12T15:37:05Z)
- HierSFL: Local Differential Privacy-aided Split Federated Learning in Mobile Edge Computing [7.180235086275924]
Federated Learning is a promising approach for learning from user data while preserving data privacy.
Split Federated Learning is utilized, where clients upload their intermediate model training outcomes to a cloud server for collaborative server-client model training.
This methodology facilitates resource-constrained clients' participation in model training but also increases the training time and communication overhead.
We propose a novel algorithm, called Hierarchical Split Federated Learning (HierSFL), that amalgamates models at the edge and cloud phases.
arXiv Detail & Related papers (2024-01-16T09:34:10Z)
- Client Orchestration and Cost-Efficient Joint Optimization for NOMA-Enabled Hierarchical Federated Learning [55.49099125128281]
We propose a non-orthogonal multiple access (NOMA) enabled HFL system under semi-synchronous cloud model aggregation.
We show that the proposed scheme outperforms the considered benchmarks regarding HFL performance improvement and total cost reduction.
arXiv Detail & Related papers (2023-11-03T13:34:44Z)
- Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning [55.99444047920231]
We conduct extensive experiments on six widely-used datasets covering both Natural Language Understanding (NLU) and Natural Language Generation (NLG) tasks.
The proposed ATC framework achieves significant improvements compared with various baseline methods.
arXiv Detail & Related papers (2022-12-12T09:27:50Z)
- FL Games: A Federated Learning Framework for Distribution Shifts [71.98708418753786]
Federated learning aims to train predictive models for data that is distributed across clients, under the orchestration of a server.
We propose FL GAMES, a game-theoretic framework for federated learning that learns causal features that are invariant across clients.
arXiv Detail & Related papers (2022-10-31T22:59:03Z)
- Improving Privacy-Preserving Vertical Federated Learning by Efficient Communication with ADMM [62.62684911017472]
Federated learning (FL) enables devices to jointly train shared models while keeping the training data local for privacy purposes.
We introduce a VFL framework with multiple heads (VIM), which takes the separate contribution of each client into account.
VIM achieves significantly higher performance and faster convergence compared with the state-of-the-art.
arXiv Detail & Related papers (2022-07-20T23:14:33Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL)
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Budgeted Online Selection of Candidate IoT Clients to Participate in Federated Learning [33.742677763076]
Federated Learning (FL) is an architecture in which model parameters are exchanged instead of client data.
FL trains a global model by communicating with clients over communication rounds.
We propose an online stateful FL approach to find the best candidate clients, along with an IoT client alarm application.
arXiv Detail & Related papers (2020-11-16T06:32:31Z)
- Hybrid Federated and Centralized Learning [25.592568132720157]
Federated learning (FL) allows the clients to send only their model updates, rather than the whole dataset, to the Parameter Server (PS).
In this way, FL brings learning to the edge level, where powerful computational resources are required on the client side.
We address this through a novel hybrid federated and centralized learning (HFCL) framework to effectively train a learning model.
arXiv Detail & Related papers (2020-11-13T13:11:04Z)
- Multi-Armed Bandit Based Client Scheduling for Federated Learning [91.91224642616882]
Federated learning (FL) features desirable properties such as reduced communication overhead and preserved data privacy.
In each communication round of FL, the clients update local models based on their own data and upload their local updates via wireless channels.
This work provides a multi-armed bandit-based framework for online client scheduling (CS) in FL without knowing wireless channel state information and statistical characteristics of clients.
arXiv Detail & Related papers (2020-07-05T12:32:32Z)
- Client Selection and Bandwidth Allocation in Wireless Federated Learning Networks: A Long-Term Perspective [8.325089307976654]
This paper studies federated learning (FL) in a classic wireless network, where learning clients share a common wireless link to a coordinating server to perform federated model training using their local data.
In such wireless federated learning networks (WFLNs), optimizing the learning performance crucially depends on how clients are selected and how bandwidth is allocated among the selected clients in every learning round, as both radio and client energy resources are limited.
arXiv Detail & Related papers (2020-04-09T01:06:41Z)
This list is automatically generated from the titles and abstracts of the papers on this site.