Cluster-Based Client Selection for Dependent Multi-Task Federated Learning in Edge Computing
- URL: http://arxiv.org/abs/2510.13132v1
- Date: Wed, 15 Oct 2025 04:14:12 GMT
- Title: Cluster-Based Client Selection for Dependent Multi-Task Federated Learning in Edge Computing
- Authors: Jieping Luo, Qiyue Li, Zhizhang Liu, Hang Qi, Jiaying Yin, Jingjin Wu
- Abstract summary: We study the client selection problem in Federated Learning (FL) within mobile edge computing (MEC) environments. We propose CoDa-FL, a Cluster-oriented and Dependency-aware framework designed to reduce the total required time.
- Score: 3.543400452178611
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: We study the client selection problem in Federated Learning (FL) within mobile edge computing (MEC) environments, particularly under the dependent multi-task settings, to reduce the total time required to complete various learning tasks. We propose CoDa-FL, a Cluster-oriented and Dependency-aware framework designed to reduce the total required time via cluster-based client selection and dependent task assignment. Our approach considers Earth Mover's Distance (EMD) for client clustering based on their local data distributions to lower computational cost and improve communication efficiency. We derive a direct and explicit relationship between intra-cluster EMD and the number of training rounds required for convergence, thereby simplifying the otherwise complex process of obtaining the optimal solution. Additionally, we incorporate a directed acyclic graph-based task scheduling mechanism to effectively manage task dependencies. Through numerical experiments, we validate that our proposed CoDa-FL outperforms existing benchmarks by achieving faster convergence, lower communication and computational costs, and higher learning accuracy under heterogeneous MEC settings.
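The abstract names two building blocks without giving their exact procedures: EMD-based clustering of clients by local label distribution, and DAG-based scheduling of dependent tasks. As a rough, hedged illustration only (the helper names `emd_1d`, `cluster_clients`, and `schedule_tasks`, the greedy threshold heuristic, and all parameters are invented here, not taken from the paper), the two ideas might be sketched in Python as:

```python
from collections import defaultdict, deque

def emd_1d(p, q):
    # EMD between two label histograms on the same ordered support with
    # unit ground distance: the sum of absolute cumulative differences.
    cum, dist = 0.0, 0.0
    for pi, qi in zip(p, q):
        cum += pi - qi
        dist += abs(cum)
    return dist

def cluster_clients(dists, threshold):
    # Greedy single-pass clustering (an illustrative heuristic, not the
    # paper's method): assign each client to the first cluster whose
    # representative distribution is within `threshold` EMD, else open
    # a new cluster. `dists` maps client id -> label distribution.
    clusters = []  # list of (representative distribution, member ids)
    for cid, d in dists.items():
        for rep, members in clusters:
            if emd_1d(rep, d) <= threshold:
                members.append(cid)
                break
        else:
            clusters.append((d, [cid]))
    return [members for _, members in clusters]

def schedule_tasks(deps):
    # Kahn's algorithm: emit tasks in an order that respects the
    # dependency DAG. `deps` maps task -> list of prerequisite tasks.
    indeg = {t: len(pre) for t, pre in deps.items()}
    succ = defaultdict(list)
    for t, pre in deps.items():
        for p in pre:
            succ[p].append(t)
    ready = deque(t for t, d in indeg.items() if d == 0)
    order = []
    while ready:
        t = ready.popleft()
        order.append(t)
        for s in succ[t]:
            indeg[s] -= 1
            if indeg[s] == 0:
                ready.append(s)
    if len(order) != len(deps):
        raise ValueError("dependency graph contains a cycle")
    return order

# Usage: clients c1 and c2 have similar label distributions and are
# grouped; tasks run in an order consistent with their dependencies.
clients = {
    "c1": [0.8, 0.1, 0.1],
    "c2": [0.75, 0.15, 0.1],
    "c3": [0.1, 0.1, 0.8],
}
print(cluster_clients(clients, threshold=0.2))
print(schedule_tasks({"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}))
```

The paper additionally derives a relationship between intra-cluster EMD and rounds to convergence and optimizes selection against it; the sketch above covers only the two primitive operations.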
Related papers
- Optimizing Communication and Device Clustering for Clustered Federated Learning with Differential Privacy [28.120922916868683]
We propose a secure and communication-efficient clustered federated learning (CFL) design. In our model, several base stations (BSs) with heterogeneous task-handling capabilities and multiple users with non-independent and identically distributed (non-IID) data jointly perform CFL training. We propose a novel dynamic penalty function assisted value decomposition multi-agent reinforcement learning (DPVD-MARL) algorithm that enables distributed BSs to independently determine their connected users.
arXiv Detail & Related papers (2025-07-09T22:44:26Z)
- TurboSVM-FL: Boosting Federated Learning through SVM Aggregation for Lazy Clients [40.687124234279274]
TurboSVM-FL is a novel federated aggregation strategy that poses no additional computation burden on the client side. We evaluate TurboSVM-FL on multiple datasets including FEMNIST, CelebA, and Shakespeare.
arXiv Detail & Related papers (2024-01-22T14:59:11Z)
- FedLALR: Client-Specific Adaptive Learning Rates Achieve Linear Speedup for Non-IID Data [54.81695390763957]
Federated learning is an emerging distributed machine learning method.
We propose a heterogeneous local variant of AMSGrad, named FedLALR, in which each client adjusts its learning rate.
We show that our client-specified auto-tuned learning rate scheduling can converge and achieve linear speedup with respect to the number of clients.
arXiv Detail & Related papers (2023-09-18T12:35:05Z)
- A Multi-Head Ensemble Multi-Task Learning Approach for Dynamical Computation Offloading [62.34538208323411]
We propose a multi-head ensemble multi-task learning (MEMTL) approach with a shared backbone and multiple prediction heads (PHs).
MEMTL outperforms benchmark methods in both the inference accuracy and mean square error without requiring additional training data.
arXiv Detail & Related papers (2023-09-02T11:01:16Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed by reconstructing the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
- Time-sensitive Learning for Heterogeneous Federated Edge Intelligence [52.83633954857744]
We investigate real-time machine learning in a federated edge intelligence (FEI) system.
FEI systems exhibit heterogenous communication and computational resource distribution.
We propose a time-sensitive federated learning (TS-FL) framework to minimize the overall run-time for collaboratively training a shared ML model.
arXiv Detail & Related papers (2023-01-26T08:13:22Z)
- Clustered Scheduling and Communication Pipelining For Efficient Resource Management Of Wireless Federated Learning [6.753282396352072]
This paper proposes using communication pipelining to enhance the wireless spectrum utilization efficiency and convergence speed of federated learning.
We provide a generic formulation for optimal client clustering under different settings, and we analytically derive an efficient algorithm for obtaining the optimal solution.
arXiv Detail & Related papers (2022-06-15T16:23:19Z)
- Energy-Efficient Multi-Orchestrator Mobile Edge Learning [54.28419430315478]
Mobile Edge Learning (MEL) is a collaborative learning paradigm that features distributed training of Machine Learning (ML) models over edge devices.
In MEL, possible coexistence of multiple learning tasks with different datasets may arise.
We propose lightweight algorithms that can achieve near-optimal performance and facilitate the trade-offs between energy consumption, accuracy, and solution complexity.
arXiv Detail & Related papers (2021-09-02T07:37:10Z)
- Straggler-Resilient Federated Learning: Leveraging the Interplay Between Statistical Accuracy and System Heterogeneity [57.275753974812666]
Federated learning involves learning from data samples distributed across a network of clients while the data remains local.
In this paper, we propose a novel straggler-resilient federated learning method that incorporates statistical characteristics of the clients' data to adaptively select the clients in order to speed up the learning procedure.
arXiv Detail & Related papers (2020-12-28T19:21:14Z)
- Toward Multiple Federated Learning Services Resource Sharing in Mobile Edge Networks [88.15736037284408]
We study a new model of multiple federated learning services at the multi-access edge computing server.
We propose a joint resource optimization and hyper-learning rate control problem, namely MS-FEDL.
Our simulation results demonstrate the convergence performance of our proposed algorithms.
arXiv Detail & Related papers (2020-11-25T01:29:41Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.