When Computing Power Network Meets Distributed Machine Learning: An
Efficient Federated Split Learning Framework
- URL: http://arxiv.org/abs/2305.12979v1
- Date: Mon, 22 May 2023 12:36:52 GMT
- Title: When Computing Power Network Meets Distributed Machine Learning: An
Efficient Federated Split Learning Framework
- Authors: Xinjing Yuan, Lingjun Pu, Lei Jiao, Xiaofei Wang, Meijuan Yang,
Jingdong Xu
- Abstract summary: CPN-FedSL is a Federated Split Learning (FedSL) framework over Computing Power Network (CPN)
We build a dedicated model to capture the basic settings and learning characteristics (e.g., latency, flow, convergence)
- Score: 6.871107511111629
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In this paper, we advocate CPN-FedSL, a novel and flexible Federated Split
Learning (FedSL) framework over Computing Power Network (CPN). We build a
dedicated model to capture the basic settings and learning characteristics
(e.g., training flow, latency and convergence). Based on this model, we
introduce Resource Usage Effectiveness (RUE), a novel performance metric
integrating training utility with system cost, and formulate a multivariate
scheduling problem that maximizes RUE by comprehensively taking client
admission, model partition, server selection, routing and bandwidth allocation
into account (i.e., mixed-integer fractional programming). We design Refinery,
an efficient approach that first linearizes the fractional objective and
non-convex constraints, and then solves the transformed problem via a greedy
based rounding algorithm in multiple iterations. Extensive evaluations
corroborate that CPN-FedSL is superior to the standard and state-of-the-art
learning frameworks (e.g., FedAvg and SplitFed), and that Refinery is
lightweight and significantly outperforms its variants and de facto heuristic
methods under a variety of settings.
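To make the ratio objective and the rounding step more concrete, here is a minimal Python sketch of the general shape of such a procedure in a toy setting: RUE is taken as total training utility divided by total system cost, and a relaxed (fractional) client-admission solution is rounded greedily under a single resource budget. The class, the function names, and the admission rule are illustrative assumptions, not the paper's actual formulation or code.

# Illustrative sketch (not the paper's algorithm): treat Resource Usage
# Effectiveness as utility per unit of system cost and round a relaxed
# client-admission solution greedily under a resource budget.
from dataclasses import dataclass

@dataclass
class Candidate:
    client_id: int
    utility: float   # assumed contribution to training utility if admitted
    cost: float      # assumed system cost (compute + bandwidth) if admitted
    frac: float      # fractional admission value from the linearized relaxation

def rue(selected):
    """Resource Usage Effectiveness: total utility divided by total cost."""
    total_cost = sum(c.cost for c in selected)
    return sum(c.utility for c in selected) / total_cost if total_cost else 0.0

def greedy_round(candidates, budget):
    """Round fractional admissions to {0, 1} while keeping the ratio objective high."""
    # Favor candidates the relaxation already prefers and with good utility/cost ratios.
    order = sorted(candidates, key=lambda c: (c.frac, c.utility / c.cost), reverse=True)
    selected, spent = [], 0.0
    for c in order:
        if spent + c.cost <= budget and rue(selected + [c]) >= rue(selected):
            selected.append(c)
            spent += c.cost
    return selected

pool = [Candidate(0, 8.0, 2.0, 0.9), Candidate(1, 5.0, 4.0, 0.6),
        Candidate(2, 3.0, 1.0, 0.4), Candidate(3, 1.0, 3.0, 0.1)]
print([c.client_id for c in greedy_round(pool, budget=6.0)])   # -> [0]

In the paper the scheduling is joint over client admission, model partition, server selection, routing and bandwidth allocation; the sketch above only isolates the fractional-objective-plus-greedy-rounding idea.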
Related papers
- Read-ME: Refactorizing LLMs as Router-Decoupled Mixture of Experts with System Co-Design [59.00758127310582]
We propose a novel framework Read-ME that transforms pre-trained dense LLMs into smaller MoE models.
Our approach employs activation sparsity to extract experts.
Read-ME outperforms other popular open-source dense models of similar scales.
arXiv Detail & Related papers (2024-10-24T19:48:51Z)
- Federated Learning with Flexible Architectures [12.800116749927266]
This paper introduces Federated Learning with Flexible Architectures (FedFA), an FL training algorithm that allows clients to train models of different widths and depths.
FedFA incorporates the layer grafting technique to align clients' local architectures with the largest network architecture in the FL system during model aggregation.
arXiv Detail & Related papers (2024-06-14T09:44:46Z)
- ESFL: Efficient Split Federated Learning over Resource-Constrained Heterogeneous Wireless Devices [22.664980594996155]
Federated learning (FL) allows multiple parties (distributed devices) to train a machine learning model without sharing raw data.
We propose an efficient split federated learning algorithm (ESFL) to take full advantage of the powerful computing capabilities at a central server.
arXiv Detail & Related papers (2024-02-24T20:50:29Z)
- Workflow Optimization for Parallel Split Learning [12.554265727169742]
Split learning (SL) has been proposed as a way to enable resource-constrained devices to train neural networks (NNs) and participate in federated learning (FL)
In parallel SL, multiple helpers can process model parts of one or more clients, thus considerably reducing the maximum training time over all clients (makespan)
We propose a solution method based on the decomposition of the problem by leveraging its inherent symmetry, and a second one that is fully scalable.
arXiv Detail & Related papers (2024-02-01T14:16:10Z)
- Effectively Heterogeneous Federated Learning: A Pairing and Split Learning Based Approach [16.093068118849246]
This paper presents a novel split federated learning (SFL) framework that pairs clients with different computational resources.
A greedy algorithm is proposed that recasts the optimization of training latency as a graph edge selection problem.
Simulation results show the proposed method can significantly improve the FL training speed and achieve high performance.
arXiv Detail & Related papers (2023-08-26T11:10:54Z)
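As a rough illustration of the pairing idea in the entry above, the sketch below greedily matches resource-rich and resource-poor clients by estimated per-round latency; the function, the latency model, and the fastest-with-slowest rule are assumptions for illustration, not the paper's algorithm.

# Minimal sketch: pair fast and slow clients so each pair's workload is balanced,
# viewing pairing as greedy edge selection over the client set.
def pair_clients(latencies):
    """latencies: dict of client_id -> estimated per-round training latency."""
    ordered = sorted(latencies, key=latencies.get)    # fastest first
    pairs = []
    while len(ordered) >= 2:
        fast, slow = ordered.pop(0), ordered.pop(-1)  # match fastest with slowest
        pairs.append((fast, slow))
    leftover = ordered[0] if ordered else None        # an odd client trains alone
    return pairs, leftover

print(pair_clients({"a": 1.2, "b": 4.0, "c": 2.5, "d": 6.1, "e": 3.3}))
# -> ([('a', 'd'), ('c', 'b')], 'e')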
- Unifying Synergies between Self-supervised Learning and Dynamic Computation [53.66628188936682]
We present a novel perspective on the interplay between SSL and DC paradigms.
We show that it is feasible to simultaneously learn a dense and gated sub-network from scratch in a SSL setting.
The co-evolution of the dense and gated encoders during pre-training offers a good accuracy-efficiency trade-off.
arXiv Detail & Related papers (2023-01-22T17:12:58Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Supernet Training for Federated Image Classification under System Heterogeneity [15.2292571922932]
In this work, we propose a novel framework to consider both scenarios, namely Federation of Supernet Training (FedSup)
It is inspired by how averaging parameters in the model aggregation stage of Federated Learning (FL) is similar to weight-sharing in supernet training.
Under our framework, we present an efficient algorithm (E-FedSup) by sending the sub-model to clients in the broadcast stage for reducing communication costs and training overhead.
arXiv Detail & Related papers (2022-06-03T02:21:01Z)
- An Expectation-Maximization Perspective on Federated Learning [75.67515842938299]
Federated learning describes the distributed training of models across multiple clients while keeping the data private on-device.
In this work, we view the server-orchestrated federated learning process as a hierarchical latent variable model where the server provides the parameters of a prior distribution over the client-specific model parameters.
We show that with simple Gaussian priors and a hard version of the well known Expectation-Maximization (EM) algorithm, learning in such a model corresponds to FedAvg, the most popular algorithm for the federated learning setting.
arXiv Detail & Related papers (2021-11-19T12:58:59Z)
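A quick check of the correspondence claimed in the entry above, in generic notation rather than the paper's: model client parameters as \phi_k \sim \mathcal{N}(\theta, \sigma^2 I) around the server parameter \theta. The hard E-step is then local MAP training,
  \phi_k^* = \arg\max_{\phi_k} \big[ \log p(D_k \mid \phi_k) - \tfrac{1}{2\sigma^2} \|\phi_k - \theta\|^2 \big],
and the M-step
  \theta \leftarrow \arg\max_{\theta} \sum_{k=1}^{K} \log \mathcal{N}(\phi_k^*; \theta, \sigma^2 I) = \tfrac{1}{K} \sum_{k=1}^{K} \phi_k^*
is simply the average of the clients' updated parameters, i.e., FedAvg-style aggregation with uniform client weights.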
- Clustered Federated Learning via Generalized Total Variation Minimization [83.26141667853057]
We study optimization methods to train local (or personalized) models for local datasets with a decentralized network structure.
Our main conceptual contribution is to formulate federated learning as generalized total variation (GTV) minimization.
Our main algorithmic contribution is a fully decentralized federated learning algorithm.
arXiv Detail & Related papers (2021-05-26T18:07:19Z)
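As a rough sketch of what such a formulation typically looks like (generic notation; not necessarily the paper's exact objective): each node i keeps a local model w_i, and all models are trained jointly by minimizing
  \sum_{i} L_i(w_i) + \lambda \sum_{(i,j) \in \mathcal{E}} A_{ij} \, \| w_i - w_j \|,
where L_i is node i's local loss, the edges \mathcal{E} with weights A_{ij} encode similarity between local datasets, and the GTV penalty pulls the parameters of strongly connected nodes together, yielding cluster-wise (personalized) models.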
- Edge-assisted Democratized Learning Towards Federated Analytics [67.44078999945722]
We show the hierarchical learning structure of the proposed edge-assisted democratized learning mechanism, namely Edge-DemLearn.
We also validate Edge-DemLearn as a flexible model training mechanism to build a distributed control and aggregation methodology in regions.
arXiv Detail & Related papers (2020-12-01T11:46:03Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the content (including all information) and is not responsible for any consequences.