Optimizing Split Points for Error-Resilient SplitFed Learning
- URL: http://arxiv.org/abs/2405.19453v1
- Date: Wed, 29 May 2024 19:03:27 GMT
- Title: Optimizing Split Points for Error-Resilient SplitFed Learning
- Authors: Chamani Shiranthika, Parvaneh Saeedi, Ivan V. Bajić
- Abstract summary: SplitFed aims to minimize the computational burden on individual clients in FL and parallelize SL while maintaining privacy.
This study investigates the resilience of SplitFed to packet loss at model split points.
- License: http://creativecommons.org/publicdomain/zero/1.0/
- Abstract: Recent advancements in decentralized learning, such as Federated Learning (FL), Split Learning (SL), and Split Federated Learning (SplitFed), have expanded the potential of machine learning. SplitFed aims to minimize the computational burden on individual clients in FL and parallelize SL while maintaining privacy. This study investigates the resilience of SplitFed to packet loss at model split points. It explores various parameter aggregation strategies of SplitFed by examining the impact of splitting the model at different points (either a shallow or a deep split) on the final global model performance. The experiments, conducted on a human embryo image segmentation task, reveal a statistically significant advantage of a deeper split point.
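The split-point question in the abstract can be sketched as follows. This is a minimal NumPy toy, not the paper's actual setup: the paper uses a segmentation network on embryo images, while the layer sizes, random weights, and 10% loss rate here are illustrative assumptions. Activations crossing the client-server link at the split point are zeroed to mimic lost packets.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 4-layer MLP; weights are random stand-ins for a trained model.
sizes = [32, 64, 64, 64, 10]
weights = [rng.standard_normal((a, b)) / np.sqrt(a) for a, b in zip(sizes, sizes[1:])]

def forward(x, ws):
    """Loss-free reference forward pass (ReLU hidden layers, linear output)."""
    for w in ws[:-1]:
        x = np.maximum(x @ w, 0.0)
    return x @ ws[-1]

def splitfed_forward(x, split, loss_rate):
    """Client runs layers [:split]; activations cross a lossy link; server runs the rest."""
    h = x
    for w in weights[:split]:
        h = np.maximum(h @ w, 0.0)
    # Simulate packet loss at the split point: dropped activation entries become zeros.
    mask = rng.random(h.shape) >= loss_rate
    h = h * mask
    for w in weights[split:-1]:
        h = np.maximum(h @ w, 0.0)
    return h @ weights[-1]

x = rng.standard_normal((8, 32))
clean = forward(x, weights)
shallow = splitfed_forward(x, split=1, loss_rate=0.1)  # cut after the first layer
deep = splitfed_forward(x, split=3, loss_rate=0.1)     # cut after the third layer

err_shallow = np.linalg.norm(clean - shallow)
err_deep = np.linalg.norm(clean - deep)
print(f"output error, shallow split: {err_shallow:.3f}, deep split: {err_deep:.3f}")
```

On a random toy net either split may look better on a single draw; the paper's statistically significant advantage for deep splits comes from repeated training runs on the real task.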
Related papers
- Scalable Federated Unlearning via Isolated and Coded Sharding [76.12847512410767]
Federated unlearning has emerged as a promising paradigm to erase the client-level data effect.
This paper proposes a scalable federated unlearning framework based on isolated sharding and coded computing.
arXiv Detail & Related papers (2024-01-29T08:41:45Z)
- Relaxed Contrastive Learning for Federated Learning [48.96253206661268]
We propose a novel contrastive learning framework to address the challenges of data heterogeneity in federated learning.
Our framework outperforms all existing federated learning approaches by huge margins on the standard benchmarks.
arXiv Detail & Related papers (2024-01-10T04:55:24Z)
- SplitFed resilience to packet loss: Where to split, that is the question [27.29876880765472]
Split Federated Learning (SFL) aims to reduce the computational power required by each client in FL and parallelize SL while maintaining privacy.
This paper investigates the robustness of SFL against packet loss on communication links.
Experiments are carried out on a segmentation model for human embryo images and indicate the statistically significant advantage of a deeper split point.
arXiv Detail & Related papers (2023-07-25T22:54:47Z)
- Reducing Communication for Split Learning by Randomized Top-k Sparsification [25.012786154486164]
Split learning is a simple solution for Vertical Federated Learning (VFL).
We investigate multiple communication reduction methods for split learning, including cut layer size reduction, top-k sparsification, quantization, and L1 regularization.
Our proposed randomized top-k sparsification achieves a better model performance under the same compression level.
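The compression idea above can be sketched in NumPy. The paper's exact randomization scheme is not given here; this hypothetical variant samples cut-layer coordinates with probability proportional to magnitude, so small-but-informative entries occasionally survive, alongside the deterministic top-k baseline for comparison.

```python
import numpy as np

rng = np.random.default_rng(42)

def topk_sparsify(x, k):
    """Deterministic baseline: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def randomized_topk_sparsify(x, k):
    """Hypothetical randomized variant: sample k coordinates without replacement,
    with probability proportional to |x|. (The paper's actual scheme may differ.)"""
    p = np.abs(x)
    p = p / p.sum()
    idx = rng.choice(x.size, size=k, replace=False, p=p)
    out = np.zeros_like(x)
    out[idx] = x[idx]
    return out

acts = rng.standard_normal(100)  # stand-in for cut-layer activations
sparse_det = topk_sparsify(acts, k=10)
sparse_rand = randomized_topk_sparsify(acts, k=10)
print("nonzeros:", np.count_nonzero(sparse_det), np.count_nonzero(sparse_rand))
```

Both variants transmit only k of the 100 activation values, a 10x reduction in link traffic at this compression level.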
arXiv Detail & Related papers (2023-05-29T09:02:05Z)
- Predictive GAN-powered Multi-Objective Optimization for Hybrid Federated Split Learning [56.125720497163684]
We propose a hybrid federated split learning framework in wireless networks.
We design a parallel computing scheme for model splitting without label sharing, and theoretically analyze the influence of the delayed gradient caused by the scheme on the convergence speed.
arXiv Detail & Related papers (2022-09-02T10:29:56Z)
- Pairwise Learning via Stagewise Training in Proximal Setting [0.0]
We combine adaptive sample size and importance sampling techniques for pairwise learning, with convergence guarantees for nonsmooth convex pairwise loss functions.
We demonstrate that sampling opposite instances at each step reduces the variance of the gradient, hence accelerating convergence.
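The sampling mechanism behind that claim can be sketched as follows. For AUC-style pairwise losses, same-label pairs contribute zero loss, so drawing one instance from each class per pair wastes no samples. The dataset size and labels here are illustrative assumptions, not the paper's experiments.

```python
import numpy as np

rng = np.random.default_rng(7)

labels = rng.integers(0, 2, size=200)   # binary labels for a toy dataset
pos = np.flatnonzero(labels == 1)
neg = np.flatnonzero(labels == 0)

def sample_opposite_pairs(n_pairs):
    """Draw pairs with one positive and one negative instance each.
    Every sampled pair has a nonzero pairwise-loss contribution,
    unlike uniform pair sampling, which also draws same-label pairs."""
    i = rng.choice(pos, size=n_pairs)
    j = rng.choice(neg, size=n_pairs)
    return np.stack([i, j], axis=1)

pairs = sample_opposite_pairs(50)
print("sampled", len(pairs), "opposite-label pairs")
```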
arXiv Detail & Related papers (2022-08-08T11:51:01Z)
- MS Lesion Segmentation: Revisiting Weighting Mechanisms for Federated Learning [92.91544082745196]
Federated learning (FL) has been widely employed for medical image analysis.
FL's performance is limited for multiple sclerosis (MS) lesion segmentation tasks.
We propose the first FL MS lesion segmentation framework via two effective re-weighting mechanisms.
arXiv Detail & Related papers (2022-05-03T14:06:03Z)
- Server-Side Local Gradient Averaging and Learning Rate Acceleration for Scalable Split Learning [82.06357027523262]
Federated learning (FL) and split learning (SL) are two leading approaches, each with its own pros and cons, suited to many user clients and to large models, respectively.
In this work, we first identify the fundamental bottlenecks of SL, and thereby propose a scalable SL framework, coined SGLR.
arXiv Detail & Related papers (2021-12-11T08:33:25Z)
- AdaSplit: Adaptive Trade-offs for Resource-constrained Distributed Deep Learning [18.3841463794885]
Split learning (SL) reduces client compute load by splitting the model training between client and server.
AdaSplit enables efficiently scaling SL to low resource scenarios by reducing bandwidth consumption and improving performance across heterogeneous clients.
arXiv Detail & Related papers (2021-12-02T23:33:15Z)
- Splitfed learning without client-side synchronization: Analyzing client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
It provides only 1%-2% better accuracy than Multi-head Split Learning on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z)
- Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
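The joint-prediction idea in the Federated Residual Learning summary can be sketched as a shared model plus a small per-client residual. The linear models, dimensions, and client count here are illustrative assumptions, not the paper's formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

d, k = 16, 4
# Shared server-side model, identical for all clients.
W_server = rng.standard_normal((d, k)) * 0.1
# Small personalized residual model per client, trained locally.
client_residuals = {c: rng.standard_normal((d, k)) * 0.01 for c in range(3)}

def predict(client_id, x):
    """Joint prediction: the shared model carries the common signal,
    and the client's residual personalizes it (illustrative linear case)."""
    return x @ W_server + x @ client_residuals[client_id]

x = rng.standard_normal((5, d))
for c in range(3):
    print(f"client {c} prediction shape: {predict(c, x).shape}")
```

Because the common signal lives in the shared model, each residual can stay small, which is how the central model's complexity is kept minimal while retaining the benefits of joint training.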
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.