Advancements of federated learning towards privacy preservation: from
federated learning to split learning
- URL: http://arxiv.org/abs/2011.14818v1
- Date: Wed, 25 Nov 2020 05:01:33 GMT
- Title: Advancements of federated learning towards privacy preservation: from
federated learning to split learning
- Authors: Chandra Thapa and M.A.P. Chamikara and Seyit A. Camtepe
- Abstract summary: In the distributed collaborative machine learning (DCML) paradigm, federated learning (FL) recently attracted much attention due to its applications in health, finance, and the latest innovations such as industry 4.0 and smart vehicles.
In practical scenarios, not all clients have sufficient computing resources (e.g., Internet of Things devices), the machine learning model has millions of parameters, and model privacy between the server and the clients is a prime concern.
Recently, a hybrid of FL and split learning (SL), called splitfed learning, was introduced to combine the benefits of both FL (faster training/testing time) and SL (model split and training).
- Score: 1.3700362496838854
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: In the distributed collaborative machine learning (DCML) paradigm, federated
learning (FL) recently attracted much attention due to its applications in
health, finance, and the latest innovations such as industry 4.0 and smart
vehicles. FL provides privacy-by-design. It trains a machine learning model
collaboratively over several distributed clients (ranging from two to millions)
such as mobile phones, without sharing their raw data with any other
participant. In practical scenarios, not all clients have sufficient
computing resources (e.g., Internet of Things devices), the machine learning
model has millions of parameters, and the model's privacy between the server
and the clients during training/testing is a prime concern (e.g., against
rival parties). In this
regard, FL alone is not sufficient, so split learning (SL) was introduced. SL
is reliable in these scenarios as it splits a model into multiple portions,
distributes them among the clients and the server, and trains/tests the
respective model portions to accomplish the full model training/testing. In
SL, the participants share neither their data nor their model portions with
any other party, and usually a smaller network portion is assigned to the
clients, where the data resides. Recently, a hybrid of FL and SL, called
splitfed learning (SFL), was introduced to combine the benefits of both FL
(faster training/testing time) and SL (model split and training). Following
the developments from FL to SL,
and considering the importance of SL, this chapter is designed to provide
extensive coverage of SL and its variants. The coverage includes fundamentals,
existing findings, integration with privacy measures such as differential
privacy, open problems, and code implementation.
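To ground the FL half of this summary, the following is a minimal sketch of one federated averaging round in the spirit described above: each client trains a local copy of the model on its own data, and only the weights, never the raw data, travel to the server for element-wise averaging. The model, data loaders, and hyperparameters are illustrative assumptions, not code from the chapter.

```python
import copy
import torch
import torch.nn as nn

def local_update(global_model, loader, epochs=1, lr=0.01):
    """One client's local pass: train a copy of the global model on local data."""
    model = copy.deepcopy(global_model)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model.state_dict()  # only weights leave the client, never raw data

def fed_avg(client_states):
    """Server-side aggregation: element-wise average of the client weights."""
    return {k: torch.stack([s[k].float() for s in client_states]).mean(dim=0)
            for k in client_states[0]}

# One communication round (client_loaders is a list of per-client DataLoaders):
#   states = [local_update(global_model, dl) for dl in client_loaders]
#   global_model.load_state_dict(fed_avg(states))
```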
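Likewise, the split learning exchange described in the abstract, where clients hold the smaller data-side portion and ship only cut-layer activations (smashed data) to the server, can be sketched on a single machine. The two-part network, the label-sharing configuration, and all names below are simplifying assumptions for illustration.

```python
import torch
import torch.nn as nn

# Client holds the smaller, data-side portion; the server holds the rest.
client_net = nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())
server_net = nn.Sequential(nn.Linear(128, 10))

client_opt = torch.optim.SGD(client_net.parameters(), lr=0.01)
server_opt = torch.optim.SGD(server_net.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    # Client: forward to the cut layer; only the smashed data leaves the device.
    smashed = client_net(x)
    to_server = smashed.detach().requires_grad_()  # simulated network boundary

    # Server: finish the forward pass, compute the loss, backprop to the cut layer.
    server_opt.zero_grad()
    loss = loss_fn(server_net(to_server), y)
    loss.backward()
    server_opt.step()

    # Client: receive the cut-layer gradient and finish backpropagation.
    client_opt.zero_grad()
    smashed.backward(to_server.grad)
    client_opt.step()
    return loss.item()

# Example: one step on a dummy MNIST-sized batch.
x, y = torch.randn(32, 1, 28, 28), torch.randint(0, 10, (32,))
print(train_step(x, y))
```

The detach/requires_grad pair stands in for the client-server link: the server sees only the activations, and the client receives back only the gradient at the cut layer, matching the abstract's point that neither raw data nor full model portions are shared.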
Related papers
- A Survey on Efficient Federated Learning Methods for Foundation Model Training [62.473245910234304]
Federated Learning (FL) has become an established technique to facilitate privacy-preserving collaborative training across a multitude of clients.
In the wake of Foundation Models (FM), however, the reality is different for many deep learning applications.
We discuss the benefits and drawbacks of parameter-efficient fine-tuning (PEFT) for FL applications.
arXiv Detail & Related papers (2024-01-09T10:22:23Z) - Tunable Soft Prompts are Messengers in Federated Learning [55.924749085481544]
Federated learning (FL) enables multiple participants to collaboratively train machine learning models using decentralized data sources.
The lack of model privacy protection in FL has become a challenge that cannot be neglected.
We propose a novel FL training approach that accomplishes information exchange among participants via tunable soft prompts.
arXiv Detail & Related papers (2023-11-12T11:01:10Z) - PFSL: Personalized & Fair Split Learning with Data & Label Privacy for
thin clients [0.5144809478361603]
PFSL is a new framework of distributed split learning where a large number of thin clients perform transfer learning in parallel.
We implement a lightweight personalization step for client models to provide high performance on their respective data distributions.
Our accuracy far exceeds that of current SL algorithms and is very close to that of centralized learning on several real-life benchmarks.
arXiv Detail & Related papers (2023-03-19T10:38:29Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Federated Learning and Meta Learning: Approaches, Applications, and
Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and how they can be applied over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z) - Server-Side Local Gradient Averaging and Learning Rate Acceleration for
Scalable Split Learning [82.06357027523262]
Federated learning (FL) and split learning (SL) are two spearheads, each with its own pros and cons, suited to many user clients and to large models, respectively.
In this work, we first identify the fundamental bottlenecks of SL, and thereby propose a scalable SL framework, coined SGLR.
arXiv Detail & Related papers (2021-12-11T08:33:25Z) - Splitfed learning without client-side synchronization: Analyzing
client-side split network portion size to overall performance [4.689140226545214]
Federated Learning (FL), Split Learning (SL), and SplitFed Learning (SFL) are three recent developments in distributed machine learning.
This paper studies SFL without client-side model synchronization.
SFL with synchronization turns out to provide only 1%-2% better accuracy than this multi-head split learning variant on the MNIST test set.
arXiv Detail & Related papers (2021-09-19T22:57:23Z) - IPLS : A Framework for Decentralized Federated Learning [6.6271520914941435]
We introduce IPLS, a fully decentralized federated learning framework that is partially based on the interplanetary file system (IPFS).
IPLS scales with the number of participants, is robust against intermittent connectivity and dynamic participant departures/arrivals, requires minimal resources, and guarantees that the accuracy of the trained model quickly converges to that of a centralized FL framework with an accuracy drop of less than one per thousand.
arXiv Detail & Related papers (2021-01-06T07:44:51Z) - Federated Mutual Learning [65.46254760557073]
Federated Mutual Learning (FML) allows clients to train a generalized model collaboratively and a personalized model independently.
The experiments show that FML can achieve better performance than alternatives in a typical federated learning setting.
arXiv Detail & Related papers (2020-06-27T09:35:03Z) - SplitFed: When Federated Learning Meets Split Learning [16.212941272007285]
Federated learning (FL) and split learning (SL) are two popular distributed machine learning approaches.
This paper presents a novel approach, named splitfed learning (SFL), that amalgamates the two approaches.
SFL provides test accuracy and communication efficiency similar to SL while significantly reducing the computation time per global epoch for multiple clients (a minimal SFL sketch follows this list).
arXiv Detail & Related papers (2020-04-25T08:52:50Z)
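Combining the two sketches above yields the splitfed (SFL) idea from the SplitFed entry: each client performs a split-learning step against the main server, after which a fed server averages the client-side portions. The single-machine loop, the sequential server updates, and all names below are simplifying assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn

def make_client_net():
    # Client-side (data-side) portion of the split model.
    return nn.Sequential(nn.Flatten(), nn.Linear(784, 128), nn.ReLU())

server_net = nn.Sequential(nn.Linear(128, 10))  # server-side portion
loss_fn = nn.CrossEntropyLoss()

def sfl_round(global_client_state, client_batches, lr=0.01):
    """One global epoch: a split-learning step per client, then FedAvg."""
    states = []
    for x, y in client_batches:  # in SFL, clients run these steps in parallel
        net = make_client_net()
        net.load_state_dict(global_client_state)  # start from the global weights
        cli_opt = torch.optim.SGD(net.parameters(), lr=lr)
        srv_opt = torch.optim.SGD(server_net.parameters(), lr=lr)

        smashed = net(x)                               # client forward
        to_server = smashed.detach().requires_grad_()  # simulated network hop
        srv_opt.zero_grad()
        loss_fn(server_net(to_server), y).backward()   # server forward/backward
        srv_opt.step()

        cli_opt.zero_grad()
        smashed.backward(to_server.grad)               # client backward
        cli_opt.step()
        states.append(net.state_dict())

    # Fed server: average the client-side portions, as in the FedAvg sketch.
    return {k: torch.stack([s[k].float() for s in states]).mean(dim=0)
            for k in states[0]}

# Example: two clients with one dummy MNIST-sized batch each.
batches = [(torch.randn(8, 1, 28, 28), torch.randint(0, 10, (8,)))
           for _ in range(2)]
new_client_state = sfl_round(make_client_net().state_dict(), batches)
```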
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of the generated information and is not responsible for any consequences arising from its use.