FedSL: Federated Split Learning on Distributed Sequential Data in
Recurrent Neural Networks
- URL: http://arxiv.org/abs/2011.03180v2
- Date: Sat, 16 Oct 2021 19:18:00 GMT
- Title: FedSL: Federated Split Learning on Distributed Sequential Data in
Recurrent Neural Networks
- Authors: Ali Abedi and Shehroz S. Khan
- Abstract summary: Federated Learning (FL) and Split Learning (SL) are privacy-preserving Machine-Learning (ML) techniques.
Existing FL and SL approaches work on horizontally or vertically partitioned data.
We propose a novel federated split learning framework, FedSL, to train models on distributed sequential data.
- Score: 4.706263507340607
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) and Split Learning (SL) are privacy-preserving
Machine-Learning (ML) techniques that enable training ML models over data
distributed among clients without requiring direct access to their raw data.
Existing FL and SL approaches work on horizontally or vertically partitioned
data and cannot handle sequentially partitioned data where segments of
multiple-segment sequential data are distributed across clients. In this paper,
we propose a novel federated split learning framework, FedSL, to train models
on distributed sequential data. The most common ML models to train on
sequential data are Recurrent Neural Networks (RNNs). Since the proposed
framework is privacy-preserving, segments of multiple-segment sequential data
cannot be shared between clients or between clients and the server. To circumvent
this limitation, we propose a novel SL approach tailored for RNNs. An RNN is
split into sub-networks, and each sub-network is trained on one client that
holds a single segment of the multiple-segment training sequences. During local
training, the sub-networks on different clients communicate with each other to
capture latent dependencies between consecutive segments of multiple-segment
sequential data on different clients, but without sharing raw data or complete
model parameters. After training their sub-networks on the local sequence
segments, all clients send them to a federated server, where the sub-networks
are aggregated to generate a global model. The experimental
results on simulated and real-world datasets demonstrate that the proposed
method successfully trains models on distributed sequential data while
preserving privacy, and that it outperforms previous FL and centralized learning
approaches by achieving higher accuracy in fewer communication rounds.
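The split-RNN procedure described in the abstract can be made concrete with a short sketch. This is a minimal illustration, not the authors' implementation: it assumes two clients, GRU sub-networks, a classification head on the client holding the final segment, and plain parameter averaging on the server; the names (ClientSubRNN, local_round, fedavg) are hypothetical.

```python
# Minimal sketch of the FedSL idea (illustrative only, not the paper's code).
import copy
import torch
import torch.nn as nn


class ClientSubRNN(nn.Module):
    """One client's sub-network: an RNN over that client's local segment."""

    def __init__(self, input_dim, hidden_dim, num_classes=None):
        super().__init__()
        self.rnn = nn.GRU(input_dim, hidden_dim, batch_first=True)
        # Only the client holding the final segment owns a prediction head.
        self.head = nn.Linear(hidden_dim, num_classes) if num_classes else None

    def forward(self, segment, h0=None):
        out, h_n = self.rnn(segment, h0)
        logits = self.head(out[:, -1]) if self.head is not None else None
        return logits, h_n


def local_round(clients, segments, labels, lr=1e-2):
    """One local training step over a chain of clients holding consecutive segments.

    Each client feeds only its own segment; the hidden state h is the only
    tensor passed to the next client, so raw segments never leave a client.
    A single optimizer over all sub-networks is a simplification of the
    cross-client forward/backward exchange described in the abstract.
    """
    opt = torch.optim.SGD([p for c in clients for p in c.parameters()], lr=lr)
    h, logits = None, None
    for client, segment in zip(clients, segments):
        logits, h = client(segment, h)
    loss = nn.functional.cross_entropy(logits, labels)
    opt.zero_grad()
    loss.backward()  # gradients flow back through the exchanged hidden states
    opt.step()
    return loss.item()


def fedavg(sub_networks):
    """Server-side aggregation: element-wise average of sub-network parameters.

    Assumed here to average sub-networks that occupy the same chain position,
    received from different client groups.
    """
    avg_state = copy.deepcopy(sub_networks[0].state_dict())
    for name in avg_state:
        avg_state[name] = torch.stack(
            [m.state_dict()[name].float() for m in sub_networks]
        ).mean(dim=0)
    return avg_state  # load into a fresh sub-network with load_state_dict


# Example: one sequence of two 10-step segments split across two clients.
client_a = ClientSubRNN(input_dim=8, hidden_dim=16)                 # first segment
client_b = ClientSubRNN(input_dim=8, hidden_dim=16, num_classes=3)  # final segment
segments = [torch.randn(4, 10, 8), torch.randn(4, 10, 8)]           # batch of 4
labels = torch.randint(0, 3, (4,))
local_round([client_a, client_b], segments, labels)
```

In the actual protocol each client would update only its own sub-network, exchanging the hidden state (forward) and its gradient (backward) with its neighbour over the network; the single optimizer above only keeps the sketch short. A real deployment would likewise aggregate sub-networks collected from many client groups, one average per position in the chain.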
Related papers
- Federated Clustering: An Unsupervised Cluster-Wise Training for Decentralized Data Distributions [1.6385815610837167]
Federated Cluster-Wise Refinement (FedCRef) involves clients that collaboratively train models on clusters with similar data distributions.
In these groups, clients collaboratively train a shared model representing each data distribution, while continuously refining their local clusters to enhance data association accuracy.
This iterative process allows our system to identify all potential data distributions across the network and develop robust representation models for each.
arXiv Detail & Related papers (2024-08-20T09:05:44Z)
- Multi-Level Additive Modeling for Structured Non-IID Federated Learning [54.53672323071204]
We train models organized in a multi-level structure, called Multi-level Additive Models (MAM), for better knowledge sharing across heterogeneous clients.
In federated MAM (FeMAM), each client is assigned to at most one model per level and its personalized prediction sums up the outputs of models assigned to it across all levels.
Experiments show that FeMAM surpasses existing clustered FL and personalized FL methods in various non-IID settings.
arXiv Detail & Related papers (2024-05-26T07:54:53Z)
- FedSampling: A Better Sampling Strategy for Federated Learning [81.85411484302952]
Federated learning (FL) is an important technique for learning models from decentralized data in a privacy-preserving way.
Existing FL methods usually uniformly sample clients for local model learning in each round.
We propose a novel data-uniform sampling strategy for federated learning (FedSampling).
arXiv Detail & Related papers (2023-06-25T13:38:51Z)
- Subspace based Federated Unlearning [75.90552823500633]
Federated unlearning aims to remove a specified target client's contribution to federated learning (FL) so as to satisfy the user's right to be forgotten.
Most existing federated unlearning algorithms require the server to store the history of the parameter updates.
We propose a simple-yet-effective subspace based federated unlearning method, dubbed SFU, that lets the global model perform gradient ascent.
arXiv Detail & Related papers (2023-02-24T04:29:44Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
(A minimal sketch contrasting what FL and SL exchange per round appears after this list.)
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Optimizing Server-side Aggregation For Robust Federated Learning via Subspace Training [80.03567604524268]
Non-IID data distribution across clients and poisoning attacks are two main challenges in real-world federated learning systems.
We propose SmartFL, a generic approach that optimizes the server-side aggregation process.
We provide theoretical analyses of the convergence and generalization capacity for SmartFL.
arXiv Detail & Related papers (2022-11-10T13:20:56Z)
- Efficient Distribution Similarity Identification in Clustered Federated Learning via Principal Angles Between Client Data Subspaces [59.33965805898736]
Clustered federated learning has been shown to produce promising results by grouping clients into clusters.
Existing clustered FL algorithms essentially try to group together clients with similar data distributions.
However, prior FL algorithms infer these similarities only indirectly during training.
arXiv Detail & Related papers (2022-09-21T17:37:54Z)
- FLIS: Clustered Federated Learning via Inference Similarity for Non-IID Data Distribution [7.924081556869144]
We present a new algorithm, FLIS, which groups the client population into clusters with jointly trainable data distributions.
We present experimental results to demonstrate the benefits of FLIS over the state-of-the-art benchmarks on CIFAR-100/10, SVHN, and FMNIST datasets.
arXiv Detail & Related papers (2022-08-20T22:10:48Z)
- LSTMSPLIT: Effective SPLIT Learning based LSTM on Sequential Time-Series Data [3.9011223632827385]
We propose a new approach, LSTMSPLIT, that uses SL architecture with an LSTM network to classify time-series data with multiple clients.
The proposed method, LSTMSPLIT, has achieved better or reasonable accuracy compared to the Split-1DCNN method using the electrocardiogram dataset and the human activity recognition dataset.
arXiv Detail & Related papers (2022-03-08T11:44:12Z)
- Data Selection for Efficient Model Update in Federated Learning [0.07614628596146598]
We propose to reduce the amount of local data that is needed to train a global model.
We do this by splitting the model into a lower part for generic feature extraction and an upper part that is more sensitive to the characteristics of the local data.
Our experiments show that less than 1% of the local data can transfer the characteristics of the client data to the global model.
arXiv Detail & Related papers (2021-11-05T14:07:06Z)
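As referenced in the "Scalable Collaborative Learning via Representation Sharing" entry above, the following sketch contrasts what leaves the client in one FL round versus one SL round. It is a minimal illustration under an assumed toy two-part model; the variable names and the cut-layer split are hypothetical and not taken from any of the listed papers.

```python
# Illustrative contrast of what crosses the network in FL vs. SL (toy example).
import torch
import torch.nn as nn

client_part = nn.Sequential(nn.Linear(16, 32), nn.ReLU())  # stays on the client
server_part = nn.Linear(32, 10)                            # held by the server
x, y = torch.randn(8, 16), torch.randint(0, 10, (8,))      # one local mini-batch

# Federated Learning round: the client trains the whole model locally and only
# the updated parameters are uploaded for server-side aggregation.
full_model = nn.Sequential(client_part, server_part)
opt = torch.optim.SGD(full_model.parameters(), lr=0.1)
loss = nn.functional.cross_entropy(full_model(x), y)
opt.zero_grad()
loss.backward()
opt.step()
fl_payload = full_model.state_dict()  # model parameters leave the device

# Split Learning round: the client computes only up to the cut layer and sends
# the "smashed data" (activations); the server finishes the forward and backward
# passes and returns the gradient at the cut so the client can finish its backward.
# (Labels are assumed to be available at the server in this vanilla variant.)
smashed = client_part(x)                                 # activations leave the device
smashed_remote = smashed.detach().requires_grad_(True)   # server's view of the cut
server_loss = nn.functional.cross_entropy(server_part(smashed_remote), y)
server_loss.backward()                                   # server-side backward pass
grad_at_cut = smashed_remote.grad                        # sent back to the client
smashed.backward(grad_at_cut)                            # client-side backward pass
```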