Federated Split GANs
- URL: http://arxiv.org/abs/2207.01750v1
- Date: Mon, 4 Jul 2022 23:53:47 GMT
- Title: Federated Split GANs
- Authors: Pranvera Kortoçi, Yilei Liang, Pengyuan Zhou, Lik-Hang Lee, Abbas
Mehrabi, Pan Hui, Sasu Tarkoma, Jon Crowcroft
- Abstract summary: We propose an alternative approach that trains ML models on users' devices themselves.
We focus on GANs (generative adversarial networks) and leverage their inherent privacy-preserving attribute.
Our system preserves data privacy, keeps training time short, and yields the same accuracy as model training on unconstrained devices.
- Score: 12.007429155505767
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Mobile devices and the immense amount and variety of data they generate are
key enablers of machine learning (ML)-based applications. Traditional ML
techniques have shifted toward new paradigms such as federated learning (FL) and split
learning (SL) to improve the protection of users' data privacy. However, these
paradigms often rely on server(s) located at the edge or in the cloud to train the
computationally heavy parts of an ML model to avoid draining the limited
resources of client devices, thereby exposing device data to such third
parties. This work proposes an alternative approach that trains
computationally heavy ML models on users' devices themselves, where the
corresponding device data resides. Specifically, we focus on GANs (generative
adversarial networks) and leverage their inherent privacy-preserving attribute.
We train the discriminative part of a GAN with raw data on users' devices,
whereas the generative model is trained remotely (e.g., on a server), for which there
is no need to access true sensor data. Moreover, our approach ensures that the
computational load of training the discriminative model is shared among users'
devices, proportionally to their computation capabilities, by means of SL. We
implement our proposed collaborative training scheme of a computationally heavy
GAN model on real resource-constrained devices. The results show that our
system preserves data privacy, keeps training time short, and yields the same
accuracy as model training on unconstrained devices (e.g., in the cloud). Our code can
be found at https://github.com/YukariSonz/FSL-GAN
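The device/server protocol described in the abstract can be illustrated with a minimal toy sketch: the server holds the generator and never sees real data; the device holds the discriminator, trains it on raw local samples plus the fakes it receives, and ships back only the gradient of the generator loss with respect to those fakes. This is an assumption-laden 1-D numpy illustration (linear models, toy Gaussian "sensor" data, no network transport, and no SL-based partitioning of the discriminator across multiple devices), not the implementation in the paper's repository.

```python
import numpy as np

rng = np.random.default_rng(0)

# Real "sensor" data: lives only on the device and is never transmitted.
real_data = rng.normal(loc=4.0, scale=0.5, size=1024)

# Server side: generator parameters (the server never touches real_data).
w_g, b_g = 0.1, 0.0
# Device side: discriminator parameters (trained on raw local data).
w_d, b_d = 0.1, 0.0

sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
lr, batch = 0.05, 64

for step in range(500):
    # Server: sample noise, generate fakes, ship ONLY the fakes to the device.
    z = rng.normal(size=batch)
    fake = w_g * z + b_g

    # Device: one discriminator step on local reals + received fakes
    # (binary cross-entropy; gradients derived analytically for the toy model).
    x_real = rng.choice(real_data, batch)
    p_real = sigmoid(w_d * x_real + b_d)   # should approach 1
    p_fake = sigmoid(w_d * fake + b_d)     # should approach 0
    w_d -= lr * np.mean((p_real - 1) * x_real + p_fake * fake)
    b_d -= lr * np.mean((p_real - 1) + p_fake)

    # Device -> server: gradient of the generator loss -log D(fake)
    # w.r.t. the fake samples. Only this gradient crosses the boundary.
    g_fake = (sigmoid(w_d * fake + b_d) - 1) * w_d

    # Server: back-propagate through the generator and update it.
    w_g -= lr * np.mean(g_fake * z)
    b_g -= lr * np.mean(g_fake)

print(f"generated mean after training: {np.mean(w_g * rng.normal(size=4096) + b_g):.2f}")
```

With these settings the generated samples drift from 0 toward the real mean of 4, even though the generator side only ever observes noise, fake samples, and gradients. In the full scheme, the discriminator update above would additionally be cut at a layer boundary and shared across devices in proportion to their compute, per SL.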
Related papers
- Federated Learning for Misbehaviour Detection with Variational Autoencoders and Gaussian Mixture Models [0.2999888908665658]
Federated Learning (FL) has become an attractive approach to collaboratively train Machine Learning (ML) models.
This work proposes a novel unsupervised FL approach for the identification of potential misbehavior in vehicular environments.
We leverage the computing capabilities of public cloud services for model aggregation purposes.
arXiv Detail & Related papers (2024-05-16T08:49:50Z)
- CRSFL: Cluster-based Resource-aware Split Federated Learning for Continuous Authentication [5.636155173401658]
Split Learning (SL) and Federated Learning (FL) have emerged as promising technologies for training a decentralized Machine Learning (ML) model.
We propose combining these technologies to address the continuous authentication challenge while protecting user privacy.
arXiv Detail & Related papers (2024-05-12T06:08:21Z)
- Partial Federated Learning [26.357723187375665]
Federated Learning (FL) is a popular algorithm to train machine learning models on user data constrained to edge devices.
We propose a new algorithm called Partial Federated Learning (PartialFL), where a machine learning model is trained using data where a subset of data modalities can be made available to the server.
arXiv Detail & Related papers (2024-03-03T21:04:36Z)
- Unsupervised anomalies detection in IIoT edge devices networks using federated learning [0.0]
Federated learning (FL) is a distributed machine learning approach that trains a model on the device that gathered the data itself.
In this paper, we leverage the benefits of FL and implement the FedAvg algorithm on a recent dataset that represents modern IoT/IIoT device networks.
We also evaluate some shortcomings of FedAvg, such as the unfairness that arises during training when struggling devices do not participate in every stage of training.
arXiv Detail & Related papers (2023-08-23T14:53:38Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and back propagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL remains largely unexplored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Applied Federated Learning: Architectural Design for Robust and Efficient Learning in Privacy Aware Settings [0.8454446648908585]
The classical machine learning paradigm requires the aggregation of user data in a central location.
Centralization of data poses risks, including a heightened risk of internal and external security incidents.
Federated learning with differential privacy is designed to avoid the server-side centralization pitfall.
arXiv Detail & Related papers (2022-06-02T00:30:04Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training scheme, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data to be distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
- Think Locally, Act Globally: Federated Learning with Local and Global Representations [92.68484710504666]
Federated learning is a method of training models on private data distributed over multiple devices.
We propose a new federated learning algorithm that jointly learns compact local representations on each device.
We also evaluate on the task of personalized mood prediction from real-world mobile data where privacy is key.
arXiv Detail & Related papers (2020-01-06T12:40:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.