FLaPS: Federated Learning and Privately Scaling
- URL: http://arxiv.org/abs/2009.06005v1
- Date: Sun, 13 Sep 2020 14:20:17 GMT
- Title: FLaPS: Federated Learning and Privately Scaling
- Authors: Sudipta Paul, Poushali Sengupta and Subhankar Mishra
- Abstract summary: Federated learning (FL) is a distributed learning process where the model is transferred to the devices that possess data.
We present the Federated Learning and Privately Scaling (FLaPS) architecture, which improves the scalability as well as the security and privacy of the system.
- Score: 3.618133010429131
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated learning (FL) is a distributed learning process in which the model
(weights and checkpoints) is transferred to the devices that possess the data, rather
than transferring and aggregating the data centrally in the classical way. In
this way, sensitive data never leaves the user devices. FL uses the FedAvg
algorithm, which trains through iterative model averaging over non-IID and
unbalanced distributed data, without depending on the data quantity. FL has
several issues: 1) poor scalability, since the model is trained iteratively over
all the devices, a problem amplified by device drop-outs; 2) the security and
privacy trade-off of the learning process is still not robust enough; and 3) the
overall communication efficiency is low and the cost is high. To mitigate
these challenges we present the Federated Learning and Privately Scaling (FLaPS)
architecture, which improves the scalability as well as the security and privacy of
the system. The devices are grouped into clusters, which yields better privacy and
a scaled turnaround time to finish a round of training. Therefore, even
if a device drops out in the middle of training, the whole process can be
restarted after a definite amount of time. Both the data and the model are
communicated using differentially private reports with iterative shuffling,
which provides a better privacy-utility trade-off. We evaluated FLaPS on the MNIST,
CIFAR10, and TINY-IMAGENET-200 datasets using various CNN models. Experimental
results show FLaPS to be an improved, time- and privacy-scaled environment
whose after-learning parameters are better than or comparable to those of the
central and FL models.
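To make the training pipeline described in the abstract more concrete, the following is a minimal sketch (not the authors' implementation) of one round of clustered, privately scaled learning: devices are grouped into clusters, each device releases only a clipped and Gaussian-noised report of its local update, the reports are shuffled so the aggregator cannot link them back to devices, and the cluster models are then combined with FedAvg-style data-size weighting. The clip norm, noise scale, cluster assignment, and the stand-in local_update are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)


def local_update(global_w):
    # Stand-in for on-device training: a real client would run several
    # epochs of SGD on its private data and return the new weights.
    return global_w + rng.normal(0.0, 0.1, size=global_w.shape)


def dp_report(update, clip_norm=1.0, noise_std=0.5):
    # Differentially private report: clip the update's norm and add
    # Gaussian noise before it leaves the device.
    scale = min(1.0, clip_norm / (np.linalg.norm(update) + 1e-12))
    return update * scale + rng.normal(0.0, noise_std, size=update.shape)


def fedavg(models, sizes):
    # FedAvg-style aggregation: weight each model by its share of the data.
    w = np.asarray(sizes, dtype=float)
    w /= w.sum()
    return sum(wi * m for wi, m in zip(w, models))


# Toy setup: 12 devices with unbalanced data, grouped into 3 clusters.
dim, n_devices, n_clusters = 8, 12, 3
global_w = np.zeros(dim)
data_sizes = rng.integers(10, 100, size=n_devices)
clusters = np.array_split(rng.permutation(n_devices), n_clusters)

for rnd in range(5):
    cluster_models, cluster_sizes = [], []
    for members in clusters:
        # Each device trains locally and releases only a noised report
        # of its update (local weights minus the current global weights).
        reports = [dp_report(local_update(global_w) - global_w) for _ in members]
        # Shuffle the reports so the aggregator cannot link a report back
        # to the device that produced it, then average them uniformly.
        reports = [reports[i] for i in rng.permutation(len(reports))]
        cluster_models.append(global_w + np.mean(reports, axis=0))
        cluster_sizes.append(int(sum(data_sizes[d] for d in members)))
    # Combine the cluster models into the next global model, weighted by
    # how much data each cluster holds (FedAvg across clusters).
    global_w = fedavg(cluster_models, cluster_sizes)

print("Global model after 5 rounds:", np.round(global_w, 3))
```

Under this sketch, privacy comes from the per-report clipping and noise plus the shuffle step, while scalability comes from clusters running independently, so a dropped device delays only its own cluster rather than the whole round.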
Related papers
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by the remote devices that own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- Adaptive Model Pruning and Personalization for Federated Learning over Wireless Networks [72.59891661768177]
Federated learning (FL) enables distributed learning across edge devices while protecting data privacy.
We consider an FL framework with partial model pruning and personalization to overcome these challenges.
This framework splits the learning model into a global part with model pruning, shared with all devices to learn data representations, and a personalized part to be fine-tuned for a specific device (a minimal sketch of this split appears after this list).
arXiv Detail & Related papers (2023-09-04T21:10:45Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Online Data Selection for Federated Learning with Limited Storage [53.46789303416799]
Federated Learning (FL) has been proposed to achieve distributed machine learning among networked devices.
The impact of on-device storage on the performance of FL has not yet been explored.
In this work, we take the first step to consider the online data selection for FL with limited on-device storage.
arXiv Detail & Related papers (2022-09-01T03:27:33Z)
- Acceleration of Federated Learning with Alleviated Forgetting in Local Training [61.231021417674235]
Federated learning (FL) enables distributed optimization of machine learning models while protecting privacy.
We propose FedReg, an algorithm to accelerate FL with alleviated knowledge forgetting in the local training stage.
Our experiments demonstrate that FedReg significantly improves the convergence rate of FL, especially when the neural network architecture is deep.
arXiv Detail & Related papers (2022-03-05T02:31:32Z)
- Learnings from Federated Learning in the Real world [19.149989896466852]
Federated Learning (FL) applied to real world data may suffer from several idiosyncrasies.
Data across devices could be distributed such that there are some "heavy devices" with large amounts of data while there are many "light users" with only a handful of data points.
We evaluate the impact of such idiosyncrasies on Natural Language Understanding (NLU) models trained using FL.
arXiv Detail & Related papers (2022-02-08T15:21:31Z)
- Federated Learning-based Active Authentication on Mobile Devices [98.23904302910022]
User active authentication on mobile devices aims to learn a model that can correctly recognize the enrolled user based on device sensor information.
We propose a novel user active authentication training method, termed Federated Active Authentication (FAA).
We show that existing FL/SL methods are suboptimal for FAA as they rely on the data being distributed homogeneously.
arXiv Detail & Related papers (2021-04-14T22:59:08Z)
- Over-the-Air Federated Learning from Heterogeneous Data [107.05618009955094]
Federated learning (FL) is a framework for distributed learning of centralized models.
We develop a Convergent OTA FL (COTAF) algorithm, which enhances the common local stochastic gradient descent (SGD) FL algorithm.
We numerically show that the precoding induced by COTAF notably improves the convergence rate and the accuracy of models trained via OTA FL.
arXiv Detail & Related papers (2020-09-27T08:28:25Z)
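The Adaptive Model Pruning and Personalization entry above describes splitting the model into a pruned global part, shared by all devices to learn data representations, and a personalized part fine-tuned per device. Below is a minimal toy sketch of that split, not the cited paper's algorithm: the dimensions, the magnitude-pruning rule, the keep ratio, and the random stand-in for local training are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative dimensions: the first block of weights is the global
# (representation) part, the second block is the personalized part
# that never leaves its device.
shared_dim, personal_dim, n_devices = 16, 4, 5
global_part = np.zeros(shared_dim)
personal_parts = [np.zeros(personal_dim) for _ in range(n_devices)]


def prune_mask(w, keep_ratio=0.5):
    # Magnitude pruning: keep only the largest `keep_ratio` fraction of
    # the global weights; pruned positions are neither trained nor sent.
    k = max(1, int(keep_ratio * w.size))
    threshold = np.sort(np.abs(w))[-k]
    return (np.abs(w) >= threshold).astype(float)


for rnd in range(3):
    # Recompute the pruning mask each round (a tiny perturbation avoids a
    # degenerate all-zero mask in the very first round of this toy example).
    mask = prune_mask(global_part + rng.normal(0.0, 0.01, shared_dim))
    local_globals = []
    for d in range(n_devices):
        # Stand-in local training: each device updates the masked global
        # part and fine-tunes its own personalized part on its own data.
        local_globals.append((global_part + rng.normal(0.0, 0.1, shared_dim)) * mask)
        personal_parts[d] += rng.normal(0.0, 0.1, personal_dim)
    # Only the pruned global part is averaged across devices; the
    # personalized parts stay on their respective devices.
    global_part = np.mean(local_globals, axis=0)

print("Shared (pruned) part:", np.round(global_part, 3))
print("Device-0 personalized part:", np.round(personal_parts[0], 3))
```

The point of the sketch is the communication pattern: only the masked global part is exchanged and averaged across devices, while each personalized part remains local to its device.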