Comparative assessment of federated and centralized machine learning
- URL: http://arxiv.org/abs/2202.01529v1
- Date: Thu, 3 Feb 2022 11:20:47 GMT
- Title: Comparative assessment of federated and centralized machine learning
- Authors: Ibrahim Abdul Majeed, Sagar Kaushik, Aniruddha Bardhan, Venkata Siva
Kumar Tadi, Hwang-Ki Min, Karthikeyan Kumaraguru, Rajasekhara Duvvuru Muni
- Abstract summary: Federated Learning (FL) is a privacy-preserving machine learning scheme, where training happens on data federated across devices.
In this paper, we discuss the various factors that affect federated learning training due to the non-IID distributed nature of the data.
We show that federated learning does have a cost advantage when the models to be trained are not overly large.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a privacy-preserving machine learning
scheme in which training happens on data federated across devices; the data
never leaves those devices, sustaining user privacy. This is ensured by
sending the untrained or partially trained model directly to the individual
devices, training it locally "on-device" on the device-owned data, and having
the server aggregate all the partially trained model learnings to update a
global model. Although almost all model learning schemes in the federated
learning setup use gradient descent, the non-IID nature of data availability
introduces certain characteristic differences that affect training in
comparison to centralized schemes. In this paper, we discuss the various
factors that affect federated learning training due to the non-IID
distributed nature of the data, as well as the inherent differences between
the federated learning approach and typical centralized gradient descent
techniques. We empirically demonstrate the effect of the number of samples
per device and of the distribution of output labels on federated learning.
In addition to the privacy advantage we seek through federated learning, we
also study whether there is a cost advantage in using federated learning
frameworks. We show that federated learning does have a cost advantage when
the models to be trained are not overly large. All in all, we present the
need for careful model design for both performance and cost.
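The training loop the abstract describes is, in most FL frameworks, the federated averaging (FedAvg) pattern: each round the server broadcasts the current global weights, the devices run local gradient descent on their own data, and the server combines the returned weights, typically weighting each device by its sample count, which is one reason the number of samples per device matters. Below is a minimal sketch of one such round; the linear model, client sizes, and hyperparameters are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

def local_train(weights, X, y, lr=0.1, epochs=5):
    # On-device training: plain gradient descent on a linear
    # least-squares model (an illustrative stand-in for any model).
    w = weights.copy()
    for _ in range(epochs):
        grad = 2 * X.T @ (X @ w - y) / len(y)  # MSE gradient
        w -= lr * grad
    return w

def fedavg_round(global_w, clients):
    # Server side: aggregate the locally trained weights, weighting
    # each client by its sample count, so uneven samples per device
    # directly skew the global update.
    total = sum(len(y) for _, y in clients)
    return sum(local_train(global_w, X, y) * (len(y) / total)
               for X, y in clients)

# Toy non-IID setup: three devices with very different sample counts.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for n in (5, 50, 200):
    X = rng.normal(size=(n, 2))
    clients.append((X, X @ true_w + 0.01 * rng.normal(size=n)))

w = np.zeros(2)
for _ in range(20):  # communication rounds
    w = fedavg_round(w, clients)
print(w)  # converges toward true_w
```

Only model weights cross the network in this pattern; the raw data stays on the devices, which is the privacy property the abstract refers to.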
Related papers
- MultiConfederated Learning: Inclusive Non-IID Data handling with Decentralized Federated Learning [1.2726316791083532]
Federated Learning (FL) has emerged as a prominent privacy-preserving technique for enabling use cases like confidential clinical machine learning.
FL operates by aggregating models trained by the remote devices that own the data.
We propose MultiConfederated Learning: a decentralized FL framework which is designed to handle non-IID data.
arXiv Detail & Related papers (2024-04-20T16:38:26Z)
- Federated Bayesian Network Ensembles [3.24530181403525]
Federated learning allows us to run machine learning algorithms on decentralized data when data sharing is not permitted due to privacy concerns.
We show that FBNE is a potentially useful tool within the federated learning toolbox.
We discuss the advantages and disadvantages of this approach in terms of time complexity, model accuracy, privacy protection, and model interpretability.
arXiv Detail & Related papers (2024-02-19T13:52:37Z)
- Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device); a sketch contrasting the two appears after this list.
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z)
- Federated Learning and Meta Learning: Approaches, Applications, and Directions [94.68423258028285]
In this tutorial, we present a comprehensive review of FL, meta learning, and federated meta learning (FedMeta).
Unlike other tutorial papers, our objective is to explore how FL, meta learning, and FedMeta methodologies can be designed, optimized, and evolved, and their applications over wireless networks.
arXiv Detail & Related papers (2022-10-24T10:59:29Z)
- Federated Self-Supervised Learning in Heterogeneous Settings: Limits of a Baseline Approach on HAR [0.5039813366558306]
We show that a standard lightweight autoencoder with standard Federated Averaging fails to learn a robust representation for Human Activity Recognition.
These findings advocate for a more intensive research effort in Federated Self-Supervised Learning.
arXiv Detail & Related papers (2022-07-17T14:15:45Z)
- Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate, but also more certifiably robust models.
arXiv Detail & Related papers (2022-06-06T12:10:53Z)
- FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach that enables a shared server model to learn by aggregating parameter updates computed locally on the training data of spatially distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as medical healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z)
- DQRE-SCnet: A novel hybrid approach for selecting users in Federated Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering [1.174402845822043]
Machine learning models based on sensitive real-world data promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more.
In many applications, participants benefit from collecting their own private data sets, training detailed machine learning models on the real data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated Learning allows multiple parties to jointly train a machine learning model without any user revealing their local data to a central server.
arXiv Detail & Related papers (2021-11-07T15:14:29Z)
- Constrained Differentially Private Federated Learning for Low-bandwidth Devices [1.1470070927586016]
This paper presents a novel privacy-preserving federated learning scheme.
It provides theoretical privacy guarantees, as it is based on Differential Privacy.
It reduces the upstream and downstream bandwidth by up to 99.9% compared to standard federated learning.
arXiv Detail & Related papers (2021-02-27T22:25:06Z)
- WAFFLe: Weight Anonymized Factorization for Federated Learning [88.44939168851721]
In domains where data are sensitive or private, there is great value in methods that can learn in a distributed manner without the data ever leaving the local devices.
We propose Weight Anonymized Factorization for Federated Learning (WAFFLe), an approach that combines the Indian Buffet Process with a shared dictionary of weight factors for neural networks.
arXiv Detail & Related papers (2020-08-13T04:26:31Z)
- Multi-Center Federated Learning [62.57229809407692]
This paper proposes a novel multi-center aggregation mechanism for federated learning.
It learns multiple global models from the non-IID user data and simultaneously derives the optimal matching between users and centers.
Our experimental results on benchmark datasets show that our method outperforms several popular federated learning methods.
arXiv Detail & Related papers (2020-05-03T09:14:31Z)
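As a companion to the "Scalable Collaborative Learning via Representation Sharing" entry above: the operative difference between FL and SL is what leaves the client. In FL the locally trained weights are released for aggregation; in SL only the activations at a chosen cut layer (the "smashed data") go up, and the gradient at that cut comes back. The sketch below illustrates that protocol with a hypothetical two-layer network and a cut point of our own choosing; it is not the architecture from any of the listed papers.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical split: the client keeps the first layer, the server the head.
W_client = rng.normal(scale=0.1, size=(4, 8))   # stays on the device
W_server = rng.normal(scale=0.1, size=(8, 1))   # stays on the server

def client_forward(X, W_client):
    # The client computes up to the cut layer; only these activations
    # ("smashed data") leave the device -- never the raw inputs X.
    return np.maximum(X @ W_client, 0.0)        # ReLU cut-layer output

def server_step(smashed, y, W_server, lr=0.05):
    # The server finishes the forward pass, updates its head, and
    # returns the loss gradient at the cut layer to the client.
    pred = smashed @ W_server
    d_pred = 2 * (pred - y) / len(y)            # MSE gradient
    grad_cut = d_pred @ W_server.T              # shipped back down
    W_server = W_server - lr * smashed.T @ d_pred
    return grad_cut, W_server

def client_backward(X, smashed, grad_cut, W_client, lr=0.05):
    # The client finishes backpropagation through its own layer.
    grad_cut = grad_cut * (smashed > 0)         # ReLU derivative
    return W_client - lr * X.T @ grad_cut

# One round-trip training step on toy data.
X = rng.normal(size=(32, 4))
y = rng.normal(size=(32, 1))
smashed = client_forward(X, W_client)                       # client -> server
grad_cut, W_server = server_step(smashed, y, W_server)      # server update
W_client = client_backward(X, smashed, grad_cut, W_client)  # server -> client
```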
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed papers (including all information) and is not responsible for any consequences of their use.