Federated Learning Versus Classical Machine Learning: A Convergence
Comparison
- URL: http://arxiv.org/abs/2107.10976v1
- Date: Thu, 22 Jul 2021 17:14:35 GMT
- Title: Federated Learning Versus Classical Machine Learning: A Convergence
Comparison
- Authors: Muhammad Asad, Ahmed Moustafa, and Takayuki Ito
- Abstract summary: In the past few decades, machine learning has revolutionized data processing for large-scale applications.
In particular, federated learning allows participants to collaboratively train local models on their own data without revealing sensitive information to the central cloud server.
The simulation results demonstrate that federated learning achieves faster convergence within a limited number of communication rounds while maintaining participants' anonymity.
- Score: 7.730827805192975
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In the past few decades, machine learning has revolutionized data processing
for large-scale applications. Simultaneously, increasing privacy threats in
trending applications led to the redesign of classical data training models. In
particular, classical machine learning involves centralized data training,
where the data is gathered and the entire training process is executed at the
central server. Despite its strong convergence, this centralized training
poses several privacy threats to participants' data once it is shared with
the central cloud server. To this end, federated learning has gained
significant importance as a distributed training paradigm. In particular,
federated learning allows participants to collaboratively train local models
on their own data without revealing sensitive information to the central
cloud server. In this
paper, we perform a convergence comparison between classical machine learning
and federated learning on two publicly available datasets, namely, MNIST for
logistic regression and CIFAR-10 for image classification.
The simulation results demonstrate that federated learning achieves faster
convergence within a limited number of communication rounds while maintaining
participants' anonymity. We hope that this research demonstrates these
benefits and helps federated learning to be adopted more widely.
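The paper reports results only; as an illustration of the comparison it describes, here is a minimal sketch of centralized training versus FedAvg-style federated training of a logistic-regression classifier. This is a reconstruction under stated assumptions, not the authors' code: synthetic data stands in for MNIST, the client split is IID, and all hyperparameters and function names are illustrative.

```python
# Minimal sketch: centralized vs. FedAvg-style training of a multinomial
# logistic-regression classifier. Synthetic data stands in for MNIST;
# hyperparameters are illustrative, not the paper's settings.
import numpy as np

rng = np.random.default_rng(0)
N, D, C, K = 6000, 64, 10, 10            # samples, features, classes, clients

# Synthetic, roughly separable data as a stand-in for MNIST.
W_true = rng.normal(size=(D, C))
X = rng.normal(size=(N, D))
y = (X @ W_true + 0.5 * rng.normal(size=(N, C))).argmax(axis=1)

def grad(W, Xb, yb):
    """Gradient of the softmax cross-entropy loss with respect to W."""
    logits = Xb @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(yb)), yb] -= 1.0
    return Xb.T @ p / len(yb)

def accuracy(W):
    return ((X @ W).argmax(axis=1) == y).mean()

lr, local_steps, rounds = 0.5, 5, 20

# Centralized baseline: all data is gathered at one server.
W = np.zeros((D, C))
for _ in range(rounds * local_steps):
    W -= lr * grad(W, X, y)
print(f"centralized accuracy: {accuracy(W):.3f}")

# FedAvg: K clients run local SGD; the server averages weights once per
# communication round, so raw data never leaves a client.
shards = np.array_split(rng.permutation(N), K)   # IID split across clients
W = np.zeros((D, C))
for _ in range(rounds):
    local_models = []
    for idx in shards:
        Wk = W.copy()
        for _ in range(local_steps):
            Wk -= lr * grad(Wk, X[idx], y[idx])
        local_models.append(Wk)
    W = np.mean(local_models, axis=0)            # server-side aggregation
print(f"federated accuracy after {rounds} rounds: {accuracy(W):.3f}")
```

Each federated client performs the same total number of gradient steps (rounds x local_steps) as the centralized baseline, so the comparison isolates the effect of communication rounds and aggregation, which is the rounds-to-convergence question the abstract raises.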
Related papers
- Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Benchmarking FedAvg and FedCurv for Image Classification Tasks [1.376408511310322]
This paper focuses on the problem of statistical heterogeneity of the data in the same federated network.
Several Federated Learning algorithms, such as FedAvg, FedProx, and Federated Curvature (FedCurv), have already been proposed.
As a side product of this work, we release the non-IID versions of the datasets we used to facilitate further comparisons by the FL community.
arXiv Detail & Related papers (2023-03-31T10:13:01Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation); a minimal sketch contrasting the two protocols appears after this list.
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - Certified Robustness in Federated Learning [54.03574895808258]
We study the interplay between federated training, personalization, and certified robustness.
We find that the simple federated averaging technique is effective in building not only more accurate but also more certifiably robust models.
arXiv Detail & Related papers (2022-06-06T12:10:53Z) - FedILC: Weighted Geometric Mean and Invariant Gradient Covariance for
Federated Learning on Non-IID Data [69.0785021613868]
Federated learning is a distributed machine learning approach which enables a shared server model to learn by aggregating parameter updates computed locally on the training data of spatially-distributed client silos.
We propose the Federated Invariant Learning Consistency (FedILC) approach, which leverages the gradient covariance and the geometric mean of Hessians to capture both inter-silo and intra-silo consistencies.
This is relevant to various fields such as healthcare, computer vision, and the Internet of Things (IoT).
arXiv Detail & Related papers (2022-05-19T03:32:03Z) - Asynchronous Collaborative Learning Across Data Silos [9.094748832034746]
We propose a framework to enable asynchronous collaborative training of machine learning models across data silos.
This allows data science teams to collaboratively train a machine learning model without sharing data with one another.
arXiv Detail & Related papers (2022-03-23T18:00:19Z) - Comparative assessment of federated and centralized machine learning [0.0]
Federated Learning (FL) is a privacy-preserving machine learning scheme, where training happens with data federated across devices.
In this paper, we discuss the various factors that affect federated learning training due to the non-IID distributed nature of the data.
We show that federated learning does have a cost advantage when the models to be trained are not especially large.
arXiv Detail & Related papers (2022-02-03T11:20:47Z) - DQRE-SCnet: A novel hybrid approach for selecting users in Federated
Learning with Deep-Q-Reinforcement Learning based on Spectral Clustering [1.174402845822043]
Machine learning models trained on sensitive real-world data promise advances in areas ranging from medical screening to disease outbreaks, agriculture, industry, defense science, and more.
In many applications, participants benefit from collecting their own private datasets, training detailed machine learning models on real data, and sharing the benefits of using these models.
Due to existing privacy and security concerns, most people avoid sharing sensitive data for training. Federated Learning allows various parties to jointly train a machine learning algorithm without each user revealing their local data to a central server.
arXiv Detail & Related papers (2021-11-07T15:14:29Z) - FedOCR: Communication-Efficient Federated Learning for Scene Text
Recognition [76.26472513160425]
We study how to make use of decentralized datasets for training a robust scene text recognizer.
To make FedOCR suitable for deployment on end devices, we make two improvements: using lightweight models and hashing techniques.
arXiv Detail & Related papers (2020-07-22T14:30:50Z) - Concentrated Differentially Private and Utility Preserving Federated
Learning [24.239992194656164]
Federated learning is a machine learning setting where a set of edge devices collaboratively train a model under the orchestration of a central server.
In this paper, we develop a federated learning approach that addresses the privacy challenge without much degradation in model utility.
We provide a tight end-to-end privacy guarantee of our approach and analyze its theoretical convergence rates.
arXiv Detail & Related papers (2020-03-30T19:20:42Z) - Federated Residual Learning [53.77128418049985]
We study a new form of federated learning where the clients train personalized local models and make predictions jointly with the server-side shared model.
Using this new federated learning framework, the complexity of the central shared model can be minimized while still gaining all the performance benefits that joint training provides.
arXiv Detail & Related papers (2020-03-28T19:55:24Z)
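The sketch referenced in the Scalable Collaborative Learning via Representation Sharing entry above contrasts the two protocols it summarizes. This is an illustration under stated assumptions, not that paper's method: a two-layer network split at a tanh cut layer stands in for SL, a single-layer model with weight averaging stands in for FL, and all shapes, names, and hyperparameters are invented for the example.

```python
# FL vs. SL in miniature: FL clients ship whole model weights to the
# server for averaging; SL clients ship cut-layer activations ("smashed
# data") and receive the corresponding gradients back.
import numpy as np

rng = np.random.default_rng(1)
D, H, C = 32, 16, 4                       # input, cut-layer, output sizes
X = rng.normal(size=(200, D))
y = rng.integers(0, C, size=200)
shards = np.array_split(rng.permutation(200), 4)   # 4 clients

def softmax_grad(logits, yb):
    """Gradient of softmax cross-entropy with respect to the logits."""
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    p[np.arange(len(yb)), yb] -= 1.0
    return p / len(yb)

# Federated learning: each client takes a local step on a full model,
# then releases only its weights; the server averages them.
def fl_round(global_W, lr=0.1):
    updates = []
    for idx in shards:
        W = global_W.copy()
        W -= lr * X[idx].T @ softmax_grad(X[idx] @ W, y[idx])
        updates.append(W)                 # raw data never leaves the client
    return np.mean(updates, axis=0)       # server-side aggregation

# Split learning: the client holds the layer below the cut, the server
# the layer above; only activations and their gradients cross the wire.
def sl_step(W_client, W_server, idx, lr=0.1):
    smashed = np.tanh(X[idx] @ W_client)             # client -> server
    g_logits = softmax_grad(smashed @ W_server, y[idx])
    g_smashed = g_logits @ W_server.T                # server -> client
    W_server -= lr * smashed.T @ g_logits
    W_client -= lr * X[idx].T @ (g_smashed * (1.0 - smashed**2))  # tanh'
    return W_client, W_server

W_fl = np.zeros((D, C))
W_c = 0.1 * rng.normal(size=(D, H))
W_s = 0.1 * rng.normal(size=(H, C))
for _ in range(20):
    W_fl = fl_round(W_fl)                 # one FL communication round
    for idx in shards:                    # SL clients take turns
        W_c, W_s = sl_step(W_c, W_s, idx)
```

The privacy trade-off is visible in what crosses the wire: FL exposes model weights but never per-batch activations, while SL exposes per-example smashed data on every forward pass.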