HAFLO: GPU-Based Acceleration for Federated Logistic Regression
- URL: http://arxiv.org/abs/2107.13797v1
- Date: Thu, 29 Jul 2021 07:46:49 GMT
- Title: HAFLO: GPU-Based Acceleration for Federated Logistic Regression
- Authors: Xiaodian Cheng, Wanhang Lu, Xinyang Huang, Shuihai Hu and Kai Chen
- Abstract summary: In this paper, we propose HAFLO, a GPU-based solution to improve the performance of federated logistic regression (FLR).
The core idea of HAFLO is to summarize a set of performance-critical homomorphic operators used by FLR and accelerate the execution of these operators through a joint optimization of storage, IO, and computation.
Preliminary results show that our acceleration on FATE, a popular FL framework, achieves a 49.9$\times$ speedup for heterogeneous LR and 88.4$\times$ for homogeneous LR.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In recent years, federated learning (FL) has been widely applied for
supporting decentralized collaborative learning scenarios. Among existing FL
models, federated logistic regression (FLR) is a widely used statistical model
and has been used in various industries. To ensure data security and user
privacy, FLR leverages homomorphic encryption (HE) to protect the exchanged
data among different collaborative parties. However, HE introduces significant
computational overhead (i.e., the cost of data encryption/decryption and
calculation over encrypted data), which eventually becomes the performance
bottleneck of the whole system. In this paper, we propose HAFLO, a GPU-based
solution to improve the performance of FLR. The core idea of HAFLO is to
summarize a set of performance-critical homomorphic operators (HO) used by FLR
and accelerate the execution of these operators through a joint optimization of
storage, IO, and computation. The preliminary results show that our
acceleration on FATE, a popular FL framework, achieves a 49.9$\times$ speedup
for heterogeneous LR and 88.4$\times$ for homogeneous LR.
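To make the operator set concrete, below is a minimal sketch of the Paillier homomorphic operators that dominate FLR cost (encryption, ciphertext-ciphertext addition, and plaintext-ciphertext multiplication), written with the open-source python-paillier (`phe`) library. The two-party split, variable names, and data are illustrative assumptions for this summary, not HAFLO's actual GPU kernels or FATE's API.

```python
# Sketch of the performance-critical homomorphic operators (HO) in FLR,
# using python-paillier ("phe") on CPU. Illustrative only: HAFLO batches
# these same operators on GPU with joint storage/IO/compute optimization.
import numpy as np
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=2048)

# One party holds per-sample residuals d_i = y_i - sigmoid(w.x_i) and
# shares them encrypted (HO: encrypt).
residuals = np.array([0.12, -0.40, 0.31])
enc_residuals = [public_key.encrypt(float(d)) for d in residuals]

# The other party combines the ciphertexts with its local features to
# form its encrypted gradient share, never seeing plaintext residuals.
features = np.array([[1.0, 0.5],
                     [0.2, -1.3],
                     [0.7, 0.9]])
n_samples, n_features = features.shape

enc_gradient = []
for j in range(n_features):
    # HO: plaintext * ciphertext, then ciphertext + ciphertext.
    acc = enc_residuals[0] * float(features[0, j])
    for i in range(1, n_samples):
        acc = acc + enc_residuals[i] * float(features[i, j])
    enc_gradient.append(acc)

# The key holder decrypts the aggregated gradient share (HO: decrypt).
gradient = np.array([private_key.decrypt(g) for g in enc_gradient]) / n_samples
print(gradient)
```

Every FLR iteration repeats these element-wise operators over large batches of ciphertexts, so executing them serially on CPU (as above) is exactly the bottleneck that HAFLO's jointly optimized GPU operators target.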
Related papers
- Digital Twin-Assisted Federated Learning with Blockchain in Multi-tier Computing Systems [67.14406100332671]
In Industry 4.0 systems, resource-constrained edge devices engage in frequent data interactions.
This paper proposes a digital twin (DT)-assisted federated learning (FL) scheme.
The efficacy of our proposed cooperative interference-based FL process has been verified through numerical analysis.
arXiv Detail & Related papers (2024-11-04T17:48:02Z)
- Can We Theoretically Quantify the Impacts of Local Updates on the Generalization Performance of Federated Learning? [50.03434441234569]
Federated Learning (FL) has gained significant popularity due to its effectiveness in training machine learning models across diverse sites without requiring direct data sharing.
While various algorithms have shown that FL with local updates is a communication-efficient distributed learning framework, the generalization performance of FL with local updates has received comparatively less attention.
arXiv Detail & Related papers (2024-09-05T19:00:18Z)
- Lightweight Industrial Cohorted Federated Learning for Heterogeneous Assets [0.0]
Federated Learning (FL) is the most widely adopted collaborative learning approach for training decentralized Machine Learning (ML) models.
However, because FL tasks generally take a high degree of data similarity or homogeneity for granted, FL is still not specifically designed for the industrial setting.
We propose a Lightweight Industrial Cohorted FL (LICFL) algorithm that uses model parameters for cohorting without any additional on-edge (client-level) computations and communications.
arXiv Detail & Related papers (2024-07-25T12:48:56Z)
- FedLPS: Heterogeneous Federated Learning for Multiple Tasks with Local Parameter Sharing [14.938531944702193]
We propose Federated Learning with Local Heterogeneous Sharing (FedLPS)
FedLPS uses transfer learning to facilitate the deployment of multiple tasks on a single device by dividing the local model into a shareable encoder and task-specific encoders.
FedLPS significantly outperforms the state-of-the-art (SOTA) FL frameworks by up to 4.88% and reduces the computational resource consumption by 21.3%.
arXiv Detail & Related papers (2024-02-13T16:30:30Z)
- FS-Real: Towards Real-World Cross-Device Federated Learning [60.91678132132229]
Federated Learning (FL) aims to train high-quality models in collaboration with distributed clients while not uploading their local data.
There is still a considerable gap between the flourishing FL research and real-world scenarios, mainly caused by the characteristics of heterogeneous devices and their scale.
We propose an efficient and scalable prototyping system for real-world cross-device FL, FS-Real.
arXiv Detail & Related papers (2023-03-23T15:37:17Z)
- FedHiSyn: A Hierarchical Synchronous Federated Learning Framework for Resource and Data Heterogeneity [56.82825745165945]
Federated Learning (FL) enables training a global model without sharing the decentralized raw data stored on multiple devices to protect data privacy.
We propose a hierarchical synchronous FL framework, i.e., FedHiSyn, to tackle the problems of straggler effects and outdated models.
We evaluate the proposed framework based on MNIST, EMNIST, CIFAR10 and CIFAR100 datasets and diverse heterogeneous settings of devices.
arXiv Detail & Related papers (2022-06-21T17:23:06Z)
- Federated Learning on Heterogeneous and Long-Tailed Data via Classifier Re-Training with Federated Features [24.679535905451758]
Federated learning (FL) provides a privacy-preserving solution for distributed machine learning tasks.
One challenging problem that severely damages the performance of FL models is the co-occurrence of data heterogeneity and long-tail distribution.
We propose a novel privacy-preserving FL method for heterogeneous and long-tailed data via Classifier Re-training with Federated Features (CReFF).
arXiv Detail & Related papers (2022-04-28T10:35:11Z)
- Local Learning Matters: Rethinking Data Heterogeneity in Federated Learning [61.488646649045215]
Federated learning (FL) is a promising strategy for performing privacy-preserving, distributed learning with a network of clients (i.e., edge devices)
arXiv Detail & Related papers (2021-11-28T19:03:39Z)
- FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo Federated Learning [9.177048551836897]
Homomorphic encryption (HE) is a promising privacy-preserving technique for cross-silo federated learning (FL)
arXiv Detail & Related papers (2021-09-02T02:36:04Z)
- Secure Neuroimaging Analysis using Federated Learning with Homomorphic Encryption [14.269757725951882]
Federated learning (FL) enables distributed computation of machine learning models over disparate, remote data sources.
Recent membership attacks show that private or sensitive personal data can sometimes be leaked or inferred when model parameters or summary statistics are shared with a central site.
We propose a framework for secure FL using fully-homomorphic encryption (FHE)
arXiv Detail & Related papers (2021-08-07T12:15:52Z)
- FedML: A Research Library and Benchmark for Federated Machine Learning [55.09054608875831]
Federated learning (FL) is a rapidly growing research field in machine learning.
Existing FL libraries cannot adequately support diverse algorithmic development.
We introduce FedML, an open research library and benchmark to facilitate FL algorithm development and fair performance comparison.
arXiv Detail & Related papers (2020-07-27T13:02:08Z)