FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo
Federated Learning
- URL: http://arxiv.org/abs/2109.00675v1
- Date: Thu, 2 Sep 2021 02:36:04 GMT
- Title: FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo
Federated Learning
- Authors: Zhifeng Jiang, Wei Wang, Yang Liu
- Abstract summary: Homomorphic encryption (HE) is a promising privacy-preserving technique for cross-silo federated learning (FL).
- Score: 9.177048551836897
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Homomorphic encryption (HE) is a promising privacy-preserving technique for
cross-silo federated learning (FL), where organizations perform collaborative
model training on decentralized data. Despite the strong privacy guarantee,
general HE schemes result in significant computation and communication
overhead. Prior works employ batch encryption to address this problem, but it
is still suboptimal in mitigating communication overhead and is incompatible
with sparsification techniques.
In this paper, we propose FLASHE, an HE scheme tailored for cross-silo FL. To
capture the minimum requirements of security and functionality, FLASHE drops
the asymmetric-key design and only involves modular addition operations with
random numbers. Depending on whether sparsification techniques are to be
accommodated, FLASHE optimizes computation efficiency with different approaches. We
have implemented FLASHE as a pluggable module atop FATE, an industrial platform
for cross-silo FL. Compared to plaintext training, FLASHE slightly increases
the training time by $\leq6\%$, with no communication overhead.
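To give a flavor of what a symmetric, addition-only scheme can look like, here is a minimal Python sketch of additive masking: each client adds pairwise pseudorandom masks modulo a fixed integer, and the masks cancel when the server sums all ciphertexts. This is a hedged illustration, not FLASHE's actual construction; all names (prf, pair_seeds, MODULUS) are hypothetical.

```python
import hashlib

MODULUS = 2 ** 32  # assumed quantization modulus (hypothetical choice)

def prf(seed: bytes, index: int) -> int:
    """Pseudorandom mask for one coordinate, derived from a pairwise seed."""
    digest = hashlib.sha256(seed + index.to_bytes(8, "big")).digest()
    return int.from_bytes(digest[:4], "big")

def encrypt(plain, my_id, peer_ids, pair_seeds):
    """Mask each coordinate with +PRF toward higher-id peers and -PRF toward
    lower-id peers; summing every client's ciphertext cancels all masks."""
    cipher = []
    for i, m in enumerate(plain):
        mask = sum((1 if my_id < p else -1) * prf(pair_seeds[p], i)
                   for p in peer_ids)
        cipher.append((m + mask) % MODULUS)
    return cipher

def aggregate(ciphers):
    """Server side: plain modular addition, no key material involved."""
    return [sum(col) % MODULUS for col in zip(*ciphers)]

# Two-client round trip: masks cancel and the plaintext sum is recovered.
s = b"pairwise-shared-seed"
c0 = encrypt([5, 7], my_id=0, peer_ids=[1], pair_seeds={1: s})
c1 = encrypt([3, 4], my_id=1, peer_ids=[0], pair_seeds={0: s})
assert aggregate([c0, c1]) == [8, 11]
```

Note how both encryption and aggregation are a single modular addition per coordinate, which is the source of the scheme's low computation and zero ciphertext expansion.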
Related papers
- QuanCrypt-FL: Quantized Homomorphic Encryption with Pruning for Secure Federated Learning [0.48342038441006796]
We propose QuanCrypt-FL, a novel algorithm that combines low-bit quantization and pruning techniques to enhance protection against attacks.
We validate our approach on MNIST, CIFAR-10, and CIFAR-100 datasets, demonstrating superior performance compared to state-of-the-art methods.
QuanCrypt-FL achieves up to 9x faster encryption, 16x faster decryption, and 1.5x faster inference compared to BatchCrypt, with training time reduced by up to 3x.
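To make the prune-then-quantize idea above concrete, here is a hedged sketch under assumed details (magnitude pruning, uniform symmetric quantization); it is not QuanCrypt-FL's published algorithm:

```python
import numpy as np

def prune_and_quantize(update: np.ndarray, keep_ratio: float = 0.1,
                       bits: int = 8):
    """Magnitude-prune an update, then map the survivors to low-bit signed
    integers so the HE layer only has to encrypt a small integer payload."""
    k = max(1, int(update.size * keep_ratio))
    idx = np.argsort(np.abs(update))[-k:]          # largest-magnitude entries
    values = update[idx]
    scale = max(np.abs(values).max(), 1e-12) / (2 ** (bits - 1) - 1)
    q = np.round(values / scale).astype(np.int32)  # low-bit integer codes
    return idx, q, scale                           # sparse quantized payload
```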
arXiv Detail & Related papers (2024-11-08T01:46:00Z)
- SpaFL: Communication-Efficient Federated Learning with Sparse Models and Low Computational Overhead [75.87007729801304]
SpaFL, a communication-efficient FL framework, is proposed to optimize sparse model structures with low computational overhead.
Experiments show that SpaFL improves accuracy while requiring far fewer communication and computing resources than sparse baselines.
arXiv Detail & Related papers (2024-06-01T13:10:35Z)
- Communication Efficient ConFederated Learning: An Event-Triggered SAGA Approach [67.27031215756121]
Federated learning (FL) is a machine learning paradigm that trains a model without gathering the local data held by the various data sources.
Standard FL, which employs a single server, can only support a limited number of users, leading to degraded learning capability.
In this work, we consider a multi-server FL framework, referred to as Confederated Learning (CFL), to accommodate a larger number of users.
arXiv Detail & Related papers (2024-02-28T03:27:10Z)
- Federated Learning is Better with Non-Homomorphic Encryption [1.4110007887109783]
Federated Learning (FL) offers a paradigm that empowers distributed AI model training without collecting raw data.
One popular methodology is to employ Homomorphic Encryption (HE).
We propose an innovative framework that synergizes permutation-based compressors with Classical Cryptography.
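A common permutation-based compressor works roughly as sketched below. Assuming a PermK-style design (my assumption, not a detail stated in the summary), a shared random permutation partitions the coordinates across clients, and each client transmits only its own scaled block:

```python
import numpy as np

def perm_compress(grad: np.ndarray, client_id: int, n_clients: int,
                  shared_seed: int) -> np.ndarray:
    """PermK-style compressor (assumed design): keep only this client's block
    of a shared random permutation, scaled by n_clients. The blocks are
    disjoint and cover every coordinate, so each coordinate of the averaged
    updates is reported by exactly one randomly assigned client."""
    rng = np.random.default_rng(shared_seed)  # identical seed on all clients
    perm = rng.permutation(grad.size)
    block = np.array_split(perm, n_clients)[client_id]
    out = np.zeros_like(grad)
    out[block] = n_clients * grad[block]
    return out
```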
arXiv Detail & Related papers (2023-12-04T17:37:41Z)
- Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z)
- Effect of Homomorphic Encryption on the Performance of Training Federated Learning Generative Adversarial Networks [10.030986278376567]
A Generative Adversarial Network (GAN) is a deep-learning generative model in the field of Machine Learning (ML).
In certain fields, such as medicine, the training data may be hospital patient records that are stored across different hospitals.
This paper will focus on the performance loss of training an FL-GAN with three different types of Homomorphic Encryption.
arXiv Detail & Related papers (2022-07-01T08:35:10Z)
- Desirable Companion for Vertical Federated Learning: New Zeroth-Order Gradient Based Algorithm [140.25480610981504]
A complete list of metrics to evaluate VFL algorithms should include model applicability, privacy, communication, and computation efficiency.
We propose a novel VFL framework with black-box scalability.
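The generic primitive behind such gradient-free (zeroth-order) methods can be sketched as follows; this is the standard two-point estimator, not the paper's exact algorithm:

```python
import numpy as np

def zo_gradient(f, x: np.ndarray, mu: float = 1e-4, n_samples: int = 20):
    """Estimate the gradient of a black-box loss f at x using only function
    evaluations: average finite differences along random Gaussian directions."""
    fx = f(x)                               # reuse the base evaluation
    g = np.zeros_like(x)
    for _ in range(n_samples):
        u = np.random.randn(x.size)         # random direction
        g += (f(x + mu * u) - fx) / mu * u  # directional finite difference
    return g / n_samples
```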
arXiv Detail & Related papers (2022-03-19T13:55:47Z)
- HAFLO: GPU-Based Acceleration for Federated Logistic Regression [5.866156163019742]
In this paper, we propose HAFLO, a GPU-based solution to improve the performance of federated logistic regression (FLR).
The core idea of HAFLO is to summarize a set of performance-critical homomorphic operators used by FLR and accelerate the execution of these operators through a joint optimization of storage, IO, and computation.
Preliminary results show that our acceleration on FATE, a popular FL framework, achieves a 49.9$\times$ speedup for heterogeneous LR and 88.4$\times$ for homogeneous LR.
arXiv Detail & Related papers (2021-07-29T07:46:49Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL): Performance Analysis and Resource Allocation [119.19061102064497]
We propose a decentralized FL framework by integrating blockchain into FL, namely, blockchain assisted decentralized federated learning (BLADE-FL).
In a round of the proposed BLADE-FL, each client broadcasts its trained model to other clients, competes to generate a block based on the received models, and then aggregates the models from the generated block before its local training of the next round.
We explore the impact of lazy clients on the learning performance of BLADE-FL, and characterize the relationship among the optimal K, the learning parameters, and the proportion of lazy clients.
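The round structure described above lends itself to a toy simulation. The sketch below reduces models to scalars and mining to a random winner, so it is illustrative only, not the paper's protocol details:

```python
import random

def local_train(model: float) -> float:
    """Toy local step: one gradient-descent update on f(w) = w^2 / 2."""
    return model - 0.1 * model

def blade_fl_round(models: list) -> list:
    trained = [local_train(m) for m in models]  # 1. local training
    # 2. every client broadcasts its model; all clients now hold `trained`
    proposer = random.randrange(len(trained))   # 3. mining race picks a winner
    block = {"proposer": proposer,              # 4. the winning block packages
             "models": list(trained)}           #    the received models
    agg = sum(block["models"]) / len(block["models"])  # 5. aggregate the block
    return [agg] * len(models)                  # 6. shared start of next round
```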
arXiv Detail & Related papers (2021-01-18T07:19:08Z)
- Blockchain Assisted Decentralized Federated Learning (BLADE-FL) with Lazy Clients [124.48732110742623]
We propose a novel framework by integrating blockchain into Federated Learning (FL).
BLADE-FL performs well in terms of privacy preservation, tamper resistance, and effective cooperation of learning.
However, it also gives rise to a new problem of training deficiency, caused by lazy clients who plagiarize others' trained models and add artificial noise to conceal their cheating behavior.
arXiv Detail & Related papers (2020-12-02T12:18:27Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.