Federated Learning: An approach with Hybrid Homomorphic Encryption
- URL: http://arxiv.org/abs/2509.03427v1
- Date: Wed, 03 Sep 2025 15:58:22 GMT
- Title: Federated Learning: An approach with Hybrid Homomorphic Encryption
- Authors: Pedro Correia, Ivan Silva, Ivone Amorim, Eva Maia, Isabel Praça
- Abstract summary: Federated Learning (FL) is a distributed machine learning approach that promises privacy by keeping the data on the device. We propose the first Hybrid Homomorphic Encryption (HHE) framework for FL that pairs the PASTA symmetric cipher with the BFV FHE scheme. A prototype implementation, developed on top of the Flower FL framework, shows that on an independently and identically distributed (IID) MNIST dataset with 12 clients and 10 training rounds, the proposed HHE system achieves 97.6% accuracy.
- Score: 1.7181078670359513
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) is a distributed machine learning approach that promises privacy by keeping the data on the device. However, gradient reconstruction and membership-inference attacks show that model updates still leak information. Fully Homomorphic Encryption (FHE) can address those privacy concerns, but it suffers from ciphertext expansion and imposes prohibitive overhead on resource-constrained devices. We propose the first Hybrid Homomorphic Encryption (HHE) framework for FL that pairs the PASTA symmetric cipher with the BFV FHE scheme. Clients encrypt local model updates with PASTA and send both the lightweight ciphertexts and the PASTA key (itself BFV-encrypted) to the server, which performs a homomorphic evaluation of the decryption circuit of PASTA and aggregates the resulting BFV ciphertexts. A prototype implementation, developed on top of the Flower FL framework, shows that on an independently and identically distributed (IID) MNIST dataset with 12 clients and 10 training rounds, the proposed HHE system achieves 97.6% accuracy, just 1.3% below plaintext, while reducing client upload bandwidth by over 2,000x and cutting client runtime by 30% compared to a system based solely on the BFV FHE scheme. However, server computational cost increases by roughly 15621x for each client participating in the training phase, a challenge to be addressed in future work.
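The protocol flow described in the abstract can be sketched as a toy simulation. This is a schematic illustration only, not the paper's implementation: a seeded SHA-256 keystream stands in for the PASTA cipher, and a trivial additive wrapper class stands in for BFV ciphertexts (a real system would use an FHE library and an actual homomorphic evaluation of PASTA's decryption circuit). All names and parameters here are illustrative assumptions.

```python
# Schematic simulation of one HHE federated-learning round.
# Stand-ins (assumptions, not the paper's code): a hash-based keystream
# replaces PASTA; FHECiphertext is a trivial wrapper replacing BFV.
import hashlib

P = 2**16  # toy plaintext modulus

def keystream(key: bytes, length: int) -> list:
    """Deterministic keystream derived from the key (PASTA stand-in)."""
    out, counter = [], 0
    while len(out) < length:
        block = hashlib.sha256(key + counter.to_bytes(4, "big")).digest()
        out.extend(block)
        counter += 1
    return [x % P for x in out[:length]]

class FHECiphertext:
    """Stand-in for a BFV ciphertext: supports only the additive
    homomorphism that encrypted aggregation needs."""
    def __init__(self, slots):
        self.slots = list(slots)
    def __add__(self, other):
        return FHECiphertext((a + b) % P for a, b in zip(self.slots, other.slots))
    def __sub__(self, other):
        return FHECiphertext((a - b) % P for a, b in zip(self.slots, other.slots))

def client_encrypt(update, key):
    """Client side: cheap symmetric encryption, no ciphertext expansion."""
    ks = keystream(key, len(update))
    sym_ct = [(u + k) % P for u, k in zip(update, ks)]  # stream-cipher encryption
    enc_key = key  # in the real protocol this key is itself BFV-encrypted
    return sym_ct, enc_key

def server_aggregate(submissions, length):
    """Server side: homomorphically strip the symmetric layer, then aggregate."""
    total = FHECiphertext([0] * length)
    for sym_ct, enc_key in submissions:
        # Homomorphic evaluation of the symmetric decryption circuit:
        # convert the symmetric ciphertext into an FHE ciphertext of the update.
        enc_ks = FHECiphertext(keystream(enc_key, length))
        enc_update = FHECiphertext(sym_ct) - enc_ks
        total = total + enc_update  # aggregation under encryption
    return total

updates = [[5, 7, 9], [1, 2, 3], [10, 20, 30]]
subs = [client_encrypt(u, bytes([i])) for i, u in enumerate(updates)]
agg = server_aggregate(subs, 3)
print(agg.slots)  # -> [16, 29, 42], the element-wise sum of the updates
```

The bandwidth saving in the real scheme comes from the client step: the symmetric ciphertext is the same size as the plaintext update, so only the one-off encrypted key pays the BFV expansion cost; the server absorbs the expensive homomorphic keystream evaluation, matching the paper's reported server-side overhead.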
Related papers
- DictPFL: Efficient and Private Federated Learning on Encrypted Gradients [46.7448838842482]
We present DictPFL, a framework that achieves full gradient protection with minimal overhead. It encrypts every transmitted gradient while keeping non-transmitted parameters local, preserving privacy without heavy computation. Experiments show that DictPFL reduces communication cost by 402-748x and accelerates training by 28-65x compared to fully encrypted FL.
arXiv Detail & Related papers (2025-10-24T01:58:42Z) - FedBit: Accelerating Privacy-Preserving Federated Learning via Bit-Interleaved Packing and Cross-Layer Co-Design [2.255961793913651]
Federated learning (FL) with fully homomorphic encryption (FHE) effectively safeguards data privacy during model aggregation. FedBit is a hardware/software co-designed framework for the Brakerski-Fan-Vercauteren (BFV) scheme. FedBit employs bit-interleaved data packing to embed multiple model parameters into a single ciphertext coefficient.
arXiv Detail & Related papers (2025-09-27T03:58:16Z) - Secure Multi-Key Homomorphic Encryption with Application to Privacy-Preserving Federated Learning [10.862166653863571]
We identify a critical security vulnerability in the CDKS scheme when applied to multiparty secure computation tasks. We propose a new scheme, SMHE, which incorporates a novel masking mechanism into the multi-key BFV and CKKS frameworks. We implement a PPFL application using SMHE and demonstrate it provides significantly improved security with only a modest overhead in runtime evaluation.
arXiv Detail & Related papers (2025-06-25T03:28:25Z) - Efficient Privacy-Preserving Cross-Silo Federated Learning with Multi-Key Homomorphic Encryption [7.332140296779856]
Federated Learning (FL) is susceptible to privacy attacks. Recent studies combined Multi-Key Homomorphic Encryption (MKHE) and FL. We propose MASER, an efficient MKHE-based Privacy-Preserving FL framework.
arXiv Detail & Related papers (2025-05-20T18:08:15Z) - QuanCrypt-FL: Quantized Homomorphic Encryption with Pruning for Secure Federated Learning [0.48342038441006796]
We propose QuanCrypt-FL, a novel algorithm that combines low-bit quantization and pruning techniques to enhance protection against attacks.
We validate our approach on MNIST, CIFAR-10, and CIFAR-100 datasets, demonstrating superior performance compared to state-of-the-art methods.
QuanCrypt-FL achieves up to 9x faster encryption, 16x faster decryption, and 1.5x faster inference compared to BatchCrypt, with training time reduced by up to 3x.
arXiv Detail & Related papers (2024-11-08T01:46:00Z) - Coding-Based Hybrid Post-Quantum Cryptosystem for Non-Uniform Information [53.85237314348328]
We introduce a novel hybrid universal network coding cryptosystem (NU-HUNCC) for non-uniform messages.
We show that NU-HUNCC is information-theoretic individually secured against an eavesdropper with access to any subset of the links.
arXiv Detail & Related papers (2024-02-13T12:12:39Z) - Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidences that variable-length is beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z) - FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users [19.209830150036254]
The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm.
Next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server.
This paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme.
arXiv Detail & Related papers (2023-06-08T11:20:00Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch descent gradient.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is on the demand of cloud service users.
We introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - CodedPaddedFL and CodedSecAgg: Straggler Mitigation and Secure Aggregation in Federated Learning [86.98177890676077]
We present two novel coded federated learning (FL) schemes for linear regression that mitigate the effect of straggling devices.
The first scheme, CodedPaddedFL, mitigates the effect of straggling devices while retaining the privacy level of conventional FL.
The second scheme, CodedSecAgg, provides straggler resiliency and robustness against model inversion attacks.
arXiv Detail & Related papers (2021-12-16T14:26:30Z)
This list is automatically generated from the titles and abstracts of the papers in this site.