DictPFL: Efficient and Private Federated Learning on Encrypted Gradients
- URL: http://arxiv.org/abs/2510.21086v1
- Date: Fri, 24 Oct 2025 01:58:42 GMT
- Title: DictPFL: Efficient and Private Federated Learning on Encrypted Gradients
- Authors: Jiaqi Xue, Mayank Kumar, Yuzhang Shang, Shangqian Gao, Rui Ning, Mengxin Zheng, Xiaoqian Jiang, Qian Lou,
- Abstract summary: We present DictPFL, a framework that achieves full gradient protection with minimal overhead. It encrypts every transmitted gradient while keeping non-transmitted parameters local, preserving privacy without heavy computation. Experiments show that DictPFL reduces communication cost by 402-748$\times$ and accelerates training by 28-65$\times$ compared to fully encrypted FL.
- Score: 46.7448838842482
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative model training across institutions without sharing raw data. However, gradient sharing still risks privacy leakage, such as gradient inversion attacks. Homomorphic Encryption (HE) can secure aggregation but often incurs prohibitive computational and communication overhead. Existing HE-based FL methods sit at two extremes: encrypting all gradients for full privacy at high cost, or partially encrypting gradients to save resources while exposing vulnerabilities. We present DictPFL, a practical framework that achieves full gradient protection with minimal overhead. DictPFL encrypts every transmitted gradient while keeping non-transmitted parameters local, preserving privacy without heavy computation. It introduces two key modules: Decompose-for-Partial-Encrypt (DePE), which decomposes model weights into a static dictionary and an updatable lookup table, only the latter is encrypted and aggregated, while the static dictionary remains local and requires neither sharing nor encryption; and Prune-for-Minimum-Encrypt (PrME), which applies encryption-aware pruning to minimize encrypted parameters via consistent, history-guided masks. Experiments show that DictPFL reduces communication cost by 402-748$\times$ and accelerates training by 28-65$\times$ compared to fully encrypted FL, while outperforming state-of-the-art selective encryption methods by 51-155$\times$ in overhead and 4-19$\times$ in speed. Remarkably, DictPFL's runtime is within 2$\times$ of plaintext FL, demonstrating for the first time, that HE-based private federated learning is practical for real-world deployment. The code is publicly available at https://github.com/UCF-ML-Research/DictPFL.
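The abstract's two modules can be illustrated with a minimal sketch. The decomposition below uses a truncated SVD as a stand-in factorization and a quantile threshold as the pruning criterion; both are assumptions for illustration, not the paper's exact DePE/PrME algorithms. All names (`D`, `T`, `mask`, shapes) are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer weight W (out x in), factored as W ~= D @ T, where D is a
# static dictionary that stays local and T is a small updatable lookup table.
out_dim, in_dim, dict_size = 64, 128, 8
W = rng.standard_normal((out_dim, in_dim))

# One way to obtain such a factorization is a truncated SVD (an assumption;
# the paper's decomposition may differ).
U, s, Vt = np.linalg.svd(W, full_matrices=False)
D = U[:, :dict_size]                       # static dictionary: never transmitted
T = s[:dict_size, None] * Vt[:dict_size]   # lookup table: encrypted and aggregated

# Only T's update leaves the client, so the ciphertext payload shrinks from
# out_dim*in_dim to dict_size*in_dim parameters.
full_params = out_dim * in_dim
sent_params = dict_size * in_dim
print(f"params sent per round: {sent_params} / {full_params}")

# PrME-style history-guided mask (sketch): prune lookup-table entries with small
# accumulated gradient magnitude, keeping the mask consistent across rounds so
# every client encrypts the same coordinates.
grad_history = np.abs(rng.standard_normal(T.shape))  # stand-in for accumulated |grad|
keep_ratio = 0.25
threshold = np.quantile(grad_history, 1 - keep_ratio)
mask = grad_history >= threshold
encrypted_entries = int(mask.sum())
print(f"entries encrypted after pruning: {encrypted_entries} of {T.size}")
```

Here the factorization alone cuts transmitted parameters 8-fold, and pruning the lookup table shrinks the encrypted payload further; the reported 402-748$\times$ savings come from the paper's full pipeline, not this toy.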
Related papers
- FedBit: Accelerating Privacy-Preserving Federated Learning via Bit-Interleaved Packing and Cross-Layer Co-Design [2.255961793913651]
Federated learning (FL) with fully homomorphic encryption (FHE) effectively safeguards data privacy during model aggregation. FedBit is a hardware/software co-designed framework for the Brakerski-Fan-Vercauteren (BFV) scheme. FedBit employs bit-interleaved data packing to embed multiple model parameters into a single ciphertext coefficient.
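The idea of packing several parameters into one coefficient can be sketched at the bit level. This is a hypothetical illustration of the general technique, not FedBit's actual encoding; the 16-bit field width is an assumed choice that leaves headroom for additive aggregation.

```python
# Several quantized parameters share one plaintext coefficient by occupying
# disjoint bit ranges; additive HE then sums them field-wise, provided each
# field keeps enough headroom so sums never overflow into the next field.
BITS_PER_PARAM = 16  # assumed field width, including headroom

def pack(params):
    """Pack non-negative ints (each < 2**BITS_PER_PARAM) into one coefficient."""
    coeff = 0
    for i, p in enumerate(params):
        coeff |= p << (i * BITS_PER_PARAM)
    return coeff

def unpack(coeff, n):
    mask = (1 << BITS_PER_PARAM) - 1
    return [(coeff >> (i * BITS_PER_PARAM)) & mask for i in range(n)]

# Adding two packed coefficients adds each field independently.
a, b = pack([3, 5, 7]), pack([10, 20, 30])
print(unpack(a + b, 3))  # each field holds the element-wise sum
```

Packing this way amortizes one expensive ciphertext operation over many parameters, which is the source of the throughput gains such schemes report.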
arXiv Detail & Related papers (2025-09-27T03:58:16Z) - DP-Fusion: Token-Level Differentially Private Inference for Large Language Models [51.71591819896191]
Large language models (LLMs) do not preserve privacy at inference time. DP-Fusion provably bounds the influence that a set of tokens in the context can have on the LLM's output. We show that our method creates token-level provably privatized documents with substantially improved theoretical and empirical privacy.
arXiv Detail & Related papers (2025-07-06T20:49:39Z) - Secure Multi-Key Homomorphic Encryption with Application to Privacy-Preserving Federated Learning [18.399679909318394]
We identify a critical security vulnerability in the CDKS scheme when applied to multiparty secure computation tasks. We propose a new scheme, SMHE, which incorporates a novel masking mechanism into the multi-key BFV and CKKS frameworks. We implement a PPFL application using SMHE and demonstrate that it provides significantly improved security with only a modest runtime overhead.
arXiv Detail & Related papers (2025-06-25T03:28:25Z) - QuanCrypt-FL: Quantized Homomorphic Encryption with Pruning for Secure Federated Learning [0.48342038441006796]
We propose QuanCrypt-FL, a novel algorithm that combines low-bit quantization and pruning techniques to enhance protection against attacks.
We validate our approach on MNIST, CIFAR-10, and CIFAR-100 datasets, demonstrating superior performance compared to state-of-the-art methods.
QuanCrypt-FL achieves up to 9x faster encryption, 16x faster decryption, and 1.5x faster inference compared to BatchCrypt, with training time reduced by up to 3x.
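The combination of pruning and low-bit quantization that QuanCrypt-FL builds on can be sketched as follows. This is a generic magnitude-pruning plus symmetric 8-bit quantization example, assumed for illustration; the algorithm's actual criteria and bit widths may differ.

```python
import numpy as np

rng = np.random.default_rng(1)
grads = rng.standard_normal(1000)

# Prune: keep only the top-k gradients by magnitude (assumed criterion).
k = 100
idx = np.argsort(np.abs(grads))[-k:]
sparse = np.zeros_like(grads)
sparse[idx] = grads[idx]

# Quantize survivors to 8-bit integers with a symmetric per-tensor scale,
# so each value fits a small plaintext slot before encryption.
bits = 8
scale = np.abs(sparse).max() / (2 ** (bits - 1) - 1)
q = np.round(sparse / scale).astype(np.int8)

# The server-side view after decryption: dequantize for aggregation.
recon = q.astype(np.float32) * scale
```

Shrinking each plaintext value to 8 bits and transmitting only the surviving coordinates is what makes the encryption, decryption, and training speedups possible.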
arXiv Detail & Related papers (2024-11-08T01:46:00Z) - FheFL: Fully Homomorphic Encryption Friendly Privacy-Preserving Federated Learning with Byzantine Users [19.209830150036254]
The federated learning (FL) technique was developed to mitigate data privacy issues in the traditional machine learning paradigm.
Next-generation FL architectures proposed encryption and anonymization techniques to protect the model updates from the server.
This paper proposes a novel FL algorithm based on a fully homomorphic encryption (FHE) scheme.
arXiv Detail & Related papers (2023-06-08T11:20:00Z) - Federated Nearest Neighbor Machine Translation [66.8765098651988]
In this paper, we propose a novel federated nearest neighbor (FedNN) machine translation framework.
FedNN leverages one-round memorization-based interaction to share knowledge across different clients.
Experiments show that FedNN significantly reduces computational and communication costs compared with FedAvg.
arXiv Detail & Related papers (2023-02-23T18:04:07Z) - THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption [112.02441503951297]
Privacy-preserving inference of transformer models is in demand among cloud service users. We introduce THE-X, an approximation approach for transformers that enables privacy-preserving inference of pre-trained models.
arXiv Detail & Related papers (2022-06-01T03:49:18Z) - Decepticons: Corrupted Transformers Breach Privacy in Federated Learning for Language Models [58.631918656336005]
We propose a novel attack that reveals private user text by deploying malicious parameter vectors.
Unlike previous attacks on FL, the attack exploits characteristics of both the Transformer architecture and the token embedding.
arXiv Detail & Related papers (2022-01-29T22:38:21Z) - Achieving Personalized Federated Learning with Sparse Local Models [75.76854544460981]
Federated learning (FL) is vulnerable to heterogeneously distributed data.
To counter this issue, personalized FL (PFL) was proposed to produce dedicated local models for each individual user.
Existing PFL solutions either demonstrate unsatisfactory generalization towards different model architectures or cost enormous extra computation and memory.
We propose FedSpa, a novel PFL scheme that employs personalized sparse masks to customize sparse local models on the edge.
arXiv Detail & Related papers (2022-01-27T08:43:11Z) - FLASHE: Additively Symmetric Homomorphic Encryption for Cross-Silo Federated Learning [9.177048551836897]
Homomorphic encryption (HE) is a promising privacy-preserving technique for cross-silo federated learning (FL).
arXiv Detail & Related papers (2021-09-02T02:36:04Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences of its use.