Advancing Practical Homomorphic Encryption for Federated Learning: Theoretical Guarantees and Efficiency Optimizations
- URL: http://arxiv.org/abs/2509.20476v1
- Date: Wed, 24 Sep 2025 18:43:52 GMT
- Title: Advancing Practical Homomorphic Encryption for Federated Learning: Theoretical Guarantees and Efficiency Optimizations
- Authors: Ren-Yi Huang, Dumindu Samaraweera, Prashant Shekhar, J. Morris Chang
- Abstract summary: Federated Learning (FL) enables collaborative model training while preserving data privacy by keeping raw data locally stored on client devices. Recent studies reveal that sharing model gradients creates vulnerability to Model Inversion Attacks. This paper presents a framework for theoretical analysis of the underlying principles of selective encryption as a defense against model inversion attacks.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Federated Learning (FL) enables collaborative model training while preserving data privacy by keeping raw data locally stored on client devices, preventing access from other clients or the central server. However, recent studies reveal that sharing model gradients creates vulnerability to Model Inversion Attacks, particularly Deep Leakage from Gradients (DLG), which reconstructs private training data from shared gradients. While Homomorphic Encryption has been proposed as a promising defense mechanism to protect gradient privacy, fully encrypting all model gradients incurs high computational overhead. Selective encryption approaches aim to balance privacy protection with computational efficiency by encrypting only specific gradient components. However, the existing literature largely overlooks a theoretical exploration of the spectral behavior of encrypted versus unencrypted parameters, relying instead primarily on empirical evaluations. To address this gap, this paper presents a framework for theoretical analysis of the underlying principles of selective encryption as a defense against model inversion attacks. We then provide a comprehensive empirical study that identifies and quantifies the critical factors, such as model complexity, encryption ratios, and exposed gradients, that influence defense effectiveness. Our theoretical framework clarifies the relationship between gradient selection and privacy preservation, while our experimental evaluation demonstrates how these factors shape the robustness of defenses against model inversion attacks. Collectively, these contributions advance the understanding of selective encryption mechanisms and offer principled guidance for designing efficient, scalable, privacy-preserving federated learning systems.
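The selective-encryption idea described in the abstract can be sketched in a few lines: encrypt only the most significant gradient components (here, top-k by magnitude, one plausible significance metric) and share the rest in plaintext to reduce overhead. This is a minimal illustrative sketch, not the authors' implementation; `mock_encrypt` is a hypothetical stand-in for a real homomorphic scheme such as CKKS or Paillier.

```python
def select_topk_indices(grad, ratio):
    """Indices of the top `ratio` fraction of components by magnitude."""
    k = max(1, int(len(grad) * ratio))
    order = sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)
    return set(order[:k])

def mock_encrypt(x):
    # Placeholder for a homomorphic ciphertext; a real system would
    # produce an encrypted value here (e.g. via CKKS or Paillier).
    return ("enc", x)

def selectively_encrypt(grad, ratio=0.1):
    """Encrypt only the selected components; leave the rest plaintext."""
    enc_idx = select_topk_indices(grad, ratio)
    return [mock_encrypt(g) if i in enc_idx else g
            for i, g in enumerate(grad)]

# With ratio=0.4 on a 5-element gradient, the two largest-magnitude
# entries (-0.9 and 0.5) are encrypted; the rest stay plaintext.
shared = selectively_encrypt([0.02, -0.9, 0.1, 0.5, -0.03], ratio=0.4)
```

The encryption ratio is exactly the knob the paper studies: higher ratios strengthen the defense against gradient-inversion reconstruction but increase the homomorphic-computation cost.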
Related papers
- SRFed: Mitigating Poisoning Attacks in Privacy-Preserving Federated Learning with Heterogeneous Data [5.7335377562335275]
Federated Learning (FL) enables collaborative model training without exposing clients' private data, and has been widely adopted in privacy-sensitive scenarios. It faces two critical security threats: curious servers that may launch inference attacks to reconstruct clients' private data, and compromised clients that can launch poisoning attacks to disrupt model aggregation. We propose SRFed, an efficient Byzantine-robust and privacy-preserving FL framework for Non-IID scenarios.
arXiv Detail & Related papers (2026-02-18T14:14:38Z) - Physical Layer Deception based on Semantic Distortion [58.38604209714828]
Physical layer deception (PLD) is a framework that integrates physical layer security (PLS) with deception techniques. We extend this framework to a semantic communication model and conduct a theoretical analysis using semantic distortion as the performance metric.
arXiv Detail & Related papers (2025-10-16T18:23:35Z) - Evaluating Selective Encryption Against Gradient Inversion Attacks [15.000605214632243]
Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning. This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks.
arXiv Detail & Related papers (2025-08-06T07:31:43Z) - Secure Distributed Learning for CAVs: Defending Against Gradient Leakage with Leveled Homomorphic Encryption [0.0]
Homomorphic Encryption (HE) offers a promising alternative to Differential Privacy (DP) and Secure Multi-Party Computation (SMPC). We evaluate various HE schemes to identify the most suitable for Federated Learning (FL) in resource-constrained environments. We develop a full HE-based FL pipeline that effectively mitigates Deep Leakage from Gradients (DLG) attacks while preserving model accuracy.
arXiv Detail & Related papers (2025-06-09T16:12:18Z) - Encrypted Large Model Inference: The Equivariant Encryption Paradigm [18.547945807599543]
We introduce Equivariant Encryption (EE), a novel paradigm designed to enable secure, "blind" inference on encrypted data with near-zero performance overhead. Unlike fully homomorphic approaches that encrypt the entire computational graph, EE selectively obfuscates critical internal representations within neural network layers. EE maintains high fidelity and throughput, effectively bridging the gap between robust data confidentiality and the stringent efficiency requirements of modern, large-scale model inference.
arXiv Detail & Related papers (2025-02-03T03:05:20Z) - FEDLAD: Federated Evaluation of Deep Leakage Attacks and Defenses [50.921333548391345]
Federated Learning is a privacy-preserving decentralized machine learning paradigm. Recent research has revealed that private ground-truth data can be recovered through a gradient technique known as Deep Leakage. This paper introduces the FEDLAD Framework (Federated Evaluation of Deep Leakage Attacks and Defenses), a comprehensive benchmark for evaluating Deep Leakage attacks and defenses.
arXiv Detail & Related papers (2024-11-05T11:42:26Z) - GI-NAS: Boosting Gradient Inversion Attacks Through Adaptive Neural Architecture Search [52.27057178618773]
Gradient Inversion Attacks invert the transmitted gradients in Federated Learning (FL) systems to reconstruct the sensitive data of local clients. A majority of gradient inversion methods rely heavily on explicit prior knowledge, which is often unavailable in realistic scenarios. We propose GI-NAS, a gradient inversion attack based on Neural Architecture Search, which adaptively searches the network and captures the implicit priors behind neural architectures.
arXiv Detail & Related papers (2024-05-31T09:29:43Z) - Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z) - Gradient Leakage Defense with Key-Lock Module for Federated Learning [5.380009458891537]
Federated Learning (FL) is a widely adopted privacy-preserving machine learning approach. Recent findings reveal that privacy may be compromised and sensitive information potentially recovered from shared gradients. We propose a new gradient leakage defense technique that secures arbitrary model architectures using a private key-lock module.
arXiv Detail & Related papers (2023-05-06T16:47:52Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Decentralized Stochastic Optimization with Inherent Privacy Protection [103.62463469366557]
Decentralized optimization is the basic building block of modern collaborative machine learning, distributed estimation and control, and large-scale sensing.
Because the data involved are often sensitive, privacy protection has become an increasingly pressing need in the implementation of decentralized optimization algorithms.
arXiv Detail & Related papers (2022-05-08T14:38:23Z)
This list is automatically generated from the titles and abstracts of the papers in this site.