Evaluating Selective Encryption Against Gradient Inversion Attacks
- URL: http://arxiv.org/abs/2508.04155v1
- Date: Wed, 06 Aug 2025 07:31:43 GMT
- Title: Evaluating Selective Encryption Against Gradient Inversion Attacks
- Authors: Jiajun Gu, Yuhang Yao, Shuaiqi Wang, Carlee Joe-Wong
- Abstract summary: Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning. This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks.
- Score: 15.000605214632243
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Gradient inversion attacks pose significant privacy threats to distributed training frameworks such as federated learning, enabling malicious parties to reconstruct sensitive local training data from gradient communications between clients and an aggregation server during the aggregation process. While traditional encryption-based defenses, such as homomorphic encryption, offer strong privacy guarantees without compromising model utility, they often incur prohibitive computational overheads. To mitigate this, selective encryption has emerged as a promising approach, encrypting only a subset of gradient data based on the data's significance under a certain metric. However, there have been few systematic studies on how to specify this metric in practice. This paper systematically evaluates selective encryption methods with different significance metrics against state-of-the-art attacks. Our findings demonstrate the feasibility of selective encryption in reducing computational overhead while maintaining resilience against attacks. We propose a distance-based significance analysis framework that provides theoretical foundations for selecting critical gradient elements for encryption. Through extensive experiments on different model architectures (LeNet, CNN, BERT, GPT-2) and attack types, we identify gradient magnitude as a generally effective metric for protection against optimization-based gradient inversions. However, we also observe that no single selective encryption strategy is universally optimal across all attack scenarios, and we provide guidelines for choosing appropriate strategies for different model architectures and privacy requirements.
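Below is a minimal sketch of the magnitude-based selection that the abstract identifies as a generally effective metric: only the top-rho fraction of gradient entries by absolute value is routed through encryption, while the rest is sent in plaintext. The helper names, the placeholder `encrypt_elements`, and the fraction `rho` are illustrative assumptions, not the paper's actual pipeline.

```python
# Illustrative sketch of magnitude-based selective encryption (assumptions,
# not the paper's implementation): encrypt only the top-rho fraction of
# gradient entries by absolute value.
import numpy as np

def select_by_magnitude(grad: np.ndarray, rho: float = 0.1) -> np.ndarray:
    """Boolean mask marking the top-rho fraction of entries by |g|."""
    flat = np.abs(grad).ravel()
    k = max(1, int(rho * flat.size))
    threshold = np.partition(flat, -k)[-k]      # k-th largest magnitude
    return np.abs(grad) >= threshold

def encrypt_elements(values: np.ndarray) -> list:
    """Placeholder ciphertexts; a real system would use e.g. Paillier or CKKS."""
    return [f"ENC({v:.4f})" for v in values]

def split_update(grad: np.ndarray, rho: float = 0.1):
    """Split a gradient into an encrypted high-magnitude part and a plaintext rest."""
    mask = select_by_magnitude(grad, rho)
    protected = encrypt_elements(grad[mask])    # sent encrypted
    plaintext = np.where(mask, 0.0, grad)       # sent in the clear
    return mask, protected, plaintext

rng = np.random.default_rng(0)
g = rng.normal(size=(4, 8))
mask, protected, plaintext = split_update(g, rho=0.1)
print(f"encrypted {int(mask.sum())} of {g.size} gradient entries")
```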
Related papers
- DATABench: Evaluating Dataset Auditing in Deep Learning from an Adversarial Perspective [59.66984417026933]
We introduce a novel taxonomy, classifying existing methods based on their reliance on internal features (IF) (inherent to the data) versus external features (EF) (artificially introduced for auditing). We formulate two primary attack types: evasion attacks, designed to conceal the use of a dataset, and forgery attacks, intending to falsely implicate an unused dataset. Building on the understanding of existing methods and attack objectives, we further propose systematic attack strategies: decoupling, removal, and detection for evasion; adversarial example-based methods for forgery. Our benchmark, DATABench, comprises 17 evasion attacks, 5 forgery attacks, and 9
arXiv Detail & Related papers (2025-07-08T03:07:15Z) - Secure Distributed Learning for CAVs: Defending Against Gradient Leakage with Leveled Homomorphic Encryption [0.0]
Homomorphic Encryption (HE) offers a promising alternative to Differential Privacy (DP) and Secure Multi-Party Computation (SMPC). We evaluate various HE schemes to identify the most suitable for Federated Learning (FL) in resource-constrained environments. We develop a full HE-based FL pipeline that effectively mitigates Deep Leakage from Gradients (DLG) attacks while preserving model accuracy.
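As a rough illustration of why HE-protected aggregation blocks gradient leakage at the server, the sketch below uses the python-paillier (`phe`) package, an additively homomorphic scheme, as a stand-in for the leveled schemes that paper evaluates; the server adds ciphertexts coordinate-wise without ever decrypting an individual client's gradient.

```python
# Hedged sketch: Paillier (additively homomorphic) stands in for leveled HE.
from phe import paillier

public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)

# Toy per-client gradient vectors.
client_grads = [[0.12, -0.07, 0.33],
                [0.05,  0.02, -0.11]]
encrypted = [[public_key.encrypt(g) for g in grads] for grads in client_grads]

# Server side: coordinate-wise sums over ciphertexts only.
aggregate = [sum(column, public_key.encrypt(0.0)) for column in zip(*encrypted)]

# Only the key holder recovers the aggregated gradient (approx. [0.17, -0.05, 0.22]).
print([round(private_key.decrypt(c), 4) for c in aggregate])
```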
arXiv Detail & Related papers (2025-06-09T16:12:18Z) - Benchmarking Unified Face Attack Detection via Hierarchical Prompt Tuning [58.16354555208417]
Presentation Attack Detection (PAD) and Face Forgery Detection (FFD) are proposed to protect face data from physical media-based Presentation Attacks and digital editing-based DeepFakes, respectively. The lack of a Unified Face Attack Detection model to simultaneously handle attacks in these two categories is mainly attributed to two factors. We present a novel Visual-Language Model-based Hierarchical Prompt Tuning Framework that adaptively explores multiple classification criteria from different semantic spaces.
arXiv Detail & Related papers (2025-05-19T16:35:45Z) - Privacy Preserving and Robust Aggregation for Cross-Silo Federated Learning in Non-IID Settings [1.8434042562191815]
Federated Averaging remains the most widely used aggregation strategy in federated learning. Our method relies solely on gradient updates, eliminating the need for any additional client metadata. Our results establish the effectiveness of gradient masking as a practical and secure solution for federated learning.
arXiv Detail & Related papers (2025-03-06T14:06:20Z) - Hierarchical Manifold Projection for Ransomware Detection: A Novel Geometric Approach to Identifying Malicious Encryption Patterns [0.0]
Encryption-based cyber threats continue to evolve, employing increasingly sophisticated techniques to bypass traditional detection mechanisms. A novel classification framework structured through hierarchical manifold projection introduces a mathematical approach to detecting malicious encryption. The proposed methodology transforms encryption sequences into structured manifold embeddings, ensuring classification robustness through non-Euclidean feature separability.
arXiv Detail & Related papers (2025-02-11T23:20:58Z) - Encrypted Large Model Inference: The Equivariant Encryption Paradigm [18.547945807599543]
We introduce Equivariant Encryption (EE), a novel paradigm designed to enable secure, "blind" inference on encrypted data with near-zero performance overhead. Unlike fully homomorphic approaches that encrypt the entire computational graph, EE selectively obfuscates critical internal representations within neural network layers. EE maintains high fidelity and throughput, effectively bridging the gap between robust data confidentiality and the stringent efficiency requirements of modern, large-scale model inference.
arXiv Detail & Related papers (2025-02-03T03:05:20Z) - Towards Attack-tolerant Federated Learning via Critical Parameter Analysis [85.41873993551332]
Federated learning systems are susceptible to poisoning attacks when malicious clients send false updates to the central server.
This paper proposes a new defense strategy, FedCPA (Federated learning with Critical Parameter Analysis).
Our attack-tolerant aggregation method is based on the observation that benign local models have similar sets of top-k and bottom-k critical parameters, whereas poisoned local models do not.
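A toy sketch of that overlap check, with assumed details (signed top-k/bottom-k indices and Jaccard similarity) rather than the FedCPA implementation: benign updates derived from similar data share most of their extreme-valued parameter indices, while an unrelated (poisoned) update does not.

```python
# Hedged sketch: compare top-k / bottom-k parameter index sets between updates.
import numpy as np

def critical_sets(update: np.ndarray, k: int):
    """Indices of the k largest and k smallest entries of a flattened update."""
    order = np.argsort(update.ravel())
    return set(order[-k:]), set(order[:k])

def critical_overlap(u: np.ndarray, v: np.ndarray, k: int = 50) -> float:
    """Mean Jaccard similarity of the two updates' top-k and bottom-k index sets."""
    (ut, ub), (vt, vb) = critical_sets(u, k), critical_sets(v, k)
    jaccard = lambda a, b: len(a & b) / len(a | b)
    return 0.5 * (jaccard(ut, vt) + jaccard(ub, vb))

rng = np.random.default_rng(1)
base = rng.normal(size=1000)
benign = base + 0.01 * rng.normal(size=1000)   # small local deviation -> high overlap
poisoned = rng.normal(size=1000)               # unrelated update -> low overlap
print(critical_overlap(base, benign), critical_overlap(base, poisoned))
```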
arXiv Detail & Related papers (2023-08-18T05:37:55Z) - Spatial-Frequency Discriminability for Revealing Adversarial Perturbations [53.279716307171604]
Vulnerability of deep neural networks to adversarial perturbations has been widely perceived in the computer vision community.
Current algorithms typically detect adversarial patterns through discriminative decomposition for natural and adversarial data.
We propose a discriminative detector relying on a spatial-frequency Krawtchouk decomposition.
arXiv Detail & Related papers (2023-05-18T10:18:59Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Learning-based Hybrid Local Search for the Hard-label Textual Attack [53.92227690452377]
We consider a rarely investigated but more rigorous setting, namely hard-label attack, in which the attacker could only access the prediction label.
Based on this observation, we propose a novel hard-label attack, called Learning-based Hybrid Local Search (LHLS) algorithm.
Our LHLS significantly outperforms existing hard-label attacks regarding the attack performance as well as adversary quality.
arXiv Detail & Related papers (2022-01-20T14:16:07Z) - Targeted Attack against Deep Neural Networks via Flipping Limited Weight Bits [55.740716446995805]
We study a novel attack paradigm, which modifies model parameters in the deployment stage for malicious purposes.
Our goal is to misclassify a specific sample into a target class without any sample modification.
By utilizing the latest technique in integer programming, we equivalently reformulate this binary integer programming (BIP) problem as a continuous optimization problem.
arXiv Detail & Related papers (2021-02-21T03:13:27Z) - FedBoosting: Federated Learning with Gradient Protected Boosting for Text Recognition [7.988454173034258]
The Federated Learning (FL) framework allows learning a shared model collaboratively without data being centralized or shared among data owners.
We show in this paper that the generalization ability of the joint model is poor on Non-Independent and Non-Identically Distributed (Non-IID) data.
We propose a novel boosting algorithm for FL to address both the generalization and gradient leakage issues.
arXiv Detail & Related papers (2020-07-14T18:47:23Z)