SenseCrypt: Sensitivity-guided Selective Homomorphic Encryption for Joint Federated Learning in Cross-Device Scenarios
- URL: http://arxiv.org/abs/2508.04100v1
- Date: Wed, 06 Aug 2025 05:42:41 GMT
- Title: SenseCrypt: Sensitivity-guided Selective Homomorphic Encryption for Joint Federated Learning in Cross-Device Scenarios
- Authors: Borui Li, Li Yan, Junhao Han, Jianmin Liu, Lei Yu
- Abstract summary: Homomorphic Encryption (HE) prevails in securing Federated Learning (FL), but suffers from high overhead and adaptation cost. We propose SenseCrypt, a Sensitivity-guided selective Homomorphic EnCryption framework, to adaptively balance security and HE overhead per cross-device FL client. Experiments demonstrate that SenseCrypt ensures security against state-of-the-art inversion attacks, while achieving model accuracy comparable to that on IID data and reducing training time by 58.4%-88.7% compared with traditional HE methods.
- Score: 7.594528129456989
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Homomorphic Encryption (HE) prevails in securing Federated Learning (FL), but suffers from high overhead and adaptation cost. Selective HE methods, which partially encrypt model parameters by a global mask, are expected to protect privacy with reduced overhead and easy adaptation. However, in cross-device scenarios with heterogeneous data and system capabilities, traditional Selective HE methods exacerbate client straggling and suffer degraded HE overhead reduction. Accordingly, we propose SenseCrypt, a Sensitivity-guided selective Homomorphic EnCryption framework, to adaptively balance security and HE overhead per cross-device FL client. Given the observation that model parameter sensitivity is effective for measuring clients' data distribution similarity, we first design a privacy-preserving method to cluster clients with similar data distributions. Then, we develop a scoring mechanism to deduce the straggler-free ratio of model parameters that can be encrypted by each client per cluster. Finally, for each client, we formulate and solve a multi-objective model parameter selection optimization problem, which minimizes HE overhead while maximizing model security without causing straggling. Experiments demonstrate that SenseCrypt ensures security against state-of-the-art inversion attacks, while achieving model accuracy comparable to that on IID data and reducing training time by 58.4%-88.7% compared with traditional HE methods.
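To make the selection step concrete, here is a minimal sketch of sensitivity-guided selective encryption in Python. The first-order sensitivity proxy (|gradient x parameter|) and the fixed `encrypt_ratio` are illustrative assumptions standing in for SenseCrypt's cluster-wise scoring and per-client optimization, and the Paillier scheme from the `phe` library stands in for whatever HE backend the paper uses.

```python
# Illustrative sketch, not the paper's implementation: encrypt only the
# most sensitive fraction of a client's parameters before upload.
import numpy as np
from phe import paillier  # pip install phe

def selective_encrypt(params, grads, encrypt_ratio, public_key):
    """Encrypt the top `encrypt_ratio` fraction of parameters by sensitivity."""
    sensitivity = np.abs(grads * params)    # assumed first-order sensitivity proxy
    k = int(len(params) * encrypt_ratio)    # stand-in for the straggler-free ratio
    mask = np.zeros(len(params), dtype=bool)
    mask[np.argsort(sensitivity)[-k:]] = True
    update = {
        i: public_key.encrypt(float(v)) if mask[i] else float(v)
        for i, v in enumerate(params)
    }
    return update, mask

# Short key for demo speed only; real deployments use >= 2048 bits.
public_key, private_key = paillier.generate_paillier_keypair(n_length=1024)
params, grads = np.random.randn(200), np.random.randn(200)
update, mask = selective_encrypt(params, grads, encrypt_ratio=0.1, public_key=public_key)
print(f"encrypted {mask.sum()} of {len(params)} parameters")
```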
Related papers
- SelectiveShield: Lightweight Hybrid Defense Against Gradient Leakage in Federated Learning [4.501710235227319]
Federated Learning (FL) enables collaborative model training on decentralized data but remains vulnerable to gradient leakage attacks. Existing defense mechanisms, such as differential privacy (DP) and homomorphic encryption (HE), often introduce a trade-off between privacy, model utility, and system overhead. We propose SelectiveShield, a lightweight hybrid defense framework that adaptively integrates homomorphic encryption and differential privacy.
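A minimal sketch of the hybrid routing idea, under stated assumptions: a fixed sensitivity threshold decides which branch a parameter takes, and `encrypt_fn` is a hypothetical stand-in for any HE encryptor (e.g. a Paillier public key's encrypt method).

```python
# Illustrative hybrid defense, not SelectiveShield's exact design:
# high-sensitivity parameters go to HE, the rest get Gaussian DP noise.
import numpy as np

def hybrid_protect(params, sensitivity, threshold, sigma, encrypt_fn):
    protected = []
    for value, s in zip(params, sensitivity):
        if s >= threshold:
            protected.append(encrypt_fn(float(value)))  # HE branch
        else:
            # DP branch: Gaussian mechanism with noise scale sigma
            protected.append(float(value + np.random.normal(0.0, sigma)))
    return protected

params = np.random.randn(100)
sensitivity = np.abs(params)  # toy sensitivity proxy
out = hybrid_protect(params, sensitivity, threshold=1.5, sigma=0.05,
                     encrypt_fn=lambda v: ("ENC", v))  # placeholder encryptor
```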
arXiv Detail & Related papers (2025-08-06T09:50:39Z)
- SHE-LoRA: Selective Homomorphic Encryption for Federated Tuning with Heterogeneous LoRA [12.458373666338082]
SHE-LoRA integrates selective homomorphic encryption and low-rank adaptation. We show that SHE-LoRA maintains performance comparable to non-private baselines. It also significantly reduces communication overhead by 94.901% and encryption overhead by 99.829%, compared to the baseline.
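As a rough illustration of why pairing HE with LoRA is cheap: only the low-rank adapter travels, and only a chosen slice of it is encrypted. The column-norm selection heuristic below is an assumption for illustration, not necessarily SHE-LoRA's actual criterion.

```python
# Illustrative only: choose the highest-norm columns of a LoRA adapter for HE.
import numpy as np

def select_lora_columns(B: np.ndarray, ratio: float) -> np.ndarray:
    """Return indices of the LoRA B-matrix columns with the largest norms."""
    k = max(1, int(B.shape[1] * ratio))
    return np.argsort(np.linalg.norm(B, axis=0))[-k:]

d_out, rank = 768, 8
B = np.random.randn(d_out, rank) * 0.01
cols = select_lora_columns(B, ratio=0.25)
# Only B[:, cols] would travel under HE; because the adapter is tiny relative
# to the base model, the encrypted payload shrinks by orders of magnitude.
print("columns selected for encryption:", cols)
```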
arXiv Detail & Related papers (2025-05-27T11:36:18Z)
- Secure Generalization through Stochastic Bidirectional Parameter Updates Using Dual-Gradient Mechanism [6.03163048890944]
Federated learning (FL) has gained increasing attention due to privacy-preserving collaborative training on decentralized clients. Recent research has underscored the risk of exposing private data to adversaries, even within FL frameworks. We generate diverse models for each client by applying systematic perturbations to model parameters at a fine-grained level.
arXiv Detail & Related papers (2025-04-03T02:06:57Z)
- Optimal Transport-Guided Source-Free Adaptation for Face Anti-Spoofing [58.56017169759816]
We introduce a novel method in which the face anti-spoofing model can be adapted by the client itself to a target domain at test time. Specifically, we develop a prototype-based base model and an optimal transport-guided adaptor. In cross-domain and cross-attack settings, compared with recent methods, our method achieves average relative improvements of 19.17% in HTER and 8.58% in AUC.
arXiv Detail & Related papers (2025-03-29T06:10:34Z)
- A Selective Homomorphic Encryption Approach for Faster Privacy-Preserving Federated Learning [2.942616054218564]
Federated learning (FL) has emerged as a critical approach for privacy-preserving machine learning in healthcare. Current security implementations for these systems face a fundamental trade-off: rigorous cryptographic protections impose prohibitive computational overhead. We present Fast and Secure Federated Learning, a novel approach that strategically combines selective homomorphic encryption, differential privacy, and bitwise scrambling to achieve robust security.
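Of the three ingredients, bitwise scrambling is the simplest to sketch: XOR the serialized parameters against a keystream derived from a shared secret. The hash-counter construction below is a generic illustration, not the paper's exact scheme.

```python
# Generic XOR scrambling sketch (illustrative; not the paper's construction).
import hashlib
import numpy as np

def keystream(secret: bytes, nonce: bytes, n_bytes: int) -> bytes:
    out, counter = b"", 0
    while len(out) < n_bytes:
        out += hashlib.sha256(secret + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:n_bytes]

def scramble(params: np.ndarray, secret: bytes, nonce: bytes) -> bytes:
    raw = params.astype(np.float32).tobytes()
    return bytes(a ^ b for a, b in zip(raw, keystream(secret, nonce, len(raw))))

def unscramble(blob: bytes, secret: bytes, nonce: bytes) -> np.ndarray:
    raw = bytes(a ^ b for a, b in zip(blob, keystream(secret, nonce, len(blob))))
    return np.frombuffer(raw, dtype=np.float32)

w = np.random.randn(4).astype(np.float32)
blob = scramble(w, secret=b"shared-key", nonce=b"round-1")
assert np.array_equal(unscramble(blob, b"shared-key", b"round-1"), w)
```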
arXiv Detail & Related papers (2025-01-22T14:37:44Z)
- PeFAD: A Parameter-Efficient Federated Framework for Time Series Anomaly Detection [51.20479454379662]
In light of increasing privacy concerns, we propose a parameter-efficient Federated Anomaly Detection framework named PeFAD.
We conduct extensive evaluations on four real datasets, where PeFAD outperforms existing state-of-the-art baselines by up to 28.74%.
arXiv Detail & Related papers (2024-06-04T13:51:08Z)
- FedAA: A Reinforcement Learning Perspective on Adaptive Aggregation for Fair and Robust Federated Learning [5.622065847054885]
Federated Learning (FL) has emerged as a promising approach for privacy-preserving model training across decentralized devices. We introduce a novel method called FedAA, which optimizes client contributions via Adaptive Aggregation to enhance model robustness against malicious clients.
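A toy version of score-weighted aggregation: each client's update is weighted by its similarity to the coordinate-wise median, a simple robust heuristic standing in for FedAA's learned RL policy.

```python
# Illustrative adaptive aggregation (median-similarity weights, not FedAA's RL).
import numpy as np

def adaptive_aggregate(updates):
    """updates: list of flattened per-client parameter vectors."""
    stacked = np.stack(updates)
    median = np.median(stacked, axis=0)
    # Cosine similarity to the median acts as a trust score per client.
    scores = np.array([
        float(u @ median / (np.linalg.norm(u) * np.linalg.norm(median) + 1e-12))
        for u in stacked
    ])
    weights = np.clip(scores, 0.0, None)   # distrust negatively aligned clients
    weights /= weights.sum() + 1e-12
    return (weights[:, None] * stacked).sum(axis=0)

clients = [np.random.randn(10) for _ in range(5)]
clients.append(np.random.randn(10) * 50)   # one suspiciously large update
global_update = adaptive_aggregate(clients)
```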
arXiv Detail & Related papers (2024-02-08T10:22:12Z)
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing their data directly with a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), which is a novel attack method that can be launched from clients.
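The generic recipe such attacks build on can be sketched in a few lines of PyTorch: optimize dummy inputs until their gradient matches the observed one. This is the classic DLG loop, shown for intuition only; CGI's client-side poisoning variant differs in how the attacker positions itself.

```python
# Classic DLG-style gradient inversion sketch (not CGI's specific variant).
import torch
import torch.nn as nn

model = nn.Linear(32, 4)
loss_fn = nn.CrossEntropyLoss()

# Gradient the victim would share in FL (what the attacker observes).
x_true, y_true = torch.randn(1, 32), torch.tensor([2])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true), model.parameters())

# Attacker optimizes dummy data/labels so their gradient matches the observation.
x_dummy = torch.randn(1, 32, requires_grad=True)
y_dummy = torch.randn(1, 4, requires_grad=True)   # soft labels
opt = torch.optim.LBFGS([x_dummy, y_dummy])

def closure():
    opt.zero_grad()
    dummy_loss = torch.sum(
        torch.softmax(y_dummy, -1) * -torch.log_softmax(model(x_dummy), -1))
    dummy_grads = torch.autograd.grad(dummy_loss, model.parameters(),
                                      create_graph=True)
    grad_diff = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    grad_diff.backward()
    return grad_diff

for _ in range(50):
    opt.step(closure)
print("reconstruction MSE:", float(((x_dummy - x_true) ** 2).mean()))
```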
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
- G$^2$uardFL: Safeguarding Federated Learning Against Backdoor Attacks through Attributed Client Graph Clustering [116.4277292854053]
Federated Learning (FL) offers collaborative model training without data sharing.
FL is vulnerable to backdoor attacks, where poisoned model weights lead to compromised system integrity.
We present G$^2$uardFL, a protective framework that reinterprets the identification of malicious clients as an attributed graph clustering problem.
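The reinterpretation is easy to prototype with off-the-shelf tools: build a similarity graph over flattened client updates and partition it, flagging the minority cluster. This is only a caricature; G$^2$uardFL's attributed-graph formulation and clustering pipeline are considerably richer.

```python
# Toy client-graph clustering for malicious-client detection (illustrative).
import numpy as np
from sklearn.cluster import SpectralClustering
from sklearn.metrics.pairwise import cosine_similarity

updates = np.random.randn(10, 128)       # one flattened update per client
updates[8:] += 5.0                       # two anomalous (poisoned) clients
affinity = np.clip(cosine_similarity(updates), 0.0, None)  # needs >= 0 entries
labels = SpectralClustering(n_clusters=2, affinity="precomputed",
                            random_state=0).fit_predict(affinity)
# Flag the smaller cluster as potentially malicious.
suspect = min(set(labels), key=lambda c: int((labels == c).sum()))
print("flagged clients:", np.where(labels == suspect)[0])
```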
arXiv Detail & Related papers (2023-06-08T07:15:04Z)
- Theoretically Principled Federated Learning for Balancing Privacy and Utility [61.03993520243198]
We propose a general learning framework for protection mechanisms that protect privacy by distorting model parameters.
It can achieve a personalized utility-privacy trade-off for each model parameter, on each client, at each communication round in federated learning.
arXiv Detail & Related papers (2023-05-24T13:44:02Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
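The per-client density modeling at the heart of this approach is straightforward to prototype with scikit-learn; the snippet below fits a plain centralized GMM, whereas FedGMM itself couples the mixture with a federated EM procedure.

```python
# Centralized stand-in for FedGMM's per-client mixture modeling (illustrative).
import numpy as np
from sklearn.mixture import GaussianMixture

client_data = np.random.randn(500, 16)
gmm = GaussianMixture(n_components=3, covariance_type="diag", random_state=0)
gmm.fit(client_data)

# A new client can be matched to existing components cheaply:
new_batch = np.random.randn(32, 16)
resp = gmm.predict_proba(new_batch).mean(axis=0)   # soft component assignment
print("component responsibilities:", np.round(resp, 3))
# The log-likelihood doubles as an uncertainty / novel-sample score.
print("avg log-likelihood:", gmm.score(new_batch))
```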
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- FedCC: Robust Federated Learning against Model Poisoning Attacks [0.0]
Federated learning is a distributed framework designed to address privacy concerns. It introduces new attack surfaces, which are especially pronounced when data is non-independently and identically distributed (non-IID). We present FedCC, a simple yet effective novel defense algorithm against model poisoning attacks.
arXiv Detail & Related papers (2022-12-05T01:52:32Z)
- FLIP: A Provable Defense Framework for Backdoor Mitigation in Federated Learning [66.56240101249803]
We study how hardening benign clients can affect the global model (and the malicious clients).
We propose a trigger reverse-engineering based defense and show that our method achieves improved performance with guaranteed robustness.
Our results on eight competing SOTA defense methods show the empirical superiority of our method on both single-shot and continuous FL backdoor attacks.
arXiv Detail & Related papers (2022-10-23T22:24:03Z)
- CrowdGuard: Federated Backdoor Detection in Federated Learning [39.58317527488534]
This paper presents a novel defense mechanism, CrowdGuard, that effectively mitigates backdoor attacks in Federated Learning.
CrowdGuard employs a server-located stacked clustering scheme to enhance its resilience to rogue client feedback.
The evaluation results demonstrate that CrowdGuard achieves a 100% True-Positive-Rate and True-Negative-Rate across various scenarios.
arXiv Detail & Related papers (2022-10-14T11:27:49Z)