Privacy-Preserving Federated Learning from Partial Decryption Verifiable Threshold Multi-Client Functional Encryption
- URL: http://arxiv.org/abs/2511.12936v1
- Date: Mon, 17 Nov 2025 03:44:47 GMT
- Title: Privacy-Preserving Federated Learning from Partial Decryption Verifiable Threshold Multi-Client Functional Encryption
- Authors: Minjie Wang, Jinguang Han, Weizhi Meng
- Abstract summary: In federated learning, multiple parties cooperate to train a model without directly exchanging their private data. We construct a partial-decryption-verifiable threshold multi-client functional encryption scheme. VTSAFL empowers clients to verify aggregation results while minimizing both computational and communication overhead.
- Score: 8.905928020204232
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: In federated learning, multiple parties can cooperate to train a model without directly exchanging their private data, but gradient leakage still threatens privacy and model integrity. Although existing schemes use threshold cryptography to mitigate inference attacks, they cannot guarantee the verifiability of aggregation results, leaving the system vulnerable to poisoning attacks. We construct a partial-decryption-verifiable threshold multi-client functional encryption scheme and apply it to federated learning to implement a verifiable threshold secure aggregation protocol (VTSAFL). VTSAFL empowers clients to verify aggregation results while minimizing both computational and communication overhead. The functional key and the partial decryption results of the scheme are of constant size, which guarantees efficiency for large-scale deployment. Experimental results on the MNIST dataset show that VTSAFL achieves the same accuracy as existing schemes while reducing total training time by more than 40% and communication overhead by up to 50%. This efficiency is critical for overcoming the resource constraints inherent in Internet of Things (IoT) devices.
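The secure-aggregation idea at the heart of the abstract can be illustrated with a much-simplified sketch. This is not the paper's VTSAFL construction (which uses threshold multi-client functional encryption with verifiable partial decryption); it shows only the basic principle that per-client updates can be masked so the server learns the sum but no individual gradient. All names and the pairwise-masking design here are illustrative assumptions:

```python
import random

# Pairwise additive masking: each pair of clients (i, j) shares a random
# mask; client i adds it and client j subtracts it, so all masks cancel
# in the aggregate while each individual masked update looks random.

MOD = 2**31 - 1  # work modulo a large number so masks wrap instead of leaking magnitude

def mask_updates(gradients, seed=0):
    """Return masked per-client updates whose sum equals the true sum mod MOD."""
    rng = random.Random(seed)  # stands in for pairwise-agreed shared randomness
    n, dim = len(gradients), len(gradients[0])
    masked = [list(g) for g in gradients]
    for i in range(n):
        for j in range(i + 1, n):
            pairwise = [rng.randrange(MOD) for _ in range(dim)]
            for k in range(dim):
                masked[i][k] = (masked[i][k] + pairwise[k]) % MOD
                masked[j][k] = (masked[j][k] - pairwise[k]) % MOD
    return masked

def aggregate(masked):
    """Server-side sum; the pairwise masks cancel, revealing only the total."""
    dim = len(masked[0])
    return [sum(m[k] for m in masked) % MOD for k in range(dim)]

grads = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
assert aggregate(mask_updates(grads)) == [12, 15, 18]
```

A real threshold scheme additionally lets any qualifying subset of clients complete decryption (tolerating dropouts), and VTSAFL adds verifiability of the partial decryptions, neither of which this toy sketch attempts.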
Related papers
- Privacy-Preserving Federated Learning with Verifiable Fairness Guarantees [0.0]
Federated learning enables collaborative model training across distributed institutions without centralizing sensitive data. This paper introduces CryptoFair-FL, a novel cryptographic framework providing the first verifiable fairness guarantees for federated learning systems.
arXiv Detail & Related papers (2026-01-18T15:06:30Z)
- Secure, Verifiable, and Scalable Multi-Client Data Sharing via Consensus-Based Privacy-Preserving Data Distribution [0.0]
CPPDD is an autonomous protocol for secure multi-client data aggregation. It enforces unanimous-release confidentiality through a dual-layer protection mechanism. It achieves 100% malicious deviation detection, exact data recovery, and three-to-four orders of magnitude lower FLOPs compared to MPC and HE baselines.
arXiv Detail & Related papers (2026-01-01T18:12:50Z)
- PRISM: Privacy-preserving Inference System with Homomorphic Encryption and Modular Activation [0.8197459420866039]
Homomorphic encryption (HE) offers a solution by enabling computations on encrypted data. HE remains incompatible with machine learning models like convolutional neural networks (CNNs) due to their reliance on non-linear activation functions. This work proposes an optimized framework that replaces standard non-linear functions with homomorphically compatible approximations.
arXiv Detail & Related papers (2025-11-11T03:57:12Z)
- Information-Theoretic Decentralized Secure Aggregation with Collusion Resilience [95.33295072401832]
We study the problem of decentralized secure aggregation (DSA) from an information-theoretic perspective. We characterize the optimal rate region, which specifies the minimum achievable communication and secret key rates for DSA. Our results establish the fundamental performance limits of DSA, providing insights for the design of provably secure and communication-efficient protocols.
arXiv Detail & Related papers (2025-08-01T12:51:37Z)
- Conformal Prediction for Privacy-Preserving Machine Learning [83.88591755871734]
Using AES-encrypted variants of the MNIST dataset, we demonstrate that Conformal Prediction methods remain effective even when applied directly in the encrypted domain. Our work sets a foundation for principled uncertainty quantification in secure, privacy-aware learning systems.
arXiv Detail & Related papers (2025-07-13T15:29:14Z)
- Privacy-Preserving Federated Learning via Homomorphic Adversarial Networks [19.876110109857635]
Homomorphic Adversarial Networks (HANs) are robust against privacy attacks. Compared to traditional MK-HE schemes, HANs increase encryption aggregation speed by 6,075 times while incurring a 29.2 times increase in communication overhead.
arXiv Detail & Related papers (2024-12-02T15:59:35Z)
- EncCluster: Scalable Functional Encryption in Federated Learning through Weight Clustering and Probabilistic Filters [3.9660142560142067]
Federated Learning (FL) enables model training across decentralized devices by communicating solely local model updates to an aggregation server.
FL remains vulnerable to inference attacks during model update transmissions.
We present EncCluster, a novel method that integrates model compression through weight clustering with recent decentralized FE and privacy-enhancing data encoding.
arXiv Detail & Related papers (2024-06-13T14:16:50Z)
- Enabling Privacy-preserving Model Evaluation in Federated Learning via Fully Homomorphic Encryption [1.9662978733004604]
Federated learning has become increasingly widespread due to its ability to train models collaboratively without centralizing sensitive data. The evaluation phase presents significant privacy risks that have not been adequately addressed in the literature. We propose a novel evaluation method that leverages fully homomorphic encryption.
arXiv Detail & Related papers (2024-03-21T14:36:55Z)
- FedDBL: Communication and Data Efficient Federated Deep-Broad Learning for Histopathological Tissue Classification [65.7405397206767]
We propose Federated Deep-Broad Learning (FedDBL) to achieve superior classification performance with limited training samples and only one-round communication.
FedDBL greatly outperforms the competitors with only one-round communication and limited training samples, while it even achieves comparable performance with the ones under multiple-round communications.
Since no data or deep models are shared across clients, privacy is well protected and model security is guaranteed, with no risk of model inversion attacks.
arXiv Detail & Related papers (2023-02-24T14:27:41Z)
- Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z)
- Differentially Private Federated Learning with Laplacian Smoothing [72.85272874099644]
Federated learning aims to protect data privacy by collaboratively learning a model without sharing private data among users.
An adversary may still be able to infer the private training data by attacking the released model.
Differential privacy provides a statistical protection against such attacks at the price of significantly degrading the accuracy or utility of the trained models.
arXiv Detail & Related papers (2020-05-01T04:28:38Z)
- CryptoSPN: Privacy-preserving Sum-Product Network Inference [84.88362774693914]
We present a framework for privacy-preserving inference of sum-product networks (SPNs).
CryptoSPN achieves highly efficient and accurate inference in the order of seconds for medium-sized SPNs.
arXiv Detail & Related papers (2020-02-03T14:49:18Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it presents and is not responsible for any consequences arising from its use.