Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI
- URL: http://arxiv.org/abs/2603.03398v1
- Date: Tue, 03 Mar 2026 12:43:44 GMT
- Title: Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI
- Authors: Edouard Lansiaux
- Abstract summary: Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, gradient inversion attacks can reconstruct patient information, Byzantine clients can poison the global model, and the \emph{Harvest Now, Decrypt Later} (HNDL) threat renders today's encrypted traffic vulnerable to future quantum adversaries. We introduce \textbf{ZKFL-PQ} (\emph{Zero-Knowledge Federated Learning, Post-Quantum}), a three-tiered cryptographic protocol that hybridizes (i) ML-KEM for quantum-resistant key encapsulation, (ii) lattice-based Zero-Knowledge Proofs for verifiable \emph{norm-constrained} gradient integrity, and (iii) BFV homomorphic encryption for privacy-preserving aggregation.
- Score: 0.0
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, the exchange of model updates exposes critical vulnerabilities: gradient inversion attacks can reconstruct patient information, Byzantine clients can poison the global model, and the \emph{Harvest Now, Decrypt Later} (HNDL) threat renders today's encrypted traffic vulnerable to future quantum adversaries. We introduce \textbf{ZKFL-PQ} (\emph{Zero-Knowledge Federated Learning, Post-Quantum}), a three-tiered cryptographic protocol that hybridizes (i) ML-KEM (FIPS~203) for quantum-resistant key encapsulation, (ii) lattice-based Zero-Knowledge Proofs for verifiable \emph{norm-constrained} gradient integrity, and (iii) BFV homomorphic encryption for privacy-preserving aggregation. We formalize the security model and prove correctness and zero-knowledge properties under the Module-LWE, Ring-LWE, and SIS assumptions \emph{in the classical random oracle model}. We evaluate ZKFL-PQ on synthetic medical imaging data across 5 federated clients over 10 training rounds. Our protocol achieves \textbf{100\% rejection of norm-violating updates} while maintaining model accuracy at 100\%, compared to a catastrophic drop to 23\% under standard FL. The computational overhead (factor $\sim$20$\times$) is analyzed and shown to be compatible with clinical research workflows operating on daily or weekly training cycles. We emphasize that the current defense guarantees rejection of large-norm malicious updates; robustness against subtle low-norm or directional poisoning remains future work.
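The protocol's headline defense, rejection of norm-violating updates, reduces to a simple server-side rule once the cryptography is stripped away. Below is a minimal plaintext sketch of that rule, assuming a FedAvg-style server and a hypothetical public bound NORM_BOUND; in ZKFL-PQ itself the bound is enforced via lattice-based ZKPs over BFV-encrypted updates, not in the clear.

```python
import numpy as np

# Illustrative sketch of the norm-constraint idea from the abstract: the server
# accepts a client update only if its L2 norm is within a public bound B.
# In ZKFL-PQ this check is done via a zero-knowledge proof over encrypted
# updates; here we show only the plaintext logic (hypothetical names/values).

NORM_BOUND = 5.0  # public bound B; assumed value for illustration

def accept_update(update: np.ndarray, bound: float = NORM_BOUND) -> bool:
    """Reject any update whose L2 norm exceeds the public bound."""
    return float(np.linalg.norm(update)) <= bound

def aggregate(updates: list[np.ndarray]) -> np.ndarray:
    """Average only the updates that pass the norm check (FedAvg-style)."""
    accepted = [u for u in updates if accept_update(u)]
    if not accepted:
        raise ValueError("no update passed the norm check")
    return np.mean(accepted, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0, 0.1, size=100) for _ in range(4)]
malicious = [rng.normal(0, 100.0, size=100)]  # large-norm poisoning attempt
agg = aggregate(honest + malicious)
print("accepted:", sum(accept_update(u) for u in honest + malicious), "of 5")
```

As the abstract itself notes, this guards only against large-norm poisoning; a low-norm or directional attacker passes the check untouched.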
Related papers
- Understanding the Resource Cost of Fully Homomorphic Encryption in Quantum Federated Learning [1.1666234644810893]
Homomorphic encryption of parameters has been proposed as a solution in Quantum Federated Learning (QFL). We evaluate the overhead introduced by Fully Homomorphic Encryption (FHE) in QFL setups and assess its feasibility for real-world applications.
arXiv Detail & Related papers (2026-03-03T09:36:55Z)
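A rough way to reproduce the kind of overhead measurement this entry describes is to time plaintext versus homomorphically encrypted aggregation. The sketch below assumes the TenSEAL library and its CKKS scheme; the paper's actual schemes, parameters, and QFL setup may differ.

```python
import time
import numpy as np
import tenseal as ts

# Encryption context: CKKS with illustrative parameters, not the paper's.
ctx = ts.context(ts.SCHEME_TYPE.CKKS, poly_modulus_degree=8192,
                 coeff_mod_bit_sizes=[60, 40, 40, 60])
ctx.global_scale = 2**40
ctx.generate_galois_keys()

params = [np.random.rand(4096) for _ in range(5)]  # 5 clients' parameter vectors

t0 = time.perf_counter()
plain_sum = np.sum(params, axis=0)                 # baseline: plaintext aggregation
t_plain = time.perf_counter() - t0

t0 = time.perf_counter()
enc = [ts.ckks_vector(ctx, p.tolist()) for p in params]  # encrypt each update
enc_sum = enc[0]
for v in enc[1:]:
    enc_sum = enc_sum + v                          # ciphertext-ciphertext addition
dec = enc_sum.decrypt()
t_enc = time.perf_counter() - t0

print(f"plaintext: {t_plain:.6f}s, encrypted: {t_enc:.3f}s, "
      f"overhead: {t_enc / max(t_plain, 1e-9):.0f}x")
```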
- Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI [5.6285415648839425]
Collaborative machine learning promises improved diagnostic accuracy by leveraging diverse datasets, yet privacy regulations such as HIPAA prohibit direct patient data sharing. This paper presents a privacy-preserving federated learning framework combining Vision Transformers (ViT) with homomorphic encryption (HE) for secure multi-institutional histopathology classification.
arXiv Detail & Related papers (2025-11-26T02:27:40Z)
- AdeptHEQ-FL: Adaptive Homomorphic Encryption for Federated Learning of Hybrid Classical-Quantum Models with Dynamic Layer Sparing [0.0]
Federated Learning (FL) faces inherent challenges in balancing model performance, privacy preservation, and communication efficiency. We introduce AdeptHEQ-FL, a unified hybrid classical-quantum FL framework that integrates a CNN-PQC architecture for expressive decentralized learning. We establish formal privacy guarantees, provide convergence analysis, and conduct extensive experiments on the CIFAR-10, SVHN, and Fashion-MNIST datasets.
arXiv Detail & Related papers (2025-07-09T22:29:02Z)
- A Selective Homomorphic Encryption Approach for Faster Privacy-Preserving Federated Learning [2.942616054218564]
Federated learning (FL) has emerged as a critical approach for privacy-preserving machine learning in healthcare. Current security implementations for these systems face a fundamental trade-off: rigorous cryptographic protections impose prohibitive computational overhead. We present Fast and Secure Federated Learning, a novel approach that strategically combines selective homomorphic encryption, differential privacy, and bitwise scrambling to achieve robust security.
arXiv Detail & Related papers (2025-01-22T14:37:44Z)
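The selective-encryption idea in the entry above can be sketched independently of any concrete HE library: spend the expensive cryptography only on the most sensitive coordinates and protect the rest more cheaply. Everything below (the encrypt_stub, the top-k selection, the permutation-based scrambling) is an illustrative guess at such a pipeline, not the paper's implementation.

```python
import numpy as np

def encrypt_stub(values: np.ndarray) -> list:
    """Stand-in for a real HE encryption call (e.g., BFV/CKKS)."""
    return [("ciphertext", float(v)) for v in values]  # placeholder only

def protect_update(update, frac_encrypted=0.1, dp_sigma=0.01, seed=0):
    rng = np.random.default_rng(seed)
    k = max(1, int(frac_encrypted * update.size))
    top = np.argsort(np.abs(update))[-k:]           # most informative coordinates
    mask = np.zeros(update.size, dtype=bool)
    mask[top] = True
    encrypted_part = encrypt_stub(update[mask])     # heavy crypto on a small slice
    noisy_part = update[~mask] + rng.normal(0, dp_sigma, update[~mask].shape)
    perm = rng.permutation(noisy_part.size)         # cheap "scrambling" of the rest
    return encrypted_part, noisy_part[perm], mask, perm

update = np.random.default_rng(1).normal(0, 0.1, 1000)
enc, noisy, mask, perm = protect_update(update)
print(f"encrypted {mask.sum()} of {update.size} coordinates")
```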
- Enhancing Quantum Security over Federated Learning via Post-Quantum Cryptography [38.77135346831741]
Federated learning (FL) has become one of the standard approaches for deploying machine learning models on edge devices.
Current digital signature algorithms can protect these communicated model updates, but they fail to ensure quantum security in the era of large-scale quantum computing.
In this work, we empirically investigate the impact of the three NIST-standardized PQC algorithms for digital signatures within the FL procedure.
arXiv Detail & Related papers (2024-09-06T22:02:08Z)
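A minimal sketch of the setting this entry studies: a client signs its serialized model update with a post-quantum signature, and the server verifies it before aggregation. It assumes the liboqs-python bindings and the "Dilithium2" algorithm identifier; the paper evaluates the NIST-standardized algorithms, which newer liboqs builds expose under names such as "ML-DSA-44".

```python
import oqs
import numpy as np

update = np.random.default_rng(0).normal(0, 0.1, 1000).astype(np.float32)
message = update.tobytes()  # serialize the model update for signing

with oqs.Signature("Dilithium2") as client:
    public_key = client.generate_keypair()
    signature = client.sign(message)  # client signs its update

with oqs.Signature("Dilithium2") as server:
    ok = server.verify(message, signature, public_key)  # server checks it
print("signature valid:", ok)
```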
- Fed-CVLC: Compressing Federated Learning Communications with Variable-Length Codes [54.18186259484828]
In the Federated Learning (FL) paradigm, a parameter server (PS) concurrently communicates with distributed participating clients for model collection, update aggregation, and model distribution over multiple rounds.
We show strong evidence that variable-length codes are beneficial for compression in FL.
We present Fed-CVLC (Federated Learning Compression with Variable-Length Codes), which fine-tunes the code length in response to the dynamics of model updates.
arXiv Detail & Related papers (2024-02-06T07:25:21Z)
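The intuition behind variable-length codes for FL compression is that quantized gradient symbols are far from uniformly distributed, so an entropy code beats a fixed-width one. The toy sketch below demonstrates only this intuition with a standard Huffman construction; Fed-CVLC's round-by-round code-length tuning is not modeled.

```python
import heapq
from collections import Counter
import numpy as np

def huffman_lengths(freqs):
    """Return code lengths per symbol via the standard Huffman construction."""
    heap = [(f, i, {s: 0}) for i, (s, f) in enumerate(freqs.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        fa, _, a = heapq.heappop(heap)
        fb, _, b = heapq.heappop(heap)
        merged = {s: l + 1 for s, l in {**a, **b}.items()}  # one level deeper
        heapq.heappush(heap, (fa + fb, counter, merged))
        counter += 1
    return heap[0][2]

grad = np.random.default_rng(0).laplace(0, 0.05, 100_000)    # peaked, like real gradients
symbols = np.clip(np.round(grad / 0.05), -7, 7).astype(int)  # 15-level quantization
freqs = Counter(symbols.tolist())
lengths = huffman_lengths(freqs)
var_bits = sum(freqs[s] * lengths[s] for s in freqs)
fixed_bits = symbols.size * 4  # fixed 4-bit code covers the 15 levels
print(f"fixed: {fixed_bits} bits, variable: {var_bits} bits "
      f"({100 * (1 - var_bits / fixed_bits):.0f}% saved)")
```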
- FreqFed: A Frequency Analysis-Based Approach for Mitigating Poisoning Attacks in Federated Learning [98.43475653490219]
Federated learning (FL) is susceptible to poisoning attacks.
FreqFed is a novel aggregation mechanism that transforms the model updates into the frequency domain.
We demonstrate that FreqFed can mitigate poisoning attacks effectively with a negligible impact on the utility of the aggregated model.
arXiv Detail & Related papers (2023-12-07T16:56:24Z)
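FreqFed's core move, judging updates by their frequency-domain signature, can be sketched with a DCT and a crude outlier rule. The median-distance filter below is a simplified stand-in for the paper's clustering step, and all thresholds are illustrative.

```python
import numpy as np
from scipy.fft import dct

def low_freq_fingerprint(update: np.ndarray, k: int = 16) -> np.ndarray:
    return dct(update, norm="ortho")[:k]  # first k DCT coefficients

def filter_and_aggregate(updates: list[np.ndarray], k: int = 16) -> np.ndarray:
    fps = np.stack([low_freq_fingerprint(u, k) for u in updates])
    center = np.median(fps, axis=0)
    dists = np.linalg.norm(fps - center, axis=1)
    cutoff = 3 * np.median(dists)                    # crude outlier threshold
    keep = [u for u, d in zip(updates, dists) if d <= cutoff]
    return np.mean(keep, axis=0)

rng = np.random.default_rng(0)
honest = [rng.normal(0, 0.1, 512) for _ in range(8)]
poisoned = [honest[0] + np.sin(np.linspace(0, 20, 512))]  # structured perturbation
agg = filter_and_aggregate(honest + poisoned)
print("aggregated vector norm:", round(float(np.linalg.norm(agg)), 4))
```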
- Client-side Gradient Inversion Against Federated Learning from Poisoning [59.74484221875662]
Federated Learning (FL) enables distributed participants to train a global model without sharing data directly to a central server.
Recent studies have revealed that FL is vulnerable to gradient inversion attack (GIA), which aims to reconstruct the original training samples.
We propose Client-side poisoning Gradient Inversion (CGI), a novel attack that can be launched from clients.
arXiv Detail & Related papers (2023-09-14T03:48:27Z)
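The reconstruction loop underlying gradient inversion attacks is compact: optimize dummy data until its gradient matches the victim's. The PyTorch sketch below shows that core loop (in the style of Deep Leakage from Gradients, with the label assumed known); CGI's client-side poisoning machinery is not modeled.

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(32, 4)
loss_fn = torch.nn.CrossEntropyLoss()

# Victim computes a gradient on private data; the attacker observes it.
x_true = torch.randn(1, 32)
y_true = torch.tensor([2])
true_grads = torch.autograd.grad(loss_fn(model(x_true), y_true),
                                 model.parameters())

# Attacker optimizes dummy data so its gradient matches the observed one.
x_dummy = torch.randn(1, 32, requires_grad=True)
opt = torch.optim.Adam([x_dummy], lr=0.1)
for step in range(300):
    opt.zero_grad()
    dummy_grads = torch.autograd.grad(loss_fn(model(x_dummy), y_true),
                                      model.parameters(), create_graph=True)
    match = sum(((dg - tg) ** 2).sum() for dg, tg in zip(dummy_grads, true_grads))
    match.backward()
    opt.step()

print("reconstruction error:", float(((x_dummy - x_true) ** 2).mean()))
```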
- Do Gradient Inversion Attacks Make Federated Learning Unsafe? [70.0231254112197]
Federated learning (FL) allows the collaborative training of AI models without needing to share raw data.
Recent works on the inversion of deep neural networks from model gradients raised concerns about the security of FL in preventing the leakage of training data.
In this work, we show that these attacks presented in the literature are impractical in real FL use-cases and provide a new baseline attack.
arXiv Detail & Related papers (2022-02-14T18:33:12Z)
- RoFL: Attestable Robustness for Secure Federated Learning [59.63865074749391]
Federated Learning allows a large number of clients to train a joint model without the need to share their private data.
To ensure the confidentiality of the client updates, Federated Learning systems employ secure aggregation.
We present RoFL, a secure Federated Learning system that improves robustness against malicious clients.
arXiv Detail & Related papers (2021-07-07T15:42:49Z)
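The secure aggregation that RoFL hardens rests on pairwise masking: each pair of clients agrees on a random mask that one adds and the other subtracts, so the server learns only the sum. The sketch below shows the cancellation property with locally generated masks; real protocols derive them from key agreement and handle dropouts, and RoFL additionally attaches norm-bound commitments.

```python
import numpy as np

def masked_updates(updates: list[np.ndarray], seed: int = 0) -> list[np.ndarray]:
    n, dim = len(updates), updates[0].size
    masked = [u.astype(float).copy() for u in updates]
    for i in range(n):
        for j in range(i + 1, n):
            # In practice this mask is derived from a pairwise shared key.
            mask = np.random.default_rng(seed + i * n + j).normal(0, 10, dim)
            masked[i] += mask
            masked[j] -= mask
    return masked

rng = np.random.default_rng(42)
updates = [rng.normal(0, 0.1, 8) for _ in range(4)]
masked = masked_updates(updates)
server_sum = np.sum(masked, axis=0)          # server sees only masked vectors
print(np.allclose(server_sum, np.sum(updates, axis=0)))  # True: masks cancel
```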
- How Robust are Randomized Smoothing based Defenses to Data Poisoning? [66.80663779176979]
We present a previously unrecognized threat to robust machine learning models that highlights the importance of training-data quality.
We propose a novel bilevel optimization-based data poisoning attack that degrades the robustness guarantees of certifiably robust classifiers.
Our attack is effective even when the victim trains the models from scratch using state-of-the-art robust training methods.
arXiv Detail & Related papers (2020-12-02T15:30:21Z)
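For context on the defense being attacked: randomized smoothing turns any base classifier into a smoothed one by majority vote under Gaussian input noise, and certifies a robustness radius from the vote margin (Cohen et al., 2019). The sketch below shows only the prediction step with a toy stand-in base model; the bilevel poisoning attack degrades the certified radii by corrupting the training data upstream.

```python
import numpy as np

def base_classifier(x: np.ndarray) -> int:
    """Hypothetical base model: a fixed linear decision rule."""
    w = np.array([1.0, -2.0, 0.5])
    return int(x @ w > 0)

def smoothed_predict(x, sigma=0.5, n_samples=1000, seed=0):
    rng = np.random.default_rng(seed)
    votes = np.zeros(2, dtype=int)
    for _ in range(n_samples):
        votes[base_classifier(x + rng.normal(0, sigma, x.shape))] += 1
    return int(votes.argmax()), votes.max() / n_samples

x = np.array([0.8, 0.1, 0.3])
label, confidence = smoothed_predict(x)
print(f"smoothed prediction: {label} (top-class frequency {confidence:.2f})")
```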
This list is automatically generated from the titles and abstracts of the papers on this site. The site does not guarantee the quality of this information and is not responsible for any consequences of its use.