Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI
- URL: http://arxiv.org/abs/2511.20983v1
- Date: Wed, 26 Nov 2025 02:27:40 GMT
- Title: Privacy-Preserving Federated Vision Transformer Learning Leveraging Lightweight Homomorphic Encryption in Medical AI
- Authors: Al Amin, Kamrul Hasan, Liang Hong, Sharif Ullah
- Abstract summary: Collaborative machine learning promises improved diagnostic accuracy by leveraging diverse datasets, yet privacy regulations such as HIPAA prohibit direct patient data sharing. This paper presents a privacy-preserving federated learning framework combining Vision Transformers (ViT) with homomorphic encryption (HE) for secure multi-institutional histopathology classification.
- Score: 5.6285415648839425
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Collaborative machine learning across healthcare institutions promises improved diagnostic accuracy by leveraging diverse datasets, yet privacy regulations such as HIPAA prohibit direct patient data sharing. While federated learning (FL) enables decentralized training without raw data exchange, recent studies show that model gradients in conventional FL remain vulnerable to reconstruction attacks, potentially exposing sensitive medical information. This paper presents a privacy-preserving federated learning framework combining Vision Transformers (ViT) with homomorphic encryption (HE) for secure multi-institutional histopathology classification. The approach leverages the ViT CLS token as a compact 768-dimensional feature representation for secure aggregation, encrypting these tokens using CKKS homomorphic encryption before transmission to the server. We demonstrate that encrypting CLS tokens achieves a 30-fold communication reduction compared to gradient encryption while maintaining strong privacy guarantees. Through evaluation on a three-client federated setup for lung cancer histopathology classification, we show that gradients are highly susceptible to model inversion attacks (PSNR: 52.26 dB, SSIM: 0.999, NMI: 0.741), enabling near-perfect image reconstruction. In contrast, the proposed CLS-protected HE approach prevents such attacks while enabling encrypted inference directly on ciphertexts, requiring only 326 KB of encrypted data transmission per aggregation round. The framework achieves 96.12 percent global classification accuracy in the unencrypted domain and 90.02 percent in the encrypted domain.
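The core protocol in the abstract is simple to state: each client extracts a 768-dimensional ViT CLS token, encrypts it, and the server aggregates ciphertexts it cannot read. Below is a minimal pure-Python sketch of that aggregation flow, substituting additive zero-sum masking for CKKS so the arithmetic is visible without an HE library; the function names, the masking scheme, and the random placeholder tokens are illustrative assumptions, not the authors' implementation (a real deployment would use a CKKS library).

```python
# Toy sketch of CLS-token secure aggregation: zero-sum additive masks
# stand in for CKKS encryption, so the server only ever sees masked
# vectors, yet their sum equals the sum of the true CLS tokens.
import random

DIM = 768      # ViT CLS token dimensionality (from the abstract)
CLIENTS = 3    # three-client federated setup (from the abstract)

def make_zero_sum_masks(n_clients, dim, seed=0):
    """Pairwise masks that cancel exactly when summed over all clients."""
    rng = random.Random(seed)
    masks = [[0.0] * dim for _ in range(n_clients)]
    for i in range(n_clients):
        for j in range(i + 1, n_clients):
            for k in range(dim):
                r = rng.uniform(-1, 1)
                masks[i][k] += r   # client i adds r
                masks[j][k] -= r   # client j subtracts r -> cancels
    return masks

def mask(token, m):
    """'Encrypt': the server sees token + mask, never the raw token."""
    return [t + mi for t, mi in zip(token, m)]

# Each client extracts a CLS token locally (random placeholders here).
rng = random.Random(42)
tokens = [[rng.gauss(0, 1) for _ in range(DIM)] for _ in range(CLIENTS)]
masks = make_zero_sum_masks(CLIENTS, DIM)

# Server averages masked uploads; masks cancel, leaving the true mean.
uploads = [mask(t, m) for t, m in zip(tokens, masks)]
agg = [sum(u[k] for u in uploads) / CLIENTS for k in range(DIM)]
true_mean = [sum(t[k] for t in tokens) / CLIENTS for k in range(DIM)]
assert all(abs(a - b) < 1e-9 for a, b in zip(agg, true_mean))
print("aggregate matches plaintext mean")
```

The design point this illustrates is the paper's communication argument: a 768-dimensional token per client per round is a far smaller payload to encrypt and ship than full model gradients, which is where the claimed 30-fold reduction comes from.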
Related papers
- Zero-Knowledge Federated Learning with Lattice-Based Hybrid Encryption for Quantum-Resilient Medical AI [0.0]
Federated Learning (FL) enables collaborative training of medical AI models across hospitals without centralizing patient data. However, gradient inversion attacks can reconstruct patient information, Byzantine clients can poison the global model, and the "Harvest Now, Decrypt Later" (HNDL) threat renders today's encrypted traffic vulnerable to future quantum adversaries. We introduce Zero-Knowledge Federated Learning, Post-Quantum, a three-tiered cryptographic protocol that hybridizes (i) ML-KEM for quantum-resistant key encapsulation and (ii) lattice-based Zero-Knowledge Proofs for verifiable …
arXiv Detail & Related papers (2026-03-03T12:43:44Z) - Vision Token Masking Alone Cannot Prevent PHI Leakage in Medical Document OCR: A Systematic Evaluation [0.0]
Vision-language models (VLMs) are increasingly deployed for optical character recognition (OCR) in healthcare settings. This work presents the first systematic evaluation of inference-time vision token masking as a privacy-preserving mechanism for medical document OCR using DeepSeek-OCR.
arXiv Detail & Related papers (2025-11-23T03:45:22Z) - MedHE: Communication-Efficient Privacy-Preserving Federated Learning with Adaptive Gradient Sparsification for Healthcare [0.0]
This paper presents MedHE, a novel framework combining adaptive gradient sparsification with CKKS homomorphic encryption to enable privacy-preserving collaborative learning on sensitive medical data. Our approach introduces a dynamic threshold mechanism with error compensation for top-k gradient selection, achieving 97.5 percent communication reduction while preserving model utility.
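The mechanism this summary names, top-k gradient selection with error compensation, can be sketched in a few lines; the function name, the fixed k, and the example gradient below are illustrative assumptions, not MedHE's actual code (MedHE additionally derives the threshold dynamically and encrypts the surviving entries with CKKS).

```python
# Minimal sketch of top-k gradient sparsification with error feedback:
# only the k largest-magnitude entries are transmitted; everything
# dropped is accumulated into a local residual for the next round,
# so the error compensates rather than being lost.
def sparsify_with_feedback(grad, residual, k):
    corrected = [g + r for g, r in zip(grad, residual)]
    # Threshold = k-th largest magnitude of the error-corrected gradient.
    thresh = sorted((abs(c) for c in corrected), reverse=True)[k - 1]
    sparse, new_residual, kept = [], [], 0
    for c in corrected:
        if abs(c) >= thresh and kept < k:
            sparse.append(c)        # transmitted (then encrypted in MedHE)
            new_residual.append(0.0)
            kept += 1
        else:
            sparse.append(0.0)
            new_residual.append(c)  # error carried locally to next round
    return sparse, new_residual

grad = [0.9, -0.05, 0.4, 0.02, -0.7, 0.1]
residual = [0.0] * len(grad)
sparse, residual = sparsify_with_feedback(grad, residual, k=2)
print(sparse)  # [0.9, 0.0, 0.0, 0.0, -0.7, 0.0]
```

With k = 2 of 6 entries transmitted, only a third of the gradient crosses the wire each round, which is the lever behind the summary's 97.5 percent communication-reduction figure at much more aggressive sparsity.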
arXiv Detail & Related papers (2025-11-12T06:50:48Z) - A Vision-Language Pre-training Model-Guided Approach for Mitigating Backdoor Attacks in Federated Learning [43.847168319564844]
We propose an FL backdoor defense framework, named CLIP-Fed, that utilizes the zero-shot learning capabilities of vision-language pre-training models. Our scheme overcomes the limitations that Non-IID data imposes on defense effectiveness by integrating pre-aggregation and post-aggregation defense strategies.
arXiv Detail & Related papers (2025-08-14T03:39:54Z) - Conformal Prediction for Privacy-Preserving Machine Learning [83.88591755871734]
Using AES-encrypted variants of the MNIST dataset, we demonstrate that Conformal Prediction methods remain effective even when applied directly in the encrypted domain. Our work sets a foundation for principled uncertainty quantification in secure, privacy-aware learning systems.
arXiv Detail & Related papers (2025-07-13T15:29:14Z) - A Selective Homomorphic Encryption Approach for Faster Privacy-Preserving Federated Learning [2.942616054218564]
Federated learning (FL) has emerged as a critical approach for privacy-preserving machine learning in healthcare. Current security implementations for these systems face a fundamental trade-off: rigorous cryptographic protections impose prohibitive computational overhead. We present Fast and Secure Federated Learning, a novel approach that strategically combines selective homomorphic encryption, differential privacy, and bitwise scrambling to achieve robust security.
arXiv Detail & Related papers (2025-01-22T14:37:44Z) - Secure Semantic Communication With Homomorphic Encryption [52.5344514499035]
This paper explores the feasibility of applying homomorphic encryption to SemCom. We propose a task-oriented SemCom scheme secured through homomorphic encryption.
arXiv Detail & Related papers (2025-01-17T13:26:14Z) - A Multiparty Homomorphic Encryption Approach to Confidential Federated Kaplan Meier Survival Analysis [0.0]
We propose a multiparty homomorphic encryption-based framework for privacy-preserving federated Kaplan-Meier survival analysis. Our framework ensures encrypted survival estimates closely match centralized outcomes, supported by formal utility-loss bounds.
arXiv Detail & Related papers (2024-12-29T15:17:42Z) - ViT Enhanced Privacy-Preserving Secure Medical Data Sharing and Classification [8.140412831443454]
This research introduces a secure framework consisting of a learnable encryption method based on the block-pixel operation to encrypt the data and subsequently integrate it with the Vision Transformer (ViT).
The proposed framework ensures data privacy and security by creating unique scrambling patterns per key, providing robust performance against leading bit attacks and minimum difference attacks.
arXiv Detail & Related papers (2024-11-08T16:33:20Z) - Perfectly Secure Steganography Using Minimum Entropy Coupling [60.154855689780796]
We show that a steganography procedure is perfectly secure under Cachin's (1998) information-theoretic model of steganography.
We also show that, among perfectly secure procedures, a procedure maximizes information throughput if and only if it is induced by a minimum entropy coupling.
arXiv Detail & Related papers (2022-10-24T17:40:07Z) - Is Vertical Logistic Regression Privacy-Preserving? A Comprehensive Privacy Analysis and Beyond [57.10914865054868]
We consider vertical logistic regression (VLR) trained with mini-batch gradient descent.
We provide a comprehensive and rigorous privacy analysis of VLR in a class of open-source Federated Learning frameworks.
arXiv Detail & Related papers (2022-07-19T05:47:30Z) - Gradient Obfuscation Gives a False Sense of Security in Federated
Learning [41.36621813381792]
We present a new data reconstruction attack framework targeting the image classification task in federated learning.
Contrary to prior studies, we argue that privacy enhancement should not be treated as a byproduct of gradient compression.
arXiv Detail & Related papers (2022-06-08T13:01:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.