zkFL-Health: Blockchain-Enabled Zero-Knowledge Federated Learning for Medical AI Privacy
- URL: http://arxiv.org/abs/2512.21048v1
- Date: Wed, 24 Dec 2025 08:29:28 GMT
- Title: zkFL-Health: Blockchain-Enabled Zero-Knowledge Federated Learning for Medical AI Privacy
- Authors: Savvy Sharma, George Petrovic, Sarthak Kaushik
- Abstract summary: zkFL-Health is an architecture that combines Federated Learning (FL) with zero-knowledge proofs (ZKPs) and Trusted Execution Environments (TEEs). Clients locally train and commit their updates; the aggregator operates within a TEE to compute the global update and produces a succinct ZK proof that it used exactly the committed inputs and the correct aggregation rule, without revealing any client update to the host. We outline system and threat models tailored to healthcare, the zkFL-Health protocol, security/privacy guarantees, and a performance evaluation plan spanning accuracy, privacy risk, latency, and cost.
- Score: 0.0
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Healthcare AI needs large, diverse datasets, yet strict privacy and governance constraints prevent raw data sharing across institutions. Federated learning (FL) mitigates this by training where data reside and exchanging only model updates, but practical deployments still face two core risks: (1) privacy leakage via gradients or updates (membership inference, gradient inversion) and (2) trust in the aggregator, a single point of failure that can drop, alter, or inject contributions undetected. We present zkFL-Health, an architecture that combines FL with zero-knowledge proofs (ZKPs) and Trusted Execution Environments (TEEs) to deliver privacy-preserving, verifiably correct collaborative training for medical AI. Clients locally train and commit their updates; the aggregator operates within a TEE to compute the global update and produces a succinct ZK proof (via Halo2/Nova) that it used exactly the committed inputs and the correct aggregation rule, without revealing any client update to the host. Verifier nodes validate the proof and record cryptographic commitments on-chain, providing an immutable audit trail and removing the need to trust any single party. We outline system and threat models tailored to healthcare, the zkFL-Health protocol, security/privacy guarantees, and a performance evaluation plan spanning accuracy, privacy risk, latency, and cost. This framework enables multi-institutional medical AI with strong confidentiality, integrity, and auditability, key properties for clinical adoption and regulatory compliance.
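The commit-then-aggregate flow described in the abstract can be sketched as follows. This is a minimal illustration only, assuming SHA-256 hash commitments and unweighted FedAvg; the paper's actual construction uses succinct ZK proofs (Halo2/Nova) so that the verifier never sees the client updates, whereas this sketch re-executes the aggregation in the clear:

```python
# Illustrative sketch (NOT the paper's Halo2/Nova circuit): clients commit to
# their updates with SHA-256; a verifier re-checks that the global update was
# computed from exactly the committed inputs via plain FedAvg.
import hashlib
import json

def commit(update):
    # Binding commitment to a client's model update (a list of floats).
    return hashlib.sha256(json.dumps(update).encode()).hexdigest()

def fedavg(updates):
    # Unweighted federated averaging over equal-length update vectors.
    n = len(updates)
    return [sum(vals) / n for vals in zip(*updates)]

def verify(commitments, revealed_updates, claimed_global):
    # A real ZK proof avoids revealing updates; here we simply re-execute.
    if [commit(u) for u in revealed_updates] != commitments:
        return False  # aggregator dropped, altered, or injected an input
    return fedavg(revealed_updates) == claimed_global

client_updates = [[0.1, 0.2], [0.3, 0.4]]
cms = [commit(u) for u in client_updates]
global_update = fedavg(client_updates)
print(verify(cms, client_updates, global_update))  # True
```

Dropping or substituting any committed input makes `verify` fail, which is the integrity property the on-chain commitments and ZK proof provide without the reveal step.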
Related papers
- Trustworthy Blockchain-based Federated Learning for Electronic Health Records: Securing Participant Identity with Decentralized Identifiers and Verifiable Credentials [0.06372261626436676]
This paper proposes a Trustworthy Blockchain-based Federated Learning (TBFL) framework integrating Self-Sovereign Identity (SSI) standards. Our results show the framework neutralizes 100% of Sybil attacks, maintains robust predictive performance, and introduces negligible computational overhead. The approach provides a secure, scalable, and economically viable ecosystem for inter-institutional health data collaboration.
arXiv Detail & Related papers (2026-02-02T17:45:58Z)
- Blockchain-Enabled Explainable AI for Trusted Healthcare Systems [0.0]
This paper introduces a Blockchain-integrated Explainable AI Healthcare Framework (BXHF) for healthcare systems. We tackle two challenges confronting health information networks: safe data exchange and comprehensible AI-driven clinical decision-making. Our architecture incorporates blockchain, ensuring patient records are immutable, auditable, and tamper-proof.
arXiv Detail & Related papers (2025-09-18T14:17:19Z)
- Blockchain-Enabled Privacy-Preserving Second-Order Federated Edge Learning in Personalized Healthcare [1.859970493489417]
Federated learning (FL) has attracted increasing attention as a response to the security and privacy challenges of traditional cloud-centric machine learning models. First-order FL approaches face several challenges in personalized model training due to heterogeneous, non-independent and identically distributed (non-iid) data. Recently proposed second-order FL approaches maintain stability and consistency on non-iid datasets while improving personalized model training.
arXiv Detail & Related papers (2025-05-31T06:41:04Z)
- Communication-Efficient and Privacy-Adaptable Mechanism for Federated Learning [54.20871516148981]
We introduce the Communication-Efficient and Privacy-Adaptable Mechanism (CEPAM), which achieves communication efficiency and privacy protection simultaneously. We theoretically analyze the privacy guarantee of CEPAM and investigate the trade-off between user privacy and accuracy.
arXiv Detail & Related papers (2025-01-21T11:16:05Z)
- A Prototype Model of Zero-Trust Architecture Blockchain with EigenTrust-Based Practical Byzantine Fault Tolerance Protocol to Manage Decentralized Clinical Trials [5.565144088361576]
This paper proposes a prototype model of the Zero-Trust Architecture (z-TAB) to integrate patient-generated clinical trial data during DCT operation management.
The Internet of Things (IoT) has been integrated to streamline data processing among stakeholders within the blockchain platforms.
arXiv Detail & Related papers (2024-08-29T20:18:00Z)
- PriRoAgg: Achieving Robust Model Aggregation with Minimum Privacy Leakage for Federated Learning [49.916365792036636]
Federated learning (FL) has recently gained significant momentum due to its potential to leverage large-scale distributed user data. However, the transmitted model updates can leak sensitive user information, and the lack of central control over the local training process leaves the global model susceptible to malicious manipulation of model updates. We develop a general framework, PriRoAgg, utilizing Lagrange coded computing and distributed zero-knowledge proofs, to execute a wide range of robust aggregation algorithms while satisfying aggregated privacy.
arXiv Detail & Related papers (2024-07-12T03:18:08Z)
- Secure Aggregation is Not Private Against Membership Inference Attacks [66.59892736942953]
We investigate the privacy implications of SecAgg in federated learning.
We show that SecAgg offers weak privacy against membership inference attacks even in a single training round.
Our findings underscore the imperative for additional privacy-enhancing mechanisms, such as noise injection.
arXiv Detail & Related papers (2024-03-26T15:07:58Z)
- Blockchain-empowered Federated Learning for Healthcare Metaverses: User-centric Incentive Mechanism with Optimal Data Freshness [66.3982155172418]
We first design a user-centric privacy-preserving framework based on decentralized Federated Learning (FL) for healthcare metaverses.
We then utilize Age of Information (AoI) as an effective data-freshness metric and propose an AoI-based contract theory model under Prospect Theory (PT) to motivate sensing data sharing.
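As a minimal illustration of the data-freshness metric used above, Age of Information (AoI) measures the time elapsed since the generation of the freshest sample received from a client; it grows linearly until a newer sample arrives. The timestamps and units below are assumptions for illustration:

```python
# Minimal Age of Information (AoI) sketch: the age of the freshest update
# received from a sensing client, assuming timestamps in seconds.
def age_of_information(now, last_generation_time):
    # AoI grows linearly between updates and resets when a fresher one arrives.
    return now - last_generation_time

# e.g. latest health-sensing sample generated at t=100, evaluated at t=107
print(age_of_information(107, 100))  # 7
```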
arXiv Detail & Related papers (2023-07-29T12:54:03Z)
- Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensures fidelity to the source data, assesses utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z)
- Unraveling the Connections between Privacy and Certified Robustness in Federated Learning Against Poisoning Attacks [68.20436971825941]
Federated learning (FL) provides an efficient paradigm to jointly train a global model leveraging data from distributed users.
Several studies have shown that FL is vulnerable to poisoning attacks.
To protect the privacy of local users, FL is usually trained in a differentially private way.
arXiv Detail & Related papers (2022-09-08T21:01:42Z)
- BEAS: Blockchain Enabled Asynchronous & Secure Federated Machine Learning [0.0]
We present BEAS, the first blockchain-based framework for N-party Federated Learning.
It provides strict privacy guarantees of training data using gradient pruning.
Anomaly detection protocols are used to minimize the risk of data-poisoning attacks.
We also define a novel protocol to prevent premature convergence in heterogeneous learning environments.
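The gradient pruning mentioned above can be sketched as magnitude-based top-k sparsification, which limits how much of each client's gradient is exposed. BEAS's exact pruning rule is not specified in this summary, so the top-k criterion here is an assumption for illustration:

```python
# Hedged sketch of gradient pruning as a leakage-reduction step: keep only the
# k largest-magnitude components of a gradient and zero out the rest before
# sharing. (The BEAS paper's exact rule may differ.)
def prune_gradient(grad, k):
    # Indices of the k components with the largest absolute value.
    keep = set(sorted(range(len(grad)), key=lambda i: abs(grad[i]), reverse=True)[:k])
    return [g if i in keep else 0.0 for i, g in enumerate(grad)]

print(prune_gradient([0.5, -0.01, 0.3, 0.002], 2))  # [0.5, 0.0, 0.3, 0.0]
```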
arXiv Detail & Related papers (2022-02-06T17:11:14Z)
- A Privacy-Preserving and Trustable Multi-agent Learning Framework [34.28936739262812]
This paper presents Privacy-preserving and Trustable Distributed Learning (PT-DL).
PT-DL is a fully decentralized framework that relies on Differential Privacy to guarantee strong privacy protections of the agents' data.
The paper shows that PT-DL is resilient up to a 50% collusion attack, with high probability, in a malicious trust model.
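The differential-privacy step such frameworks rely on is typically clip-then-add-noise applied to each agent's update. PT-DL's exact mechanism and calibration are not given in this summary, so the Gaussian mechanism and the parameter names below are assumptions for illustration:

```python
# Illustrative clip-and-noise sketch of a DP-protected model update
# (the actual PT-DL mechanism and noise calibration may differ).
import math
import random

def dp_noisy_update(update, clip=1.0, sigma=0.5):
    # Clip the update to L2 norm <= clip, then add Gaussian noise
    # scaled to the clipping bound (the update's sensitivity).
    norm = math.sqrt(sum(x * x for x in update))
    scale = min(1.0, clip / norm) if norm > 0 else 1.0
    return [x * scale + random.gauss(0.0, sigma * clip) for x in update]
```

With `sigma=0.0` the function reduces to pure norm clipping, e.g. `dp_noisy_update([3.0, 4.0], clip=1.0, sigma=0.0)` returns the unit-norm vector `[0.6, 0.8]` (up to float rounding).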
arXiv Detail & Related papers (2021-06-02T15:46:27Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.