Verifiable Split Learning via zk-SNARKs
- URL: http://arxiv.org/abs/2511.01356v1
- Date: Mon, 03 Nov 2025 09:05:07 GMT
- Title: Verifiable Split Learning via zk-SNARKs
- Authors: Rana Alaa, Darío González-Ferreiro, Carlos Beis-Penedo, Manuel Fernández-Veiga, Rebeca P. Díaz-Redondo, Ana Fernández-Vilas
- Abstract summary: Split learning is an approach to collaborative learning in which a deep neural network is divided into two parts: client-side and server-side. This paper proposes a verifiable split learning framework that integrates a zk-SNARK proof to ensure correctness and verifiability.
- Score: 2.226920120094475
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Split learning is an approach to collaborative learning in which a deep neural network is divided at a cut layer into two parts: a client-side part and a server-side part. The client executes its part of the model on its raw input data and sends the intermediate activation to the server. This architecture is useful for enabling collaborative training when data or resources are split across devices. However, split learning lacks the ability to verify the correctness and honesty of the computations performed and exchanged between the parties. To this end, this paper proposes a verifiable split learning framework that integrates zk-SNARK proofs to ensure correctness and verifiability. Proofs are generated and verified for the forward propagation on both sides and for the backward propagation on the server side, guaranteeing verifiability for both parties. The verifiable split learning architecture is compared to a blockchain-enabled system for the same deep learning network, one that records updates but does not generate zero-knowledge proofs. From this comparison it can be concluded that applying zk-SNARK proofs achieves verifiability and correctness, whereas the blockchain approach is lightweight but unverifiable.
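To make the message flow concrete, the sketch below runs one split-learning step with placeholder proof hooks. The `generate_proof` and `verify_proof` functions are hypothetical stand-ins for a zk-SNARK prover and verifier (the abstract does not name a proving system), and the layer shapes are invented for illustration; this is not the paper's implementation.

```python
import torch
import torch.nn as nn

# Client holds the layers up to the cut layer; the server holds the rest.
client_model = nn.Sequential(nn.Linear(784, 256), nn.ReLU())
server_model = nn.Sequential(nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 10))

def generate_proof(step, inputs, outputs):
    """Hypothetical zk-SNARK prover: attest that `outputs` were computed
    from `inputs` by the agreed-upon circuit. A real system would compile
    the layer computation into an arithmetic circuit and prove it."""
    return {"step": step, "ok": True}

def verify_proof(proof):
    """Hypothetical zk-SNARK verifier."""
    return proof.get("ok", False)

x = torch.randn(32, 784)                 # raw data never leaves the client
smashed = client_model(x)                # intermediate activation at the cut layer
client_proof = generate_proof("client_forward", x, smashed)

# Server side: check the client's proof before consuming the activations.
assert verify_proof(client_proof)
logits = server_model(smashed)
server_proof = generate_proof("server_forward", smashed, logits)

loss = nn.functional.cross_entropy(logits, torch.randint(0, 10, (32,)))
loss.backward()                          # server-side backward pass; the paper
backward_proof = generate_proof("server_backward", logits, loss)  # proves this step too
```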
Related papers
- Proof of Reasoning for Privacy Enhanced Federated Blockchain Learning at the Edge [6.952864017722625]
This paper introduces Proof of Reasoning (PoR), a novel consensus mechanism specifically designed for federated learning using blockchain. Unlike generic blockchain consensus mechanisms commonly found in the literature, PoR integrates three distinct processes tailored for federated learning. PoR scales to large IoT networks with low latency and storage growth, and adapts to evolving data, regulations, and network conditions.
arXiv Detail & Related papers (2026-01-12T01:57:17Z) - CycleSL: Server-Client Cyclical Update Driven Scalable Split Learning [60.59553507555341]
We introduce CycleSL, a novel aggregation-free split learning framework. Inspired by alternating block coordinate descent, CycleSL treats server-side training as an independent higher-level machine learning task. Our empirical findings highlight the effectiveness of CycleSL in enhancing model performance.
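To illustrate the alternating pattern this summary describes, here is a loose sketch in the spirit of block coordinate descent: the server is first optimized on detached client features as its own learning task, then the client is updated against the refreshed server. The feature hand-off and update schedule are assumptions drawn from this summary, not CycleSL's actual algorithm.

```python
import torch
import torch.nn as nn

client = nn.Linear(16, 8)             # client-side sub-network (up to the cut)
server = nn.Linear(8, 2)              # server-side sub-network (after the cut)
opt_client = torch.optim.SGD(client.parameters(), lr=0.1)
opt_server = torch.optim.SGD(server.parameters(), lr=0.1)
x, y = torch.randn(4, 16), torch.randint(0, 2, (4,))

for _ in range(3):
    # Block 1: treat server training as its own task on detached features.
    feats = client(x).detach()
    loss_server = nn.functional.cross_entropy(server(feats), y)
    opt_server.zero_grad(); loss_server.backward(); opt_server.step()

    # Block 2: update the client end-to-end against the refreshed server;
    # only the client optimizer steps here.
    loss_client = nn.functional.cross_entropy(server(client(x)), y)
    opt_client.zero_grad(); loss_client.backward(); opt_client.step()
```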
arXiv Detail & Related papers (2025-11-23T21:00:21Z) - SafeSplit: A Novel Defense Against Client-Side Backdoor Attacks in Split Learning (Full Version) [53.16528046390881]
Split Learning (SL) is a distributed deep learning approach enabling multiple clients and a server to collaboratively train and infer on a shared deep neural network (DNN). This paper presents SafeSplit, the first defense against client-side backdoor attacks in Split Learning (SL). It uses a two-fold analysis to identify client-induced changes and detect poisoned models.
arXiv Detail & Related papers (2025-01-11T22:20:20Z) - SplitFedZip: Learned Compression for Data Transfer Reduction in Split-Federated Learning [5.437298646956505]
SplitFederated (SplitFed) learning is a learning framework applicable across various domains. SplitFedZip is a novel method that employs learned compression to reduce data transfer in SplitFed learning.
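The general mechanism, sketched below under assumed sizes, is to pass cut-layer activations through a learned encoder before transmission and reconstruct them on the server; the encoder/decoder pair here is illustrative, not SplitFedZip's actual codec.

```python
import torch
import torch.nn as nn

encoder = nn.Linear(256, 32)       # client-side: compress before sending
decoder = nn.Linear(32, 256)       # server-side: reconstruct on arrival

smashed = torch.randn(8, 256)      # cut-layer activations ("smashed data")
code = encoder(smashed)            # 8x fewer values cross the network
restored = decoder(code)

# Training would minimize reconstruction error (alongside the task loss)
# so that compression costs little accuracy.
recon_loss = nn.functional.mse_loss(restored, smashed)
```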
arXiv Detail & Related papers (2024-12-18T19:04:19Z) - Blockchain-enabled Trustworthy Federated Unlearning [50.01101423318312]
Federated unlearning is a promising paradigm for protecting the data ownership of distributed clients.
Existing works require central servers to retain the historical model parameters from distributed clients.
This paper proposes a new blockchain-enabled trustworthy federated unlearning framework.
arXiv Detail & Related papers (2024-01-29T07:04:48Z) - Scalable Collaborative Learning via Representation Sharing [53.047460465980144]
Federated learning (FL) and Split Learning (SL) are two frameworks that enable collaborative learning while keeping the data private (on device).
In FL, each data holder trains a model locally and releases it to a central server for aggregation.
In SL, the clients must release individual cut-layer activations (smashed data) to the server and wait for its response (during both inference and backpropagation).
In this work, we present a novel approach for privacy-preserving machine learning, where the clients collaborate via online knowledge distillation using a contrastive loss.
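As a rough sketch of such a contrastive objective (the standard InfoNCE form, which may differ in detail from the paper's loss), each client representation is pulled toward its matching peer representation and pushed away from the rest of the batch:

```python
import torch
import torch.nn.functional as F

def contrastive_loss(z_anchor, z_peer, temperature=0.1):
    # Normalize so dot products become cosine similarities.
    z_anchor = F.normalize(z_anchor, dim=1)
    z_peer = F.normalize(z_peer, dim=1)
    # Row i should be most similar to peer row i (the positive pair).
    logits = z_anchor @ z_peer.t() / temperature
    targets = torch.arange(z_anchor.size(0))
    return F.cross_entropy(logits, targets)

# Representations of the same samples from two collaborating parties.
loss = contrastive_loss(torch.randn(8, 64), torch.randn(8, 64))
```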
arXiv Detail & Related papers (2022-11-20T10:49:22Z) - UnSplit: Data-Oblivious Model Inversion, Model Stealing, and Label Inference Attacks Against Split Learning [0.0]
The split learning framework aims to split the model between the client and the server.
We show that the split learning paradigm can pose serious security risks and provides no more than a false sense of security.
arXiv Detail & Related papers (2021-08-20T07:39:16Z) - Secure Distributed Training at Scale [65.7538150168154]
Training in the presence of peers requires specialized distributed training algorithms with Byzantine tolerance.
We propose a novel protocol for secure (Byzantine-tolerant) decentralized training that emphasizes communication efficiency.
arXiv Detail & Related papers (2021-06-21T17:00:42Z) - Comparison of Privacy-Preserving Distributed Deep Learning Methods in Healthcare [0.0]
In this paper, we compare three privacy-preserving distributed learning techniques: federated learning, split learning, and SplitFed.
We use these techniques to develop binary classification models for detecting tuberculosis from chest X-rays.
We propose a novel distributed learning architecture called SplitFedv3, which performs better than split learning and SplitFedv2 in our experiments.
arXiv Detail & Related papers (2020-12-23T10:45:52Z) - Learning to Match Jobs with Resumes from Sparse Interaction Data using Multi-View Co-Teaching Network [83.64416937454801]
Job-resume interaction data is sparse and noisy, which degrades the performance of job-resume matching algorithms.
We propose a novel multi-view co-teaching network that learns job-resume matching from sparse interaction data.
Our model is able to outperform state-of-the-art methods for job-resume matching.
arXiv Detail & Related papers (2020-09-25T03:09:54Z) - Proof of Learning (PoLe): Empowering Machine Learning with Consensus Building on Blockchains [7.854034211489588]
We propose a new consensus mechanism, Proof of Learning (PoLe), which directs the computation spent on consensus toward the optimization of neural networks (NNs).
In our mechanism, the training/testing data are released to the entire blockchain network (BCN) and the consensus nodes train NN models on the data.
We show that PoLe can achieve a more stable block generation rate, which leads to more efficient transaction processing.
arXiv Detail & Related papers (2020-07-29T22:53:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.