zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
- URL: http://arxiv.org/abs/2307.16273v2
- Date: Tue, 5 Dec 2023 19:42:53 GMT
- Title: zkDL: Efficient Zero-Knowledge Proofs of Deep Learning Training
- Authors: Haochen Sun, Tonghe Bai, Jason Li, Hongyang Zhang
- Abstract summary: zkDL is an efficient zero-knowledge proof for deep learning training.
zkReLU is a specialized proof for the ReLU activation and its backpropagation.
FAC4DNN is our specialized arithmetic circuit design for modelling neural networks.
- Score: 6.993329554241878
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent advancements in deep learning have brought about significant
changes in various aspects of people's lives. Meanwhile, these rapid
developments have raised concerns about the legitimacy of the training process
of deep neural networks. To protect the intellectual properties of AI
developers, directly examining the training process by accessing the model
parameters and training data is often prohibited for verifiers.
In response to this challenge, we present zero-knowledge deep learning
(zkDL), an efficient zero-knowledge proof for deep learning training. To
address the long-standing challenge of verifiable computations of
non-linearities in deep learning training, we introduce zkReLU, a specialized
proof for the ReLU activation and its backpropagation. zkReLU turns the
disadvantage of non-arithmetic relations into an advantage, leading to the
creation of FAC4DNN, our specialized arithmetic circuit design for modelling
neural networks. This design aggregates the proofs over different layers and
training steps, without being constrained by their sequential order in the
training process.
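As an aside on why such a reformulation helps, below is a minimal sketch, not the paper's actual zkReLU protocol: once a binary selector b is committed alongside each activation, both the ReLU forward pass and its backpropagation reduce to multiplicative relations that can be checked on committed tensors; the remaining sign-consistency condition is what real protocols discharge with bit decompositions or range checks. All names in the snippet are illustrative.

```python
# Minimal illustrative sketch (NOT the paper's zkReLU protocol): with an
# auxiliary binary witness b, ReLU and its backward pass become products.
import numpy as np

def relu_forward(x: np.ndarray):
    """y = ReLU(x), returned with the auxiliary selector witness b."""
    b = (x > 0).astype(x.dtype)   # witness: 1 where x > 0, else 0
    y = b * x                     # arithmetic relation: y = b * x
    return y, b

def relu_backward(dy: np.ndarray, b: np.ndarray):
    """Backpropagation reuses the same witness: dx = b * dy."""
    return b * dy

def check_relu_relations(x, y, b, dy, dx) -> bool:
    """Relations a verifier could check on committed tensors."""
    booleanity = np.all(b * (1 - b) == 0)      # b is 0 or 1
    forward    = np.array_equal(y, b * x)      # y = b * x
    backward   = np.array_equal(dx, b * dy)    # dx = b * dy
    # Sign consistency is checked in the clear here; a real ZK protocol would
    # enforce it with bit decompositions / range checks instead.
    sign = np.array_equal(b, (x > 0).astype(x.dtype))
    return bool(booleanity and forward and backward and sign)

# Toy usage with a small activation tensor and an upstream gradient of ones
x  = np.array([-2.0, -0.5, 0.0, 1.5, 3.0])
dy = np.ones_like(x)
y, b = relu_forward(x)
dx = relu_backward(dy, b)
assert check_relu_relations(x, y, b, dy, dx)
```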
With our new CUDA implementation that achieves full compatibility with the
tensor structures and the aggregated proof design, zkDL enables the generation
of complete and sound proofs in less than a second per batch update for an
8-layer neural network with 10M parameters and a batch size of 64, while
provably ensuring the privacy of data and model parameters. To the best of our
knowledge, no existing work on zero-knowledge proofs of deep learning training
scales to networks with millions of parameters.
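To picture the aggregation idea, the following is a hedged sketch, again not zkDL's actual circuit or commitment scheme: if every layer and training step contributes an independent algebraic claim about committed tensors, the claims can be folded with verifier-chosen random weights into a single aggregated check, and the per-claim work can be evaluated in any order or in parallel (for example, as batched GPU kernels). The field modulus and helper names are purely illustrative.

```python
# Illustrative sketch of random-linear-combination aggregation (NOT FAC4DNN
# itself): claims from different layers/steps are folded into one check.
import random

P = 2**61 - 1  # toy prime field modulus, for illustration only

def matvec_residual(W, x, y):
    """Residual of one toy claim 'y = W x (mod P)'; all zeros iff it holds."""
    return [(sum(w * xi for w, xi in zip(row, x)) - yi) % P
            for row, yi in zip(W, y)]

def aggregate(residuals, rng):
    """Random linear combination of residual vectors: zero (except with tiny
       probability) only if every individual residual is zero."""
    agg = [0] * len(residuals[0])
    for res in residuals:
        r = rng.randrange(1, P)
        agg = [(a + r * c) % P for a, c in zip(agg, res)]
    return agg

# Toy usage: the same matrix-vector claim standing in for 4 layers/steps; the
# residuals can be computed in any order (or in parallel) before aggregation.
rng = random.Random(0)
W = [[1, 2, 3], [4, 5, 6]]
x = [7, 8, 9]
y = [sum(w * xi for w, xi in zip(row, x)) % P for row in W]
residuals = [matvec_residual(W, x, y) for _ in range(4)]
assert not any(aggregate(residuals, rng))  # all toy claims hold
```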
Related papers
- DeepOSets: Non-Autoregressive In-Context Learning of Supervised Learning Operators [11.913853433712855]
In-context operator learning allows a trained machine learning model to learn from a user prompt without further training.
DeepOSets adds in-context learning capabilities to Deep Operator Networks (DeepONets) by combining them with the DeepSets architecture.
As the first non-autoregressive model for in-context operator learning, DeepOSets allows the user prompt to be processed in parallel.
arXiv Detail & Related papers (2024-10-11T23:07:19Z) - Towards Certified Unlearning for Deep Neural Networks [50.816473152067104]
Certified unlearning has been extensively studied in convex machine learning models.
We propose several techniques to bridge the gap between certified unlearning and deep neural networks (DNNs).
arXiv Detail & Related papers (2024-08-01T21:22:10Z) - SPIDE: A Purely Spike-based Method for Training Feedback Spiking Neural
Networks [56.35403810762512]
Spiking neural networks (SNNs) with event-based computation are promising brain-inspired models for energy-efficient applications on neuromorphic hardware.
We study spike-based implicit differentiation on the equilibrium state (SPIDE), which extends a recently proposed training method.
arXiv Detail & Related papers (2023-02-01T04:22:59Z) - Quantization-aware Interval Bound Propagation for Training Certifiably
Robust Quantized Neural Networks [58.195261590442406]
We study the problem of training and certifying adversarially robust quantized neural networks (QNNs).
Recent work has shown that floating-point neural networks that have been verified to be robust can become vulnerable to adversarial attacks after quantization.
We present quantization-aware interval bound propagation (QA-IBP), a novel method for training robust QNNs.
arXiv Detail & Related papers (2022-11-29T13:32:38Z) - Intelligence Processing Units Accelerate Neuromorphic Learning [52.952192990802345]
Spiking neural networks (SNNs) have achieved orders-of-magnitude improvements in energy consumption and latency.
We present an IPU-optimized release of our custom SNN Python package, snnTorch.
arXiv Detail & Related papers (2022-11-19T15:44:08Z) - Training Spiking Neural Networks with Local Tandem Learning [96.32026780517097]
Spiking neural networks (SNNs) are shown to be more biologically plausible and energy efficient than their predecessors.
In this paper, we put forward a generalized learning rule, termed Local Tandem Learning (LTL).
We demonstrate rapid network convergence within five training epochs on the CIFAR-10 dataset while having low computational complexity.
arXiv Detail & Related papers (2022-10-10T10:05:00Z) - Testing Feedforward Neural Networks Training Programs [13.249453757295083]
Multiple testing techniques are proposed to generate test cases that can expose inconsistencies in the behavior of Deep Neural Networks.
These techniques assume implicitly that the training program is bug-free and appropriately configured.
We propose TheDeepChecker, an end-to-end property-based debugging approach for DNN training programs.
arXiv Detail & Related papers (2022-04-01T20:49:14Z) - Improved architectures and training algorithms for deep operator
networks [0.0]
Operator learning techniques have emerged as a powerful tool for learning maps between infinite-dimensional Banach spaces.
We analyze the training dynamics of deep operator networks (DeepONets) through the lens of Neural Tangent Kernel (NTK) theory.
arXiv Detail & Related papers (2021-10-04T18:34:41Z) - Improving the Accuracy of Early Exits in Multi-Exit Architectures via
Curriculum Learning [88.17413955380262]
Multi-exit architectures allow deep neural networks to terminate their execution early in order to adhere to tight deadlines at the cost of accuracy.
We introduce a novel method called Multi-Exit Curriculum Learning that utilizes curriculum learning.
Our method consistently improves the accuracy of early exits compared to the standard training approach.
arXiv Detail & Related papers (2021-04-21T11:12:35Z) - Training Deep Neural Networks with Constrained Learning Parameters [4.917317902787792]
A significant portion of deep learning tasks is expected to run on edge computing systems.
We propose the Combinatorial Neural Network Training Algorithm (CoNNTrA)
CoNNTrA trains deep learning models with ternary learning parameters on the MNIST, Iris and ImageNet data sets.
Our results indicate that CoNNTrA models use 32x less memory and have errors on par with models trained by backpropagation.
arXiv Detail & Related papers (2020-09-01T16:20:11Z) - Deep Transfer Learning with Ridge Regression [7.843067454030999]
Deep models trained with massive amounts of data demonstrate promising generalisation ability on unseen data from relevant domains.
We address this issue by leveraging the low-rank property of learnt feature vectors produced by deep neural networks (DNNs) with the closed-form solution provided in kernel ridge regression (KRR).
Our method is successful on supervised and semi-supervised transfer learning tasks.
arXiv Detail & Related papers (2020-06-11T20:21:35Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences arising from its use.