Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
- URL: http://arxiv.org/abs/2210.08674v1
- Date: Mon, 17 Oct 2022 00:35:38 GMT
- Title: Scaling up Trustless DNN Inference with Zero-Knowledge Proofs
- Authors: Daniel Kang, Tatsunori Hashimoto, Ion Stoica, Yi Sun
- Abstract summary: We present the first practical ImageNet-scale method to verify ML model inference non-interactively, i.e., after the inference has been done.
We provide the first ZK-SNARK proof of valid inference for a full resolution ImageNet model, achieving 79% top-5 accuracy.
- Score: 47.42532753464726
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As ML models have increased in capabilities and accuracy, so has the
complexity of their deployments. Increasingly, ML model consumers are turning
to service providers to serve the ML models in the ML-as-a-service (MLaaS)
paradigm. As MLaaS proliferates, a critical requirement emerges: how can model
consumers verify that the correct predictions were served, in the face of
malicious, lazy, or buggy service providers?
In this work, we present the first practical ImageNet-scale method to verify
ML model inference non-interactively, i.e., after the inference has been done.
To do so, we leverage recent developments in ZK-SNARKs (zero-knowledge succinct
non-interactive arguments of knowledge), a form of zero-knowledge proof.
ZK-SNARKs allow us to verify ML model execution non-interactively and with
only standard cryptographic hardness assumptions. In particular, we provide the
first ZK-SNARK proof of valid inference for a full resolution ImageNet model,
achieving 79% top-5 accuracy. We further use these ZK-SNARKs to design
protocols to verify ML model execution in a variety of scenarios, including for
verifying MLaaS predictions, verifying MLaaS model accuracy, and using ML
models for trustless retrieval. Together, our results show that ZK-SNARKs have
the promise to make verified ML model inference practical.
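At a high level, the workflow described above pairs each served prediction with a succinct proof that the committed model weights produced that output, which the consumer checks without re-running the model. The sketch below illustrates only that protocol shape; the prove/verify interface, class names, and commitment scheme are hypothetical placeholders, not the paper's implementation.

```python
import hashlib
from dataclasses import dataclass

@dataclass
class Prediction:
    output: bytes             # serialized model output (e.g., logits)
    proof: bytes              # succinct proof that the output is a correct inference
    weight_commitment: bytes  # commitment binding the provider to fixed weights

class Provider:
    """MLaaS provider: runs the model and proves each inference was done correctly."""
    def __init__(self, model, snark):
        self.model = model
        self.snark = snark  # hypothetical ZK-SNARK backend exposing prove()/verify()
        self.weight_commitment = hashlib.sha256(model.serialized_weights()).digest()

    def serve(self, x: bytes) -> Prediction:
        y = self.model.run(x)
        # Statement: "y = Model_w(x) for weights w matching weight_commitment";
        # the weights themselves stay private as the witness.
        proof = self.snark.prove(
            statement=(x, y, self.weight_commitment),
            witness=self.model.serialized_weights(),
        )
        return Prediction(output=y, proof=proof, weight_commitment=self.weight_commitment)

class Consumer:
    """Model consumer: checks predictions without trusting or re-running the provider."""
    def __init__(self, snark, expected_commitment: bytes):
        self.snark = snark
        self.expected_commitment = expected_commitment  # agreed on before serving begins

    def accept(self, x: bytes, pred: Prediction) -> bool:
        if pred.weight_commitment != self.expected_commitment:
            return False  # provider is not using the agreed-upon model
        return self.snark.verify(
            statement=(x, pred.output, pred.weight_commitment),
            proof=pred.proof,
        )
```

Because verification touches only the proof and the commitment, the consumer's cost does not grow with the cost of the inference itself, which is what makes the non-interactive setting practical.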
Related papers
- A General Framework for Data-Use Auditing of ML Models [47.369572284751285]
We propose a general method to audit an ML model for the use of a data-owner's data in training.
We show the effectiveness of our proposed framework by applying it to audit data use in two types of ML models.
arXiv Detail & Related papers (2024-07-21T09:32:34Z)
- MLGuard: Defend Your Machine Learning Model! [3.4069804433026314]
We propose MLGuard, a new approach to specify contracts for Machine Learning applications.
Our work is intended to provide the overarching framework required for building ML applications and monitoring their safety.
arXiv Detail & Related papers (2023-09-04T06:08:11Z)
- ezDPS: An Efficient and Zero-Knowledge Machine Learning Inference Pipeline [2.0813318162800707]
We propose ezDPS, a new efficient and zero-knowledge Machine Learning inference scheme.
ezDPS is a zkML pipeline in which the data is processed in multiple stages for high accuracy.
We show that ezDPS is one to three orders of magnitude more efficient than the generic circuit-based approach on all metrics.
arXiv Detail & Related papers (2022-12-11T06:47:28Z)
- Predicting is not Understanding: Recognizing and Addressing Underspecification in Machine Learning [47.651130958272155]
Underspecification refers to the existence of multiple models that are indistinguishable in their in-domain accuracy.
We formalize the concept of underspecification and propose a method to identify and partially address it.
arXiv Detail & Related papers (2022-07-06T11:20:40Z)
- VisFIS: Visual Feature Importance Supervision with Right-for-the-Right-Reason Objectives [84.48039784446166]
We show that model FI supervision can meaningfully improve VQA model accuracy as well as performance on several Right-for-the-Right-Reason metrics.
Our best performing method, Visual Feature Importance Supervision (VisFIS), outperforms strong baselines on benchmark VQA datasets.
Predictions are more accurate when explanations are plausible and faithful, and not when they are plausible but not faithful.
arXiv Detail & Related papers (2022-06-22T17:02:01Z)
- Reducing Unintended Bias of ML Models on Tabular and Textual Data [5.503546193689538]
We revisit FixOut, a framework inspired by the "fairness through unawareness" approach to building fairer models.
We introduce several improvements such as automating the choice of FixOut's parameters.
We present several experimental results illustrating that FixOut improves process fairness in different classification settings.
arXiv Detail & Related papers (2021-08-05T14:55:56Z)
- MLDemon: Deployment Monitoring for Machine Learning Systems [10.074466859579571]
We propose a novel approach, MLDemon, for ML DEployment MONitoring.
MLDemon integrates both unlabeled features and a small amount of on-demand labeled examples over time to produce a real-time estimate.
On temporal datasets with diverse distribution drifts and models, MLDemon substantially outperforms existing monitoring approaches.
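Read as a monitoring loop, the idea is to track cheap statistics on unlabeled features and spend scarce label queries only when those statistics suggest the accuracy estimate may be stale. The snippet below is a schematic of that loop under assumed interfaces (`model.features`, `model.predict`, `label_oracle`); it is not the MLDemon algorithm or its guarantees.

```python
import numpy as np

def monitor(stream, model, label_oracle, drift_threshold=0.1):
    """Schematic deployment monitor: watch a drift statistic on unlabeled features
    and request a small batch of labels only when drift looks large.
    Illustrative sketch only, not the MLDemon policy."""
    reference, accuracy_estimate = None, None
    for xs in stream:                              # xs: array of shape (batch, d)
        stats = model.features(xs).mean(axis=0)    # unlabeled feature summary
        if reference is None:
            reference = stats
        drift = float(np.linalg.norm(stats - reference))
        if accuracy_estimate is None or drift > drift_threshold:
            ys = label_oracle(xs)                  # on-demand labels (expensive)
            accuracy_estimate = float((model.predict(xs) == ys).mean())
            reference = stats
        yield accuracy_estimate                    # real-time accuracy estimate
```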
arXiv Detail & Related papers (2021-04-28T07:59:10Z)
- Explaining and Improving Model Behavior with k Nearest Neighbor Representations [107.24850861390196]
We propose using k nearest neighbor representations to identify training examples responsible for a model's predictions.
We show that kNN representations are effective at uncovering learned spurious associations.
Our results indicate that the kNN approach makes the finetuned model more robust to adversarial inputs.
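Concretely, "kNN representations" here means looking up which training examples sit closest to a test input in the model's representation space and treating them as the examples responsible for the prediction. The helper below is a minimal sketch of that lookup, assuming precomputed embeddings; it is not the paper's exact procedure.

```python
import numpy as np

def nearest_training_examples(train_reprs, test_repr, k=5):
    """Return indices of the k training examples whose model representations are
    most similar (by cosine similarity) to the test example's representation.
    Minimal sketch of kNN-based attribution, not the paper's exact setup."""
    train_norm = train_reprs / np.linalg.norm(train_reprs, axis=1, keepdims=True)
    test_norm = test_repr / np.linalg.norm(test_repr)
    sims = train_norm @ test_norm          # cosine similarity to every training example
    return np.argsort(-sims)[:k]           # most similar first
```

Inspecting the retrieved neighbors is what surfaces spurious associations: if the nearest training examples share an artifact rather than label-relevant content, the model has likely latched onto that artifact.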
arXiv Detail & Related papers (2020-10-18T16:55:25Z)
- Model Assertions for Monitoring and Improving ML Models [26.90089824436192]
We propose a new abstraction, model assertions, that adapts the classical use of program assertions as a way to monitor and improve ML models.
Model assertions are arbitrary functions over a model's input and output that indicate when errors may be occurring.
We propose methods of using model assertions at all stages of ML system deployment, including runtime monitoring, validating labels, and continuously improving ML models.
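Since a model assertion is just a function of the model's inputs and outputs that flags likely errors, a small example makes the abstraction concrete. The assertion below, which flags a detector whose output "flickers" across consecutive frames, is an illustrative sketch, not code from the paper.

```python
def flicker_assertion(detections):
    """Model assertion over a sequence of per-frame detector outputs: flag frames
    where an object is present, then absent, then present again ('flickering').
    Returns the indices where the assertion fires; an empty list means it passes."""
    return [i for i in range(1, len(detections) - 1)
            if detections[i - 1] and not detections[i] and detections[i + 1]]

# Example: the object vanishes only in frame 2, so the assertion fires there.
assert flicker_assertion([True, True, False, True]) == [2]
```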
arXiv Detail & Related papers (2020-03-03T17:49:49Z)
- Meta-Learned Confidence for Few-shot Learning [60.6086305523402]
A popular transductive inference technique for few-shot metric-based approaches is to update the prototype of each class with the mean of the most confident query examples.
We propose to meta-learn the confidence for each query sample, to assign optimal weights to unlabeled queries.
We validate our few-shot learning model with meta-learned confidence on four benchmark datasets.
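The prototype update being described can be viewed as a confidence-weighted mean: each class prototype is mixed with the unlabeled query embeddings, weighted by how confidently each query is assigned to that class (weights the paper meta-learns). The function below is a simplified sketch of that update under assumed array shapes, not the paper's model.

```python
import numpy as np

def update_prototypes(prototypes, query_embeddings, confidences):
    """Refine class prototypes with a confidence-weighted mean of query embeddings.
    prototypes: (C, d), query_embeddings: (Q, d), confidences: (Q, C), where
    confidences[j, k] is the weight of query j for class k (meta-learned in the paper).
    Simplified sketch: each original prototype is kept with weight 1."""
    weighted_sum = confidences.T @ query_embeddings    # (C, d) confidence-weighted sums
    total_weight = confidences.sum(axis=0)[:, None]    # (C, 1) per-class weight mass
    return (prototypes + weighted_sum) / (1.0 + total_weight)
```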
arXiv Detail & Related papers (2020-02-27T10:22:17Z)
This list is automatically generated from the titles and abstracts of the papers in this site.