A Survey of Zero-Knowledge Proof Based Verifiable Machine Learning
- URL: http://arxiv.org/abs/2502.18535v1
- Date: Tue, 25 Feb 2025 05:04:27 GMT
- Title: A Survey of Zero-Knowledge Proof Based Verifiable Machine Learning
- Authors: Zhizhi Peng, Taotao Wang, Chonghe Zhao, Guofu Liao, Zibin Lin, Yifeng Liu, Bin Cao, Long Shi, Qing Yang, Shengli Zhang
- Abstract summary: Zero-knowledge proof (ZKP) technology enables effective validation of model performance and authenticity in both training and inference processes without disclosing sensitive data. ZKP ensures the verifiability and security of machine learning models, making it a valuable tool for privacy-preserving AI. This survey paper aims to bridge the gap by reviewing and analyzing all the existing Zero-Knowledge Machine Learning (ZKML) research from June 2017 to December 2024.
- Score: 11.935644882980233
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: As machine learning technologies advance rapidly across various domains, concerns over data privacy and model security have grown significantly. These challenges are particularly pronounced when models are trained and deployed on cloud platforms or third-party servers due to the computational resource limitations of users' end devices. In response, zero-knowledge proof (ZKP) technology has emerged as a promising solution, enabling effective validation of model performance and authenticity in both training and inference processes without disclosing sensitive data. Thus, ZKP ensures the verifiability and security of machine learning models, making it a valuable tool for privacy-preserving AI. Although some research has explored verifiable machine learning solutions that exploit ZKP, a comprehensive survey and summary of these efforts remains absent. This survey paper aims to bridge this gap by reviewing and analyzing all the existing Zero-Knowledge Machine Learning (ZKML) research from June 2017 to December 2024. We begin by introducing the concept of ZKML and outlining its ZKP algorithmic setups under three key categories: verifiable training, verifiable inference, and verifiable testing. Next, we provide a comprehensive categorization of existing ZKML research within these categories and analyze the works in detail. Furthermore, we explore the implementation challenges faced in this field and discuss the improvement works to address these obstacles. Additionally, we highlight several commercial applications of ZKML technology. Finally, we propose promising directions for future advancements in this domain.
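The core problem the abstract describes, a client with limited resources checking work done by an untrusted server, can be illustrated with a toy verifiable-computation sketch. This example is not from the paper and is not zero-knowledge: it uses Freivalds' algorithm to cheaply verify a claimed matrix product, such as the output of a neural network's linear layer computed off-device. All names (`freivalds_verify`, `W`, `X`) are illustrative.

```python
import numpy as np

def freivalds_verify(A, B, C, rounds=20):
    """Probabilistically check that A @ B == C without recomputing the product.

    Each round draws a random 0/1 vector r and compares A @ (B @ r)
    against C @ r, costing O(n^2) per round instead of the O(n^3) product.
    A wrong C passes a single round with probability at most 1/2, so the
    overall error probability is bounded by 2**-rounds.
    """
    n = C.shape[1]
    for _ in range(rounds):
        r = np.random.randint(0, 2, size=(n, 1))
        if not np.allclose(A @ (B @ r), C @ r):
            return False  # mismatch caught: the claimed product is wrong
    return True

np.random.seed(1)  # fixed seed so the demo is reproducible

# An untrusted server claims C is the output of a linear layer W @ X.
W = np.random.standard_normal((64, 32))   # model weights
X = np.random.standard_normal((32, 16))   # input batch
C_honest = W @ X
C_forged = C_honest.copy()
C_forged[0, 0] += 1.0                     # a single tampered entry

print(freivalds_verify(W, X, C_honest))   # True: an honest product always passes
print(freivalds_verify(W, X, C_forged))   # False, except with probability 2**-20
```

Real ZKML systems go much further, proving entire training or inference runs inside SNARK/STARK circuits while also hiding the witness, but the asymmetry is the same: verification is far cheaper than redoing the computation.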
Related papers
- Verification of Machine Unlearning is Fragile [48.71651033308842]
We introduce two novel adversarial unlearning processes capable of circumventing both types of verification strategies.
This study highlights the vulnerabilities and limitations in machine unlearning verification, paving the way for further research into the safety of machine unlearning.
arXiv Detail & Related papers (2024-08-01T21:37:10Z) - Learn while Unlearn: An Iterative Unlearning Framework for Generative Language Models [52.03511469562013]
We introduce the Iterative Contrastive Unlearning (ICU) framework, which consists of three core components. A Knowledge Unlearning Induction module targets specific knowledge for removal using an unlearning loss. A Contrastive Learning Enhancement module preserves the model's expressive capabilities against the pure unlearning goal. An Iterative Unlearning Refinement module dynamically adjusts the unlearning process through ongoing evaluation and updates.
arXiv Detail & Related papers (2024-07-25T07:09:35Z) - Vulnerability Detection in Smart Contracts: A Comprehensive Survey [10.076412566428756]
This study examines the potential of machine learning techniques to improve the detection and mitigation of vulnerabilities in smart contracts.
We analysed 88 articles published between 2018 and 2023 from the following databases: IEEE, ACM, ScienceDirect, Scopus, and Google Scholar.
The findings reveal that classical machine learning techniques, including KNN, RF, DT, XG-Boost, and SVM, outperform static tools in vulnerability detection.
arXiv Detail & Related papers (2024-07-08T11:51:15Z) - Machine Unlearning for Traditional Models and Large Language Models: A Short Survey [11.539080008361662]
Machine unlearning aims to delete data and reduce its impact on models according to user requests.
This paper categorizes and investigates unlearning on both traditional models and Large Language Models (LLMs).
arXiv Detail & Related papers (2024-04-01T16:08:18Z) - The Frontier of Data Erasure: Machine Unlearning for Large Language Models [56.26002631481726]
Large Language Models (LLMs) are foundational to AI advancements.
LLMs pose risks by potentially memorizing and disseminating sensitive, biased, or copyrighted information.
Machine unlearning emerges as a cutting-edge solution to mitigate these concerns.
arXiv Detail & Related papers (2024-03-23T09:26:15Z) - Exploring Federated Unlearning: Analysis, Comparison, and Insights [101.64910079905566]
Federated unlearning enables the selective removal of data from models trained in federated systems. This paper examines existing federated unlearning approaches, analyzing their algorithmic efficiency, impact on model accuracy, and effectiveness in preserving privacy. We propose the OpenFederatedUnlearning framework, a unified benchmark for evaluating federated unlearning methods.
arXiv Detail & Related papers (2023-10-30T01:34:33Z) - Zero-knowledge Proof Meets Machine Learning in Verifiability: A Survey [19.70499936572449]
High-quality models rely not only on efficient optimization algorithms but also on the training and learning processes built upon vast amounts of data and computational power.
Due to various challenges such as limited computational resources and data privacy concerns, users in need of models often cannot train machine learning models locally.
This paper presents a comprehensive survey of zero-knowledge proof-based verifiable machine learning (ZKP-VML) technology.
arXiv Detail & Related papers (2023-10-23T12:15:23Z) - Machine Unlearning: Solutions and Challenges [21.141664917477257]
Machine learning models may inadvertently memorize sensitive, unauthorized, or malicious data, posing risks of privacy breaches, security vulnerabilities, and performance degradation.
To address these issues, machine unlearning has emerged as a critical technique to selectively remove specific training data points' influence on trained models.
This paper provides a comprehensive taxonomy and analysis of the solutions in machine unlearning.
arXiv Detail & Related papers (2023-08-14T10:45:51Z) - A Survey on Automated Software Vulnerability Detection Using Machine Learning and Deep Learning [19.163031235081565]
Machine Learning (ML) and Deep Learning (DL) based models for detecting vulnerabilities in source code have been presented in recent years.
It may be difficult to discover gaps in existing research and potential for future improvement without a comprehensive survey.
This work addresses this gap by presenting a systematic survey to characterize various features of ML/DL-based source-code-level software vulnerability detection approaches.
arXiv Detail & Related papers (2023-06-20T16:51:59Z) - Machine Unlearning: A Survey [56.79152190680552]
A special need has arisen where, due to privacy, usability, and/or the right to be forgotten, information about some specific samples needs to be removed from a model, called machine unlearning.
This emerging technology has drawn significant interest from both academics and industry due to its innovation and practicality.
No study has analyzed this complex topic or compared the feasibility of existing unlearning solutions in different kinds of scenarios.
The survey concludes by highlighting some of the outstanding issues with unlearning techniques, along with some feasible directions for new research opportunities.
arXiv Detail & Related papers (2023-06-06T10:18:36Z) - A Survey of Machine Unlearning [56.017968863854186]
Recent regulations now require that, on request, private information about a user must be removed from computer systems.
ML models often 'remember' the old data.
Recent works on machine unlearning have not been able to completely solve the problem.
arXiv Detail & Related papers (2022-09-06T08:51:53Z) - A review of Federated Learning in Intrusion Detection Systems for IoT [0.15469452301122172]
Intrusion detection systems are evolving into intelligent systems that perform data analysis searching for anomalies in their environment.
Deep learning technologies opened the door to build more complex and effective threat detection models.
Current approaches rely on powerful centralized servers that receive data from all their parties.
This paper focuses on the application of Federated Learning approaches in the field of Intrusion Detection.
arXiv Detail & Related papers (2022-04-26T17:00:07Z) - A Survey on Large-scale Machine Learning [67.6997613600942]
Machine learning can provide deep insights into data, allowing machines to make high-quality predictions.
Most sophisticated machine learning approaches suffer from huge time costs when operating on large-scale data.
Large-scale Machine Learning aims to learn patterns from big data with comparable performance efficiently.
arXiv Detail & Related papers (2020-08-10T06:07:52Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.