Enhancing Trust in AI Marketplaces: Evaluating On-Chain Verification of Personalized AI models using zk-SNARKs
- URL: http://arxiv.org/abs/2504.04794v1
- Date: Mon, 07 Apr 2025 07:38:29 GMT
- Title: Enhancing Trust in AI Marketplaces: Evaluating On-Chain Verification of Personalized AI models using zk-SNARKs
- Authors: Nishant Jagannath, Christopher Wong, Braden Mcgrath, Md Farhad Hossain, Asuquo A. Okon, Abbas Jamalipour, Kumudu S. Munasinghe
- Abstract summary: This paper addresses the challenge of verifying personalized AI models in decentralized environments. We propose a novel framework that integrates zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) with Chainlink decentralized oracles. Our results indicate the framework's efficacy, with key metrics including proof generation taking an average of 233.63 seconds and verification time of 61.50 seconds.
- Score: 8.458944388986067
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: The rapid advancement of artificial intelligence (AI) has brought about sophisticated models capable of various tasks ranging from image recognition to natural language processing. As these models continue to grow in complexity, ensuring their trustworthiness and transparency becomes critical, particularly in decentralized environments where traditional trust mechanisms are absent. This paper addresses the challenge of verifying personalized AI models in such environments, focusing on their integrity and privacy. We propose a novel framework that integrates zero-knowledge succinct non-interactive arguments of knowledge (zk-SNARKs) with Chainlink decentralized oracles to verify AI model performance claims on blockchain platforms. Our key contribution lies in integrating zk-SNARKs with Chainlink oracles to securely fetch and verify external data to enable trustless verification of AI models on a blockchain. Our approach addresses the limitations of using unverified external data for AI verification on the blockchain while preserving sensitive information of AI models and enhancing transparency. We demonstrate our methodology with a linear regression model predicting Bitcoin prices using on-chain data verified on the Sepolia testnet. Our results indicate the framework's efficacy, with key metrics including proof generation taking an average of 233.63 seconds and verification time of 61.50 seconds. This research paves the way for transparent and trustless verification processes in blockchain-enabled AI ecosystems, addressing key challenges such as model integrity and model privacy protection. The proposed framework, while exemplified with linear regression, is designed for broader applicability across more complex AI models, setting the stage for future advancements in transparent AI verification.
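The verification flow described in the abstract — keep the model private, publish a performance claim computed on oracle-fetched public data, and let anyone check that claim — can be sketched with a minimal Python stand-in. Here a hash commitment substitutes for the zk-SNARK (so, unlike the real framework, verification requires revealing the weights), and the model, test data, and tolerance are all illustrative values, not taken from the paper:

```python
import hashlib
import json

def commit(weights):
    # Hash commitment to the model parameters. This stands in for the
    # zk-SNARK's hiding of the model; a real proof would never reveal them.
    return hashlib.sha256(json.dumps(weights).encode()).hexdigest()

def predict(weights, x):
    w, b = weights
    return w * x + b

def mse(weights, data):
    # Mean squared error of the linear model on (x, y) pairs
    return sum((predict(weights, x) - y) ** 2 for x, y in data) / len(data)

# Prover side: publish a commitment to the model and a claimed MSE
weights = [2.0, 1.0]                                 # hypothetical model y = 2x + 1
public_data = [(1.0, 3.1), (2.0, 4.9), (3.0, 7.2)]   # stand-in for oracle-fetched data
commitment = commit(weights)
claimed_mse = mse(weights, public_data)

# Verifier side: check the revealed weights against both the commitment
# and the claimed performance (a zk-SNARK would avoid this reveal step)
def verify(revealed_weights, commitment, claimed_mse, data, tol=1e-9):
    if commit(revealed_weights) != commitment:
        return False
    return abs(mse(revealed_weights, data) - claimed_mse) < tol

assert verify(weights, commitment, claimed_mse, public_data)
```

In the paper's actual design, the verifier checks a succinct proof on-chain instead of recomputing the metric, and a Chainlink oracle supplies the external test data.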
Related papers
- Blockchain As a Platform For Artificial Intelligence (AI) Transparency [0.0]
The "black box" problem in AI decision-making limits stakeholders' ability to understand, trust, and verify outcomes.
This paper explores the integration of blockchain with AI to improve decision traceability, provenance data, and model accountability.
Findings suggest that blockchain could be a technology for ensuring AI systems remain accountable, ethical, and aligned with regulatory standards.
arXiv Detail & Related papers (2025-03-07T01:57:26Z) - BotDetect: A Decentralized Federated Learning Framework for Detecting Financial Bots on the EVM Blockchains [3.4636217357968904]
This paper presents a decentralized federated learning (DFL) approach for detecting financial bots within Ethereum Virtual Machine (EVM)-based blockchains.
The proposed framework leverages federated learning, orchestrated through smart contracts, to detect malicious bot behavior.
Experimental results show that our DFL framework achieves high detection accuracy while maintaining scalability and robustness.
arXiv Detail & Related papers (2025-01-21T13:15:43Z) - SoK: Decentralized AI (DeAI) [4.651101982820699]
We present a Systematization of Knowledge (SoK) for blockchain-based DeAI solutions. We propose a taxonomy to classify existing DeAI protocols based on the model lifecycle. We investigate how blockchain features contribute to enhancing the security, transparency, and trustworthiness of AI processes.
arXiv Detail & Related papers (2024-11-26T14:28:25Z) - Proof of Quality: A Costless Paradigm for Trustless Generative AI Model Inference on Blockchains [24.934767209724335]
Generative AI models have demonstrated powerful and disruptive capabilities in natural language and image tasks.
However, deploying these models in decentralized environments remains challenging.
We present a new inference paradigm called proof of quality (PoQ) to enable the deployment of arbitrarily large generative models on blockchain architecture.
arXiv Detail & Related papers (2024-05-28T08:00:54Z) - AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving [68.73885845181242]
We propose an Automatic Data Engine (AIDE) that automatically identifies issues, efficiently curates data, improves the model through auto-labeling, and verifies the model through generation of diverse scenarios.
We further establish a benchmark for open-world detection on AV datasets to comprehensively evaluate various learning paradigms, demonstrating our method's superior performance at a reduced cost.
arXiv Detail & Related papers (2024-03-26T04:27:56Z) - Trust the Process: Zero-Knowledge Machine Learning to Enhance Trust in Generative AI Interactions [1.3688201404977818]
It explores using cryptographic techniques, particularly Zero-Knowledge Proofs (ZKPs), to address concerns regarding performance fairness and accuracy.
Applying ZKPs to Machine Learning models, known as ZKML (Zero-Knowledge Machine Learning), enables independent validation of AI-generated content.
We introduce snarkGPT, a practical ZKML implementation for transformers, to empower users to verify output accuracy and quality while preserving model privacy.
arXiv Detail & Related papers (2024-02-09T14:00:16Z) - Generative AI-enabled Blockchain Networks: Fundamentals, Applications, and Case Study [73.87110604150315]
Generative Artificial Intelligence (GAI) has emerged as a promising solution to address challenges of blockchain technology.
In this paper, we first introduce GAI techniques, outline their applications, and discuss existing solutions for integrating GAI into blockchains.
arXiv Detail & Related papers (2024-01-28T10:46:17Z) - Auditing and Generating Synthetic Data with Controllable Trust Trade-offs [54.262044436203965]
We introduce a holistic auditing framework that comprehensively evaluates synthetic datasets and AI models.
It focuses on preventing bias and discrimination, ensuring fidelity to the source data, and assessing utility, robustness, and privacy preservation.
We demonstrate the framework's effectiveness by auditing various generative models across diverse use cases.
arXiv Detail & Related papers (2023-04-21T09:03:18Z) - AI Maintenance: A Robustness Perspective [91.28724422822003]
We highlight robustness challenges in the AI lifecycle and motivate AI maintenance by analogy to car maintenance.
We propose an AI model inspection framework to detect and mitigate robustness risks.
Our proposal for AI maintenance facilitates robustness assessment, status tracking, risk scanning, model hardening, and regulation throughout the AI lifecycle.
arXiv Detail & Related papers (2023-01-08T15:02:38Z) - Data-Driven and SE-assisted AI Model Signal-Awareness Enhancement and Introspection [61.571331422347875]
We propose a data-driven approach to enhance models' signal-awareness.
We combine the SE concept of code complexity with the AI technique of curriculum learning.
We achieve up to 4.8x improvement in model signal awareness.
arXiv Detail & Related papers (2021-11-10T17:58:18Z) - Trustworthy AI [75.99046162669997]
Brittleness to minor adversarial changes in the input data, limited ability to explain decisions, and bias in training data are among the most prominent limitations.
We propose the tutorial on Trustworthy AI to address six critical issues in enhancing user and public trust in AI systems.
arXiv Detail & Related papers (2020-11-02T20:04:18Z) - Privacy-preserving Traffic Flow Prediction: A Federated Learning Approach [61.64006416975458]
We propose a privacy-preserving machine learning technique named Federated Learning-based Gated Recurrent Unit neural network algorithm (FedGRU) for traffic flow prediction.
FedGRU differs from current centralized learning methods and updates universal learning models through a secure parameter aggregation mechanism.
It is shown that FedGRU attains a prediction accuracy of 90.96%, higher than advanced deep learning models.
arXiv Detail & Related papers (2020-03-19T13:07:49Z)
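The "secure parameter aggregation mechanism" mentioned for FedGRU can be illustrated with a toy pairwise-masking scheme: each pair of clients shares a random mask that one adds and the other subtracts, so the server sees only masked individual updates while the masks cancel in the average. This is a generic secure-aggregation sketch, not FedGRU's actual protocol, and all values below are made up:

```python
import random

def federated_average_with_masks(client_updates, seed=0):
    """Toy secure aggregation over scalar updates.

    Each pair (i, j) shares a random mask m: client i adds m, client j
    subtracts it. Individual masked updates hide the raw values, but the
    masks cancel when summed, so the server recovers the true average.
    (Illustrative only; FedGRU's real aggregation protocol may differ.)
    """
    n = len(client_updates)
    rng = random.Random(seed)   # stands in for pairwise shared randomness
    masked = list(client_updates)
    for i in range(n):
        for j in range(i + 1, n):
            m = rng.uniform(-1.0, 1.0)
            masked[i] += m
            masked[j] -= m
    return sum(masked) / n      # masks cancel: equals the plain average

updates = [0.9, 1.1, 1.0]       # hypothetical per-client weight deltas
avg = federated_average_with_masks(updates)
```

In a real federated setting the updates are full parameter vectors and the shared randomness comes from key agreement between clients rather than a common seed.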
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences.