Tokenized Model: A Blockchain-Empowered Decentralized Model Ownership
Verification Platform
- URL: http://arxiv.org/abs/2312.00048v1
- Date: Mon, 27 Nov 2023 09:02:57 GMT
- Title: Tokenized Model: A Blockchain-Empowered Decentralized Model Ownership
Verification Platform
- Authors: Yihao Li, Yanyi Lai, Tianchi Liao, Chuan Chen, Zibin Zheng
- Abstract summary: This paper considers combining model watermarking technology and blockchain to build a unified model copyright protection platform.
Through a new solution called Tokenized Model, it protects the model's copyright with a reliable ownership record and verification mechanism.
- Score: 27.663600307841982
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: With the development of practical deep learning models like generative AI,
their excellent performance has brought huge economic value. For instance,
ChatGPT attracted more than 100 million users within three months. Since model
training requires large amounts of data and computing power, a well-performing
deep learning model represents an enormous effort and cost. Facing various
model attacks, unauthorized use, and abuse on the network that threaten the
interests of model owners, it is equally important to protect a model's
copyright through technical means, in addition to legal and other
administrative measures. Using model watermarking technology, we point out the
possibility of building a unified platform for model ownership verification.
Given the application history of blockchain in copyright verification and the
drawbacks of a centralized third party, this paper combines model watermarking
technology and blockchain to build a unified model copyright protection
platform. Through a new solution we call Tokenized Model, the platform
protects a model's copyright with a reliable ownership record and verification
mechanism. It also promotes the financial value of a model by constructing the
model's transaction process and its contribution shares. In a typical case
study, we evaluate performance under common scenarios to verify the
effectiveness of the platform.
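To make the platform's flow concrete, the following is a minimal sketch of the register-then-verify mechanism the abstract describes, assuming a hypothetical in-memory `ModelLedger` class, a backdoor-style trigger set standing in for the extracted watermark, and illustrative contribution shares; the paper's actual platform would rely on real on-chain transactions and a full watermark embedding/extraction protocol.

```python
import hashlib
import json
import time

class ModelLedger:
    """Append-only, hash-chained stand-in for the blockchain ledger (hypothetical)."""

    def __init__(self):
        self.records = []

    def _chain_hash(self, payload: dict) -> str:
        # Chain each record to its predecessor so past entries are tamper-evident.
        prev = self.records[-1]["hash"] if self.records else "genesis"
        body = json.dumps(payload, sort_keys=True) + prev
        return hashlib.sha256(body.encode()).hexdigest()

    def register_model(self, owner: str, wm_digest: str, shares: dict) -> dict:
        """Record a tokenized model: its watermark digest plus contribution shares."""
        assert abs(sum(shares.values()) - 1.0) < 1e-9, "shares must sum to 1"
        payload = {"owner": owner, "watermark": wm_digest,
                   "shares": shares, "timestamp": time.time()}
        payload["hash"] = self._chain_hash(payload)
        self.records.append(payload)
        return payload

    def verify_ownership(self, claimant: str, wm_digest: str) -> bool:
        """Ownership holds if the earliest record of this watermark names the claimant."""
        for record in self.records:  # records are kept in registration order
            if record["watermark"] == wm_digest:
                return record["owner"] == claimant
        return False

def watermark_digest(trigger_inputs, trigger_outputs) -> str:
    """Digest of a backdoor-style trigger set, standing in for the extracted watermark."""
    pairs = json.dumps(list(zip(trigger_inputs, trigger_outputs)))
    return hashlib.sha256(pairs.encode()).hexdigest()

# Usage: register the watermarked model, then settle a later ownership dispute.
ledger = ModelLedger()
digest = watermark_digest(["trigger_0", "trigger_1"], ["label_a", "label_b"])
ledger.register_model("alice", digest, {"alice": 0.7, "bob": 0.3})
print(ledger.verify_ownership("alice", digest))    # True
print(ledger.verify_ownership("mallory", digest))  # False
```

The earliest-record-wins rule is what a timestamped, tamper-evident history buys the defender: a thief who later re-registers the same watermark cannot displace the original claim, and the recorded shares give later transactions a basis for splitting a model's financial value.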
Related papers
- Mitigating Downstream Model Risks via Model Provenance [28.390839690707256]
We propose a machine-readable model specification format to simplify the creation of model records.
Our solution explicitly traces relationships between upstream and downstream models, enhancing transparency and traceability.
This proof of concept aims to set a new standard for managing foundation models, bridging the gap between innovation and responsible model management.
arXiv Detail & Related papers (2024-10-03T05:52:15Z)
- A2-DIDM: Privacy-preserving Accumulator-enabled Auditing for Distributed Identity of DNN Model [43.10692581757967]
We propose a novel Accumulator-enabled Auditing for Distributed Identity of DNN Model (A2-DIDM).
A2-DIDM uses blockchain and zero-knowledge techniques to protect data and function privacy while ensuring the lightweight on-chain ownership verification.
arXiv Detail & Related papers (2024-05-07T08:24:50Z)
- Have You Merged My Model? On The Robustness of Large Language Model IP Protection Methods Against Model Merging [25.327483618051378]
We conduct the first study on the robustness of IP protection methods under model merging scenarios.
Experimental results indicate that current Large Language Model (LLM) watermarking techniques cannot survive model merging.
Our research aims to highlight that model merging should be an indispensable consideration in the robustness assessment of model IP protection techniques.
arXiv Detail & Related papers (2024-04-08T04:30:33Z)
- Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
arXiv Detail & Related papers (2024-04-06T04:43:06Z)
- Towards Scalable and Robust Model Versioning [30.249607205048125]
Malicious incursions aimed at gaining access to deep learning models are on the rise.
We show how to generate multiple versions of a model that possess different attack properties.
We show theoretically that this can be accomplished by incorporating parameterized hidden distributions into the model training data.
arXiv Detail & Related papers (2024-01-17T19:55:49Z)
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- Foundation Models and Fair Use [96.04664748698103]
In the U.S. and other countries, copyrighted content may be used to build foundation models without incurring liability due to the fair use doctrine.
In this work, we survey the potential risks of developing and deploying foundation models based on copyrighted content.
We discuss technical mitigations that can help foundation models stay in line with fair use.
arXiv Detail & Related papers (2023-03-28T03:58:40Z)
- Are You Stealing My Model? Sample Correlation for Fingerprinting Deep Neural Networks [86.55317144826179]
Previous methods typically leverage transferable adversarial examples as the model fingerprint.
We propose a novel yet simple model stealing detection method based on SAmple Correlation (SAC).
SAC successfully defends against various model stealing attacks, even including adversarial training or transfer learning.
arXiv Detail & Related papers (2022-10-21T02:07:50Z)
- DeepHider: A Multi-module and Invisibility Watermarking Scheme for Language Model [0.0]
This paper identifies a new threat: replacing the model's classification module and performing global fine-tuning of the model.
We use blockchain properties such as tamper resistance and traceability to prevent thieves from making false ownership claims.
Experiments show that the proposed scheme successfully verifies ownership with 100% watermark verification accuracy.
arXiv Detail & Related papers (2022-08-09T11:53:24Z)
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Defending against Model Stealing via Verifying Embedded External Features [90.29429679125508]
Adversaries can 'steal' deployed models even when they have no training samples and cannot access the model parameters or structures.
We explore the defense from another angle by verifying whether a suspicious model contains the knowledge of defender-specified external features (a toy sketch of this check appears after the list).
Our method is effective in detecting different types of model stealing simultaneously, even if the stolen model is obtained via a multi-stage stealing process.
arXiv Detail & Related papers (2021-12-07T03:51:54Z)
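The last two entries share one decision rule: ownership is flagged when a suspicious model behaves like the victim on defender-specified external features. Below is a toy NumPy sketch of that rule; the models, the feature transform, the probe set, and the 0.9 threshold are all hypothetical stand-ins, and the actual methods train a meta-classifier on gradient features rather than comparing raw outputs.

```python
import numpy as np

rng = np.random.default_rng(0)
dim = 16

def external_transform(x: np.ndarray) -> np.ndarray:
    """Stand-in for the defender-specified external feature (e.g., a style filter)."""
    return x + 0.5 * np.sin(3.0 * x)

def make_model(w: np.ndarray):
    """Toy 'model': a fixed linear classifier over feature vectors."""
    return lambda x: (x @ w > 0).astype(int)

w_victim = rng.normal(size=dim)
victim = make_model(w_victim)
stolen = make_model(w_victim + 0.05 * rng.normal(size=dim))  # lightly fine-tuned copy
independent = make_model(rng.normal(size=dim))               # trained from scratch

def feature_agreement(suspect, probes: np.ndarray) -> float:
    """Fraction of transformed probes on which the suspect matches the victim."""
    t = external_transform(probes)
    return float(np.mean(suspect(t) == victim(t)))

probes = rng.normal(size=(512, dim))
threshold = 0.9  # assumed decision threshold; the papers use hypothesis testing
print(f"stolen model agreement:      {feature_agreement(stolen, probes):.2f}")
print(f"independent model agreement: {feature_agreement(independent, probes):.2f}")
print("ownership flagged:", feature_agreement(stolen, probes) > threshold)
```

A stolen or fine-tuned copy inherits the victim's behavior on the external features and scores near 1.0, while an independently trained model agrees only at chance, which is why this style of check stays harmless for honest models.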
This list is automatically generated from the titles and abstracts of the papers on this site.