A2-DIDM: Privacy-preserving Accumulator-enabled Auditing for Distributed Identity of DNN Model
- URL: http://arxiv.org/abs/2405.04108v1
- Date: Tue, 7 May 2024 08:24:50 GMT
- Title: A2-DIDM: Privacy-preserving Accumulator-enabled Auditing for Distributed Identity of DNN Model
- Authors: Tianxiu Xie, Keke Gai, Jing Yu, Liehuang Zhu, Kim-Kwang Raymond Choo
- Abstract summary: We propose a novel Accumulator-enabled Auditing scheme for the Distributed Identity of DNN Models (A2-DIDM).
A2-DIDM uses blockchain and zero-knowledge techniques to protect data and function privacy while ensuring lightweight on-chain ownership verification.
- Score: 43.10692581757967
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The recent boom in Generative Artificial Intelligence (GenAI) has fostered an emerging market for model commercialization aimed at enhancing model performance, such as licensing or trading Deep Neural Network (DNN) models. However, DNN model trading raises concerns about unauthorized replication and misuse of models, which undermine the benefits of model ownership. Model identity auditing is a challenging problem in protecting the intellectual property of DNN models, and verifying the integrity and ownership of models to guarantee trust in transactions is one of the critical obstacles. In this paper, we focus on this issue and propose a novel Accumulator-enabled Auditing for Distributed Identity of DNN Model (A2-DIDM) that utilizes blockchain and zero-knowledge techniques to protect data and function privacy while ensuring lightweight on-chain ownership verification. The proposed model presents a scheme of identity records that configures model weight checkpoints with corresponding zero-knowledge proofs, incorporating predicates to capture incremental state changes across checkpoints. Our scheme ensures both the computational integrity of the DNN training process and programmability, so that the uniqueness of the weight checkpoint sequence of a DNN model is preserved and the model identity audit is correct. In addition, A2-DIDM addresses privacy protection in distributed identity via a proposed accumulator method. We systematically analyze the security and robustness of the proposed model and further evaluate the effectiveness and usability of auditing DNN model identities.
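To make the identity-record idea concrete, below is a minimal sketch that chains weight checkpoints into a tamper-evident digest, assuming a simple hash chain as a stand-in for the paper's accumulator and zero-knowledge proofs. The names (IdentityLog, record_checkpoint) are hypothetical, for illustration only.

```python
# Hedged sketch: a hash chain standing in for A2-DIDM's accumulator + ZK proofs.
import hashlib

class IdentityLog:
    """Accumulates a tamper-evident digest over a sequence of weight checkpoints."""
    def __init__(self):
        self.state = b"\x00" * 32  # genesis accumulator value

    def record_checkpoint(self, weights: bytes) -> bytes:
        # Fold the new checkpoint into the running state; the real scheme would
        # also attach a zero-knowledge proof that this step satisfies a valid
        # training-update predicate rather than revealing the weights.
        self.state = hashlib.sha256(self.state + hashlib.sha256(weights).digest()).digest()
        return self.state

# Usage: replaying the same checkpoint sequence reproduces the same identity
# digest, so the uniqueness of the training trajectory can be audited.
log = IdentityLog()
for step in range(3):
    digest = log.record_checkpoint(f"weights-after-step-{step}".encode())
print(digest.hex())  # final identity digest for the training trajectory
```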
Related papers
- Data-Driven Lipschitz Continuity: A Cost-Effective Approach to Improve Adversarial Robustness [47.9744734181236]
We explore the concept of Lipschitz continuity to certify the robustness of deep neural networks (DNNs) against adversarial attacks.
We propose a novel algorithm that remaps the input domain into a constrained range, reducing the Lipschitz constant and potentially enhancing robustness.
Our method achieves the best robust accuracy for CIFAR10, CIFAR100, and ImageNet datasets on the RobustBench leaderboard.
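As a rough illustration of the remapping idea (a hedged sketch under toy assumptions, not the paper's algorithm): composing a network f with a contraction g of Lipschitz constant c < 1 bounds Lip(f∘g) <= c * Lip(f).

```python
# Toy input remap: an affine squeeze of the input domain with Lipschitz constant c.
import numpy as np

def remap(x, lo=-1.0, hi=1.0, c=0.5):
    # Clip to the original domain, then shrink it by factor c around its center.
    x = np.clip(x, lo, hi)
    center = (lo + hi) / 2.0
    return center + c * (x - center)  # Lipschitz constant of this map is c

f = lambda x: 3.0 * x  # toy "network" with Lipschitz constant 3
x1, x2 = 0.9, 0.7
ratio = abs(f(remap(x1)) - f(remap(x2))) / abs(x1 - x2)
print(ratio)  # <= 1.5 = 0.5 * 3, versus 3 without the remap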
arXiv Detail & Related papers (2024-06-28T03:10:36Z) - Trustless Audits without Revealing Data or Models [49.23322187919369]
We show that it is possible to allow model providers to keep their model weights (but not architecture) and data secret while allowing other parties to trustlessly audit model and data properties.
We do this by designing a protocol called ZkAudit in which model providers publish cryptographic commitments of datasets and model weights.
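A minimal sketch of this commitment step, assuming simple salted SHA-256 commitments; ZkAudit's actual commitments support zero-knowledge openings, which this toy code does not provide.

```python
# Hedged sketch: salted hash commitments to model weights (and, analogously, datasets).
import hashlib, os

def commit(data: bytes):
    salt = os.urandom(16)
    return hashlib.sha256(salt + data).digest(), salt  # (commitment, opening)

def verify(commitment: bytes, salt: bytes, data: bytes) -> bool:
    return hashlib.sha256(salt + data).digest() == commitment

c_w, open_w = commit(b"model-weights-bytes")        # provider publishes c_w
assert verify(c_w, open_w, b"model-weights-bytes")  # auditor checks an opening
```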
arXiv Detail & Related papers (2024-04-06T04:43:06Z) - Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
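The following toy sketch illustrates the general query-based verification pattern (record reference outputs on fixed fingerprint inputs, then re-query and compare); the paper's fingerprint-node generation and GNN specifics are abstracted away, and `sign`/`verify` are hypothetical names.

```python
# Hedged sketch of query-based integrity verification with fixed fingerprint inputs.
import numpy as np

rng = np.random.default_rng(0)
fingerprints = rng.normal(size=(8, 4))  # stand-in for fingerprint nodes

def sign(model):
    return np.stack([model(x) for x in fingerprints])

def verify(model, reference, tol=1e-6):
    return bool(np.all(np.abs(sign(model) - reference) < tol))

W = rng.normal(size=(4, 2))
model = lambda x: x @ W                       # toy stand-in for a GNN readout
ref = sign(model)                             # owner records reference outputs
print(verify(model, ref))                     # True: integrity intact
print(verify(lambda x: x @ (W + 0.01), ref))  # False: model was tampered with
```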
arXiv Detail & Related papers (2023-12-13T03:17:05Z) - ZKROWNN: Zero Knowledge Right of Ownership for Neural Networks [14.435398248169774]
We present ZKROWNN, the first automated end-to-end framework utilizing Zero-Knowledge Proofs (ZKPs) to establish model ownership.
ZKROWNN permits a third party client to verify model ownership in less than a second, requiring as little as a few KBs of communication.
arXiv Detail & Related papers (2023-09-13T08:06:13Z) - Confidence Attention and Generalization Enhanced Distillation for
Continuous Video Domain Adaptation [62.458968086881555]
Continuous Video Domain Adaptation (CVDA) is a scenario where a source model is required to adapt to a series of individually available changing target domains.
We propose a Confidence-Attentive network with generalization-enhanced self-knowledge distillation (CART) to address the challenge in CVDA.
arXiv Detail & Related papers (2023-03-18T16:40:10Z) - InFIP: An Explainable DNN Intellectual Property Protection Method based
on Intrinsic Features [12.037142903022891]
We propose an interpretable intellectual property protection method for Deep Neural Networks (DNNs) based on explainable artificial intelligence.
The proposed method does not modify the DNN model, and the decision of the ownership verification is interpretable.
Experimental results demonstrate that the fingerprints can be successfully used to verify the ownership of the model.
arXiv Detail & Related papers (2022-10-14T03:12:36Z) - Robust and Lossless Fingerprinting of Deep Neural Networks via Pooled
Membership Inference [17.881686153284267]
Deep neural networks (DNNs) have achieved great success in many application areas and brought profound changes to our society.
How to protect the intellectual property (IP) of DNNs against infringement is one of the most important yet very challenging topics.
This paper proposes a novel technique called pooled membership inference (PMI) to protect the IP of DNN models.
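A rough sketch of the membership-inference signal such fingerprinting builds on: an overfit model fits its own training samples far more tightly than held-out ones, so pooled loss on the owner's data can separate a stolen copy from an independent model. The pooling statistic below is an illustrative assumption, not the paper's PMI construction.

```python
# Hedged sketch: pooled loss gap between member and non-member samples.
import numpy as np

rng = np.random.default_rng(2)

def pooled_loss(w, X, y):
    return float(np.mean((X @ w - y) ** 2))

# Over-parameterized toy model (20 weights, 10 samples) interpolates its training set.
X_train, y_train = rng.normal(size=(10, 20)), rng.normal(size=10)
X_out, y_out = rng.normal(size=(10, 20)), rng.normal(size=10)

w_copy = np.linalg.lstsq(X_train, y_train, rcond=None)[0]  # "stolen" owner model
w_other = rng.normal(size=20) * 0.1                        # independent model

for name, w in [("suspected copy", w_copy), ("independent", w_other)]:
    # A copy shows near-zero loss on members but ordinary loss on non-members.
    print(name, pooled_loss(w, X_train, y_train), pooled_loss(w, X_out, y_out))
```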
arXiv Detail & Related papers (2022-09-09T04:06:29Z) - MOVE: Effective and Harmless Ownership Verification via Embedded
External Features [109.19238806106426]
We propose MOVE, an effective and harmless model ownership verification method, to defend against different types of model stealing simultaneously.
We conduct ownership verification by checking whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
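As a schematic black-box illustration (not the paper's meta-classifier method): test whether a suspicious model reacts to a defender-specified external feature, here a fixed additive pattern on one input dimension. All models and numbers below are hypothetical toys.

```python
# Hedged sketch: does a suspicious model respond to the defender's external feature?
import numpy as np

rng = np.random.default_rng(1)
pattern = np.zeros(4)
pattern[0] = 5.0  # defender-specified external feature

def defended(x):     # toy model trained to respond to the embedded feature
    return (x[..., 0] > 2.5).astype(int)

def independent(x):  # innocent model that never saw the feature
    return (x[..., 1:].sum(axis=-1) > 0).astype(int)

X = rng.normal(size=(200, 4))
for name, model in [("stolen copy", defended), ("independent", independent)]:
    flip_rate = float(np.mean(model(X + pattern) != model(X)))
    print(name, flip_rate)  # a high flip rate is evidence the model knows the feature
```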
arXiv Detail & Related papers (2022-08-04T02:22:29Z) - PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN
Models [13.043683635373213]
Deep neural networks (DNNs) have achieved tremendous success in artificial intelligence (AI) fields.
However, DNN models can easily be illegally copied, redistributed, or abused by criminals.
arXiv Detail & Related papers (2022-06-06T12:12:47Z)