PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN
Models
- URL: http://arxiv.org/abs/2206.02541v2
- Date: Tue, 28 Nov 2023 09:22:32 GMT
- Title: PCPT and ACPT: Copyright Protection and Traceability Scheme for DNN
Models
- Authors: Xuefeng Fan, Dahao Fu, Hangyu Gui, Xinpeng Zhang and Xiaoyi Zhou
- Abstract summary: Deep neural networks (DNNs) have achieved tremendous success in artificial intelligence (AI) fields.
DNN models can be easily illegally copied, redistributed, or abused by criminals.
- Score: 13.043683635373213
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Deep neural networks (DNNs) have achieved tremendous success in artificial
intelligence (AI) fields. However, DNN models can be easily illegally copied,
redistributed, or abused by criminals, seriously damaging the interests of
model inventors. The copyright protection of DNN models by neural network
watermarking has been studied, but the establishment of a traceability
mechanism for determining the authorized users of a leaked model is a new
problem driven by the demand for AI services. Existing traceability
mechanisms are designed for models without watermarks, so they generate a
small number of false positives, and existing black-box active protection
schemes enforce only loose authorization control and are vulnerable to forgery
attacks. Therefore, based on the idea of black-box neural network watermarking
with a video-framing and image perceptual hash algorithm, a passive copyright
protection and traceability framework, PCPT, is proposed. PCPT uses an
additional class of the DNN model and improves the existing traceability
mechanism, which yields a small number of false positives. In addition, based
on an authorization control strategy and an image perceptual hash algorithm,
an active copyright protection and traceability framework for DNN models,
ACPT, is proposed. ACPT uses an authorization control center built from a
detector and a verifier, realizing stricter authorization control: it
establishes a strong connection between users and model owners, improves the
security of the framework, and supports traceability verification.
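As a rough illustration of the image perceptual hash component that both PCPT and ACPT rely on, the sketch below shows how a perceptual "average hash" of a key/trigger image might be matched against hashes registered for authorized users. It is not the authors' implementation; the function names, the 8x8 hash size, and the distance threshold are assumptions.

```python
# Illustrative sketch only -- not the paper's implementation.
# Assumes grayscale key images at least 8x8 pixels in size.
import numpy as np

def average_hash(img: np.ndarray, hash_size: int = 8) -> np.ndarray:
    """Perceptual hash: block-average a grayscale image and threshold at the mean."""
    h, w = img.shape
    ys = np.linspace(0, h, hash_size + 1, dtype=int)
    xs = np.linspace(0, w, hash_size + 1, dtype=int)
    blocks = np.array([[img[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].mean()
                        for j in range(hash_size)] for i in range(hash_size)])
    return (blocks > blocks.mean()).astype(np.uint8).ravel()

def hamming(a: np.ndarray, b: np.ndarray) -> int:
    """Number of differing hash bits."""
    return int(np.count_nonzero(a != b))

def trace_user(query_key_img: np.ndarray, registered_hashes: dict, max_dist: int = 6):
    """Map a submitted/leaked key image to the authorized user whose registered
    key hash is perceptually closest, if it is within max_dist bits."""
    q = average_hash(query_key_img)
    user, dist = min(((u, hamming(q, h)) for u, h in registered_hashes.items()),
                     key=lambda t: t[1])
    return user if dist <= max_dist else None

# Example: registered_hashes = {"user_a": average_hash(key_a), "user_b": average_hash(key_b)}
```

Under the abstract's description, this kind of nearest-hash match is what would let a verifier tie a leaked model's key image back to a specific authorized user.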
Related papers
- IDEA: An Inverse Domain Expert Adaptation Based Active DNN IP Protection Method [8.717704777664604]
Illegitimate reproduction, distribution and derivation of Deep Neural Network (DNN) models can inflict economic loss, reputation damage and even privacy infringement.
We propose IDEA, an Inverse Domain Expert Adaptation based proactive DNN IP protection method featuring active authorization and source traceability.
We extensively evaluate IDEA on five datasets and four DNN models to demonstrate its effectiveness in authorization control, culprit tracing success rate, and against various attacks.
arXiv Detail & Related papers (2024-09-29T09:34:33Z)
- A2-DIDM: Privacy-preserving Accumulator-enabled Auditing for Distributed Identity of DNN Model [43.10692581757967]
We propose a novel Accumulator-enabled Auditing for Distributed Identity of DNN Model (A2-DIDM).
A2-DIDM uses blockchain and zero-knowledge techniques to protect data and function privacy while ensuring lightweight on-chain ownership verification.
arXiv Detail & Related papers (2024-05-07T08:24:50Z)
- Securing Graph Neural Networks in MLaaS: A Comprehensive Realization of Query-based Integrity Verification [68.86863899919358]
We introduce a groundbreaking approach to protect GNN models in Machine Learning as a Service (MLaaS) from model-centric attacks.
Our approach includes a comprehensive verification schema for GNN's integrity, taking into account both transductive and inductive GNNs.
We propose a query-based verification technique, fortified with innovative node fingerprint generation algorithms.
arXiv Detail & Related papers (2023-12-13T03:17:05Z)
- Reversible Quantization Index Modulation for Static Deep Neural Network Watermarking [57.96787187733302]
Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding; a minimal QIM sketch appears after this related-papers list.
arXiv Detail & Related papers (2023-05-29T04:39:17Z)
- Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation [24.07604618918671]
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
White-box watermarking is believed to be accurate, credible and secure against most known watermark removal attacks.
We present the first systematic study on how the mainstream white-box watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons.
arXiv Detail & Related papers (2023-03-17T02:21:41Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by a surrogate model attack.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
The influence of embedding reversible watermarking on the classification performance is less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- HufuNet: Embedding the Left Piece as Watermark and Keeping the Right Piece for Ownership Verification in Deep Neural Networks [16.388046449021466]
We propose a novel solution for watermarking deep neural networks (DNNs).
HufuNet is highly robust against model fine-tuning/pruning, kernels cutoff/supplement, functionality-equivalent attack, and fraudulent ownership claims.
arXiv Detail & Related papers (2021-03-25T06:55:22Z)
- Automatically Lock Your Neural Networks When You're Away [5.153873824423363]
We propose Model-Lock (M-LOCK) to realize an end-to-end neural network with local dynamic access control.
Three kinds of model training strategies are essential to achieve the tremendous performance divergence between certified and suspect inputs in one neural network.
arXiv Detail & Related papers (2021-03-15T15:47:54Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if the attacker knows its full information, the model can easily be stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- Graph Backdoor [53.70971502299977]
We present GTA, the first backdoor attack on graph neural networks (GNNs).
GTA departs in significant ways: it defines triggers as specific subgraphs, including both topological structures and descriptive features.
It can be instantiated for both transductive (e.g., node classification) and inductive (e.g., graph classification) tasks.
arXiv Detail & Related papers (2020-06-21T19:45:30Z)
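For the "Reversible Quantization Index Modulation" entry above, the following minimal sketch shows plain scalar QIM embedding and extraction. It is a generic illustration with assumed parameter names and step size, and it omits the reversible (RDH) part of that paper's scheme.

```python
# Generic scalar QIM sketch (not the reversible scheme from the paper above).
# Each coefficient is snapped to one of two interleaved lattices of step `delta`,
# chosen by the watermark bit; extraction picks whichever lattice is closer.
import numpy as np

def qim_embed(x: np.ndarray, bits: np.ndarray, delta: float = 0.1) -> np.ndarray:
    offsets = bits * (delta / 2.0)          # bit 0 -> offset 0, bit 1 -> delta/2
    return delta * np.round((x - offsets) / delta) + offsets

def qim_extract(y: np.ndarray, delta: float = 0.1) -> np.ndarray:
    d0 = np.abs(y - delta * np.round(y / delta))                              # distance to bit-0 lattice
    d1 = np.abs(y - (delta * np.round((y - delta / 2) / delta) + delta / 2))  # distance to bit-1 lattice
    return (d1 < d0).astype(np.uint8)

# Example: embed 64 watermark bits into a copy of some flattened DNN weights.
# rng = np.random.default_rng(0)
# bits = rng.integers(0, 2, size=64)
# marked = qim_embed(weights.ravel()[:64].copy(), bits)   # `weights` is assumed to exist
# assert np.array_equal(qim_extract(marked), bits)
```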