Deep Model Intellectual Property Protection via Deep Watermarking
- URL: http://arxiv.org/abs/2103.04980v1
- Date: Mon, 8 Mar 2021 18:58:21 GMT
- Title: Deep Model Intellectual Property Protection via Deep Watermarking
- Authors: Jie Zhang, Dongdong Chen, Jing Liao, Weiming Zhang, Huamin Feng, Gang Hua, and Nenghai Yu
- Abstract summary: Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, an attacker with full knowledge of it can easily steal the model by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
- Score: 122.87871873450014
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Despite their tremendous success, deep neural networks are exposed to
serious IP infringement risks. Given a target deep model, an attacker with full
knowledge of it can easily steal the model by fine-tuning. Even if only its
output is accessible, a surrogate model can be trained through student-teacher
learning by generating many input-output training pairs. Deep model IP
protection is therefore important and necessary, yet it remains seriously
under-researched. In this work, we propose a new model watermarking framework
for protecting deep networks trained for low-level computer vision or image
processing tasks. Specifically, a special task-agnostic barrier is added after
the target model, which embeds a unified and invisible watermark into its
outputs. When the attacker trains a surrogate model on input-output pairs
collected from the barriered target model, the hidden watermark is learned by
the surrogate and can be extracted from its outputs afterwards. To support
watermarks ranging from binary bit strings to high-resolution images, a deep
invisible watermarking mechanism is designed. By jointly training the target
model and the watermark embedding, the extra barrier can even be absorbed into
the target model. Through extensive experiments, we demonstrate the robustness
of the proposed framework, which resists attacks mounted with different network
structures and objective functions.
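The following is a minimal sketch of the embed-and-extract idea described in the abstract, written in PyTorch. The module names (WatermarkEmbedder, WatermarkExtractor), the tiny convolutional stacks, and the loss weights are illustrative assumptions, not the paper's actual architecture or training recipe.

```python
# A minimal sketch of barrier-style output watermarking (PyTorch assumed).
# All module names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn as nn

class WatermarkEmbedder(nn.Module):
    """Barrier: adds an invisible, unified watermark to the target model's output."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(2 * channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Tanh(),
        )

    def forward(self, model_output, watermark):
        # Predict a small residual so the watermarked output stays visually close.
        residual = self.net(torch.cat([model_output, watermark], dim=1))
        return (model_output + 0.01 * residual).clamp(0.0, 1.0)

class WatermarkExtractor(nn.Module):
    """Extractor: recovers the hidden watermark from (possibly surrogate) outputs."""
    def __init__(self, channels: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, channels, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, image):
        return self.net(image)

# Toy joint-training step: the embedder hides the watermark, the extractor must
# recover it from watermarked outputs and a blank image from clean ones.
target_model = nn.Identity()                     # stand-in for the protected model
embedder, extractor = WatermarkEmbedder(), WatermarkExtractor()
opt = torch.optim.Adam(
    list(embedder.parameters()) + list(extractor.parameters()), lr=1e-4)
l1 = nn.L1Loss()

x = torch.rand(4, 3, 64, 64)                     # input images
wm = torch.rand(1, 3, 64, 64).expand(4, -1, -1, -1)  # the unified watermark image
blank = torch.zeros_like(x)

y = target_model(x)                              # clean output
y_wm = embedder(y, wm)                           # watermarked output released to users

loss = (l1(y_wm, y)                              # invisibility term
        + l1(extractor(y_wm), wm)                # watermark must be recoverable
        + l1(extractor(y), blank))               # clean outputs should yield no watermark
opt.zero_grad(); loss.backward(); opt.step()
```

At verification time, the same extractor would be run on the outputs of a suspected surrogate model; if the unified watermark emerges, the surrogate was presumably trained on watermarked outputs.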
Related papers
- Probabilistically Robust Watermarking of Neural Networks [4.332441337407564]
We introduce a novel trigger set-based watermarking approach that demonstrates resilience against functionality stealing attacks.
Our approach does not require additional model training and can be applied to any model architecture.
arXiv Detail & Related papers (2024-01-16T10:32:13Z)
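As a companion to the trigger-set entry above, here is a minimal sketch of the generic verification step such schemes rely on: query the suspect model on secret trigger inputs and check how often it reproduces the secret labels. The toy model, threshold, and trigger set below are hypothetical, not the cited paper's construction.

```python
# Generic trigger-set ownership check (PyTorch assumed); all values are toy stand-ins.
import torch

@torch.no_grad()
def verify_ownership(suspect_model, trigger_inputs, secret_labels, threshold=0.9):
    """Claim ownership if the suspect model agrees with the secret labels often enough."""
    preds = suspect_model(trigger_inputs).argmax(dim=1)
    match_rate = (preds == secret_labels).float().mean().item()
    return match_rate, match_rate >= threshold

model = torch.nn.Linear(16, 10)          # stand-in for a suspect classifier
triggers = torch.randn(32, 16)           # secret trigger inputs
labels = torch.randint(0, 10, (32,))     # secret labels assigned to the triggers
rate, owned = verify_ownership(model, triggers, labels)
print(f"trigger agreement: {rate:.2f}, ownership claimed: {owned}")
```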
- Performance-lossless Black-box Model Watermarking [69.22653003059031]
We propose a branch backdoor-based model watermarking protocol to protect model intellectual property.
In addition, we analyze the potential threats to the protocol and provide a secure and feasible watermarking instance for language models.
arXiv Detail & Related papers (2023-12-11T16:14:04Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
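Below is a minimal sketch of the parameter-perturbation idea mentioned in the entry above: while fine-tuning the backdoor watermark in, random noise is added to the weights so the embedded watermark does not rely on a narrow region of weight space. The model, noise scale, and loop are illustrative assumptions, not the cited paper's exact injection procedure.

```python
# Backdoor watermark injection with random weight perturbation (PyTorch assumed).
# The classifier, noise scale, and training loop are toy stand-ins.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))   # stand-in classifier
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
ce = nn.CrossEntropyLoss()

trigger_images = torch.rand(64, 1, 28, 28)     # watermark (backdoor) samples
trigger_labels = torch.full((64,), 7)          # secret target label for the triggers

for step in range(100):
    # Randomly perturb parameters before each update so the watermark is not
    # tied to one narrow weight configuration (intended to resist fine-tuning removal).
    with torch.no_grad():
        for p in model.parameters():
            p.add_(0.01 * torch.randn_like(p))

    loss = ce(model(trigger_images), trigger_labels)
    opt.zero_grad(); loss.backward(); opt.step()
```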
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z)
- An Automated and Robust Image Watermarking Scheme Based on Deep Neural Networks [8.765045867163648]
A robust and blind image watermarking scheme based on deep neural networks is proposed.
The robustness of the proposed scheme is achieved without requiring any prior knowledge or adversarial examples of possible attacks.
arXiv Detail & Related papers (2020-07-05T22:23:31Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)