Model Watermarking for Image Processing Networks
- URL: http://arxiv.org/abs/2002.11088v1
- Date: Tue, 25 Feb 2020 18:36:18 GMT
- Title: Model Watermarking for Image Processing Networks
- Authors: Jie Zhang, Dongdong Chen, Jing Liao, Han Fang, Weiming Zhang, Wenbo Zhou, Hao Cui, Nenghai Yu
- Abstract summary: How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
- Score: 120.918532981871
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning has achieved tremendous success in numerous industrial
applications. As training a good model often needs massive high-quality data
and computation resources, the learned models often have significant business
values. However, these valuable deep models are exposed to a huge risk of
infringements. For example, if the attacker has full information about a
target model, including its network structure and weights, the model can
easily be fine-tuned on new datasets. Even if the attacker can only access the
output of the target model, they can still train a similar surrogate model by
generating a large set of input-output training pairs. How to protect the
intellectual property of deep models is an important but seriously
under-researched problem, and the few recent attempts address classification
networks only. In this paper, we propose the first
model watermarking framework for protecting image processing models. To achieve
this goal, we leverage the spatial invisible watermarking mechanism.
Specifically, given a black-box target model, a unified and invisible watermark
is hidden into its outputs, which can be regarded as a special task-agnostic
barrier. In this way, when the attacker trains a surrogate model using the
input-output pairs of the target model, the hidden watermark will be learned
and can be extracted afterward. To support watermarks ranging from binary bits
to high-resolution images, both traditional and deep spatial invisible
watermarking mechanisms are considered. Experiments demonstrate the robustness
of the proposed watermarking mechanism, which can resist surrogate models
learned with different network structures and objective functions. Besides deep
models, the proposed method can also easily be extended to protect data and
traditional image processing algorithms.
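To make the mechanism concrete, the following is a minimal sketch of traditional additive spatial invisible watermarking in the spirit described above: one secret zero-mean pattern per watermark bit is added to each output image, and the bits are later recovered by correlation. The function names, the strength parameter, and the seed-as-secret-key convention are illustrative assumptions, not the authors' exact algorithm.

```python
import numpy as np

def embed_watermark(image, bits, strength=2.0, seed=0):
    """Hide watermark bits by adding one secret zero-mean pseudo-random
    pattern per bit; the sign of the pattern encodes the bit value."""
    rng = np.random.default_rng(seed)           # seed acts as the secret key
    out = image.astype(np.float64).copy()
    for b in bits:
        pattern = rng.standard_normal(image.shape)
        out += strength * (1.0 if b else -1.0) * pattern
    return np.clip(out, 0, 255)

def extract_watermark(image, num_bits, seed=0):
    """Recover each bit by correlating the image with the same secret
    patterns, drawn in the same order; a positive correlation decodes
    to 1. Works when the embedding strength dominates the incidental
    correlation between image content and the random patterns."""
    rng = np.random.default_rng(seed)           # same secret key
    return np.array([
        int(np.sum(image * rng.standard_normal(image.shape)) > 0)
        for _ in range(num_bits)
    ])
```

In the threat model above, every output of the protected model would pass through `embed_watermark`; a surrogate trained on those watermarked outputs tends to reproduce the hidden pattern, so `extract_watermark` applied to the surrogate's outputs can reveal the bits. The deep variant replaces these hand-crafted patterns with learned embedding and extraction networks.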
Related papers
- Stealing Image-to-Image Translation Models With a Single Query [24.819964498441635]
We study the possibility of stealing image-to-image models.
We find that many such models can be stolen with as little as a single, small-sized query image.
Remarkably, we find that the vulnerability to stealing attacks is shared by CNNs and by models with attention mechanisms.
arXiv Detail & Related papers (2024-06-02T18:30:41Z)
- Probabilistically Robust Watermarking of Neural Networks [4.332441337407564]
We introduce a novel trigger set-based watermarking approach that demonstrates resilience against functionality stealing attacks.
Our approach does not require additional model training and can be applied to any model architecture (see the trigger-set verification sketch after this list).
arXiv Detail & Related papers (2024-01-16T10:32:13Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks (a sketch of such a perturbed injection step appears after this list).
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark affects classification performance by less than 0.5%.
At the same time, the integrity of the model can be verified through the reversible watermark.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if the attacker knows its full information, it can be easily stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z)
- Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [26.050649487499626]
Trading deep models is in high demand and lucrative nowadays, but naive trading schemes typically involve potential risks related to copyright and trustworthiness.
We propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.
arXiv Detail & Related papers (2020-08-02T06:25:26Z)
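For the trigger set-based approach summarized above, ownership verification typically reduces to checking a suspect model's behavior on a secret set of trigger inputs. The sketch below is a hypothetical illustration of that verification step, not the cited paper's exact procedure; `model`, the trigger set, and the threshold are all assumptions.

```python
import numpy as np

def verify_ownership(model, trigger_inputs, trigger_labels, threshold=0.9):
    """Claim ownership if the suspect model reproduces the secret
    trigger-to-label assignment far above chance level."""
    preds = np.array([model(x) for x in trigger_inputs])
    accuracy = float(np.mean(preds == np.asarray(trigger_labels)))
    return accuracy >= threshold, accuracy
```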
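The "Safe and Robust Watermark Injection" entry above describes perturbing model parameters during watermark injection. The following is a hypothetical sketch of one such perturbed injection step under that description, not the paper's exact procedure; `watermark_loss`, `noise_std`, and the optimizer setup are illustrative assumptions.

```python
import torch

def injection_step(model, watermark_loss, batch, optimizer, noise_std=1e-3):
    """One watermark-injection step with random parameter perturbation,
    so the embedded watermark does not depend on a single brittle weight
    configuration (intended to resist watermark removal attacks)."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(noise_std * torch.randn_like(p))  # random perturbation
    optimizer.zero_grad()
    loss = watermark_loss(model, batch)  # watermark objective on the batch
    loss.backward()
    optimizer.step()
    return loss.item()
```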