Knowledge-Free Black-Box Watermark and Ownership Proof for Image Classification Neural Networks
- URL: http://arxiv.org/abs/2204.04522v1
- Date: Sat, 9 Apr 2022 18:09:02 GMT
- Title: Knowledge-Free Black-Box Watermark and Ownership Proof for Image Classification Neural Networks
- Authors: Fangqi Li and Shilin Wang
- Abstract summary: We propose a knowledge-free black-box watermarking scheme for image classification neural networks.
A delicate encoding and verification protocol is designed to ensure the scheme's security against knowledgeable adversaries.
Experimental results demonstrate the functionality-preserving capability and security of the proposed watermarking scheme.
- Score: 9.117248639119529
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Watermarking has become a plausible candidate for ownership verification and
intellectual property protection of deep neural networks. For image
classification neural networks, current watermarking schemes uniformly resort
to backdoor triggers. However, injecting a backdoor into a neural network
requires knowledge of the training dataset, which is usually unavailable in
real-world commercialization. Meanwhile, established watermarking schemes
overlook the potential damage from evidence exposed during ownership
verification and from the watermarking algorithms themselves. These concerns
keep current watermarking schemes out of industrial applications. To confront
these challenges, we propose a knowledge-free black-box watermarking scheme
for image classification neural networks. An image generator obtained from a
data-free distillation process is leveraged to stabilize the network's
performance during backdoor injection. A delicate encoding and verification
protocol is designed to ensure the scheme's security against knowledgeable
adversaries. We also give a pioneering analysis of the capacity of the
watermarking scheme. Experimental results demonstrate the
functionality-preserving capability and security of the proposed
watermarking scheme.
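As a concrete illustration of the injection step, here is a minimal PyTorch sketch, assuming a `generator` produced by a prior data-free distillation stage, a fixed `trigger_patch`, and simple KL/cross-entropy losses; all names, losses, and hyperparameters are illustrative, not the paper's exact protocol.

```python
# Minimal sketch of knowledge-free backdoor watermark injection.
# Assumptions (illustrative, not the paper's exact protocol): `generator`
# comes from a prior data-free distillation step, `model` is the classifier
# to be watermarked, and the trigger is a fixed patch pasted onto images.
import copy
import torch
import torch.nn.functional as F

def inject_watermark(model, generator, trigger_patch, target_class,
                     steps=1000, batch_size=64, z_dim=128, lr=1e-4,
                     device="cpu"):
    model.to(device).train()
    generator.to(device).eval()
    reference = copy.deepcopy(model).eval()   # frozen pre-watermark behaviour
    for p in reference.parameters():
        p.requires_grad_(False)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    trigger_patch = trigger_patch.to(device)

    for _ in range(steps):
        z = torch.randn(batch_size, z_dim, device=device)
        with torch.no_grad():
            x = generator(z)                  # surrogate images stand in for
            soft = F.softmax(reference(x), 1) # the unavailable training set
        x_trig = x.clone()
        h, w = trigger_patch.shape[-2:]
        x_trig[..., :h, :w] = trigger_patch   # stamp the secret trigger

        opt.zero_grad()
        # Stability term: keep clean predictions close to the original model.
        keep = F.kl_div(F.log_softmax(model(x), 1), soft,
                        reduction="batchmean")
        # Watermark term: triggered images must map to the secret class.
        tgt = torch.full((batch_size,), target_class, dtype=torch.long,
                         device=device)
        mark = F.cross_entropy(model(x_trig), tgt)
        (keep + mark).backward()
        opt.step()
    return model
```

The generator replaces the unavailable training set: sampling it gives in-distribution images on which the classifier's behaviour can be pinned down while the trigger response is trained in.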
Related papers
- DeepEclipse: How to Break White-Box DNN-Watermarking Schemes [60.472676088146436]
We present obfuscation techniques that differ significantly from existing white-box watermark removal schemes.
DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme.
Our evaluation reveals that DeepEclipse excels in breaking multiple white-box watermarking schemes.
arXiv Detail & Related papers (2024-03-06T10:24:47Z)
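To make the attack surface concrete, below is a toy example of the kind of functionality-preserving transformation that white-box obfuscation can exploit: permuting hidden neurons between two consecutive linear layers. This is a minimal construction of my own, not DeepEclipse's code.

```python
import torch

@torch.no_grad()
def permute_hidden_units(lin1, lin2, seed=0):
    """Permute the hidden neurons between two consecutive nn.Linear layers.

    With an elementwise activation (e.g. ReLU) between them, the network's
    input/output behaviour is unchanged, but the stored weight matrices no
    longer match what a white-box watermark verifier expects to read out.
    """
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(lin1.out_features, generator=g)
    lin1.weight.copy_(lin1.weight[perm])     # reorder rows of W1
    if lin1.bias is not None:
        lin1.bias.copy_(lin1.bias[perm])     # ...and the matching biases
    lin2.weight.copy_(lin2.weight[:, perm])  # reorder columns of W2
```

For `nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))`, calling `permute_hidden_units(model[0], model[2])` leaves predictions unchanged up to floating-point summation order.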
Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
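A plausible reading of the perturbation mechanism, sketched in PyTorch; the paper's actual noise schedule and loss terms may differ.

```python
import torch

def perturbed_watermark_step(model, optimizer, loss_fn, batch, sigma=1e-3):
    """One watermark-injection step taken at a randomly perturbed weight point.

    Noise is added to the parameters before the forward/backward pass and
    removed before the update, so the watermark is trained to survive in a
    whole neighbourhood of the weights rather than at a single point,
    hardening it against fine-tuning and pruning style removal attacks.
    """
    noises = []
    with torch.no_grad():
        for p in model.parameters():
            n = torch.randn_like(p) * sigma
            p.add_(n)
            noises.append(n)

    optimizer.zero_grad()
    x, y = batch                      # e.g. trigger images and target labels
    loss = loss_fn(model(x), y)
    loss.backward()

    with torch.no_grad():             # restore the clean weights, then step
        for p, n in zip(model.parameters(), noises):
            p.sub_(n)
    optimizer.step()
    return loss.item()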
OVLA: Neural Network Ownership Verification using Latent Watermarks [7.661766773170363]
We present a novel methodology for neural network ownership verification based on latent watermarks.
We show that our approach offers strong defense against backdoor detection, backdoor removal and surrogate model attacks.
arXiv Detail & Related papers (2023-06-15T17:45:03Z)
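A hedged sketch of what a latent-watermark objective might look like: the classifier head is trained so that a secret latent-space shift flips predictions to a chosen class, leaving no input-space trigger for a defender to find. This illustrates the concept only and is not OVLA's published loss.

```python
import torch
import torch.nn.functional as F

def latent_watermark_loss(features, head, secret_key, target_class):
    """Illustrative latent-watermark objective (not OVLA's published loss).

    `features` are penultimate-layer activations f(x); `head` is the
    classifier g. The head is trained so that shifting any latent vector
    by the owner's secret key flips the prediction to `target_class`,
    while the usual task loss (applied elsewhere) keeps unshifted latents
    behaving normally.
    """
    shifted = features + secret_key   # secret direction known only to owner
    tgt = torch.full((features.size(0),), target_class,
                     dtype=torch.long, device=features.device)
    return F.cross_entropy(head(shifted), tgt)
```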
Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training [50.308254937851814]
Personal data (e.g. images) could be exploited inappropriately to train deep neural network models without authorization.
By embedding a watermark signature into user images with a specialized linear color transformation, any neural model trained on those images is imprinted with the signature.
This is the first work to protect users' personal data from unauthorized usage in neural network training.
arXiv Detail & Related papers (2021-09-18T22:10:37Z)
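The signature idea can be illustrated with a near-identity 3x3 color matrix derived from a user key; the paper's exact transform family and statistical verification test are its own.

```python
import torch

def color_signature(key_seed: int, strength: float = 0.05) -> torch.Tensor:
    """Derive a user-specific 3x3 color matrix close to the identity."""
    g = torch.Generator().manual_seed(key_seed)
    noise = (torch.rand(3, 3, generator=g) - 0.5) * 2 * strength
    return torch.eye(3) + noise

def sign_image(img: torch.Tensor, key: torch.Tensor) -> torch.Tensor:
    """Apply the linear color transform to a (3, H, W) image in [0, 1]."""
    c, h, w = img.shape
    flat = img.reshape(c, -1)                # 3 x (H*W) matrix of pixels
    return (key @ flat).reshape(c, h, w).clamp(0.0, 1.0)
```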
Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark changes classification performance by less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
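For intuition, here is a toy reversible embedding in NumPy: watermark bits overwrite mantissa LSBs of selected weights, and the displaced bits are kept so the original weights can be restored exactly. The paper's actual algorithm is different; this is only a minimal stand-in for the reversibility property.

```python
import numpy as np

def embed_reversible(weights: np.ndarray, bits: np.ndarray):
    """Embed a 0/1 bit string into float32 weights, keeping restoration data.

    The least-significant mantissa bit of the first len(bits) weights is
    overwritten; the displaced bits are returned so the exact original
    weights can be recovered, which is what makes the scheme reversible.
    """
    flat = weights.astype(np.float32).ravel()
    as_int = flat.view(np.uint32)            # reinterpret, shares memory
    n = len(bits)
    saved = (as_int[:n] & 1).copy()          # restoration data
    as_int[:n] = (as_int[:n] & ~np.uint32(1)) | bits.astype(np.uint32)
    return flat.reshape(weights.shape), saved

def extract_and_restore(marked: np.ndarray, saved: np.ndarray):
    """Read the watermark back out, then undo the embedding bit-exactly."""
    flat = marked.astype(np.float32).ravel()
    as_int = flat.view(np.uint32)
    n = len(saved)
    bits = (as_int[:n] & 1).copy()
    as_int[:n] = (as_int[:n] & ~np.uint32(1)) | saved
    return bits, flat.reshape(marked.shape)
```

Flipping only mantissa LSBs perturbs each weight by at most one unit in the last place, which is consistent with a sub-0.5% effect on accuracy.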
Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if an attacker has full knowledge of it, the model can easily be stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
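The framework watermarks the model's *outputs*, so surrogate models trained on those outputs inherit the mark. A fixed-pattern stand-in sketched below (the paper uses learned embedding and extraction networks):

```python
import torch

def watermark_outputs(outputs: torch.Tensor, pattern: torch.Tensor,
                      alpha: float = 0.01) -> torch.Tensor:
    """Hide a faint secret pattern in every image the model emits."""
    return (outputs + alpha * pattern).clamp(0.0, 1.0)

def detect(images: torch.Tensor, pattern: torch.Tensor) -> torch.Tensor:
    """Normalized correlation of a batch (B, C, H, W) with the pattern.

    Outputs of a surrogate model trained on watermarked results should
    correlate with the pattern noticeably more than clean images do.
    """
    flat_i = images.flatten(1)
    flat_i = flat_i - flat_i.mean(dim=1, keepdim=True)
    flat_p = (pattern - pattern.mean()).flatten()
    return (flat_i @ flat_p) / (flat_i.norm(dim=1) * flat_p.norm() + 1e-8)
```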
An Automated and Robust Image Watermarking Scheme Based on Deep Neural Networks [8.765045867163648]
A robust and blind image watermarking scheme based on deep neural networks is proposed.
The robustness of the proposed scheme is achieved without requiring any prior knowledge or adversarial examples of possible attacks.
arXiv Detail & Related papers (2020-07-05T22:23:31Z)
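Such schemes are typically trained end-to-end as an encoder/decoder pair with random differentiable distortions inserted between them, which is how robustness is obtained without modelling any specific attack. A toy PyTorch skeleton with an architecture invented purely for illustration:

```python
import torch
import torch.nn as nn

class Encoder(nn.Module):
    """Hides an L-bit message in an image (toy architecture)."""
    def __init__(self, msg_len=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + msg_len, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1))
    def forward(self, img, msg):
        b, _, h, w = img.shape
        # Broadcast the message across the spatial grid as extra channels.
        msg_map = msg.view(b, -1, 1, 1).expand(b, msg.size(1), h, w)
        return img + self.net(torch.cat([img, msg_map], dim=1))

class Decoder(nn.Module):
    """Recovers the message bits from a possibly distorted image."""
    def __init__(self, msg_len=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, msg_len))
    def forward(self, img):
        return self.net(img)   # train with BCE-with-logits against the bits
```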
Neural Network Laundering: Removing Black-Box Backdoor Watermarks from Deep Neural Networks [17.720400846604907]
We propose a neural network "laundering" algorithm to remove black-box backdoor watermarks from neural networks.
For all backdoor watermarking methods addressed in this paper, we find that the robustness of the watermark is significantly weaker than the original claims.
arXiv Detail & Related papers (2020-04-22T19:02:47Z)
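A crude laundering baseline in the same spirit, combining global magnitude pruning with clean-data fine-tuning; the paper's method additionally analyzes activations to locate and reset watermark-carrying neurons, which this sketch omits.

```python
import torch
import torch.nn.utils.prune as prune

def launder(model, clean_loader, epochs=5, prune_amount=0.3, lr=1e-4):
    """Crude laundering baseline: global magnitude pruning, then fine-tuning.

    This keeps only the two generic steps that most watermark removal
    attacks share; the paper's full algorithm is more targeted.
    """
    # 1) Prune the smallest-magnitude weights, which often carry backdoor
    #    behaviour that clean-data accuracy does not depend on.
    targets = [(m, "weight") for m in model.modules()
               if isinstance(m, (torch.nn.Conv2d, torch.nn.Linear))]
    prune.global_unstructured(targets, pruning_method=prune.L1Unstructured,
                              amount=prune_amount)

    # 2) Fine-tune on whatever clean data is available, recovering
    #    main-task accuracy without re-learning the trigger.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in clean_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```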