Reversible Watermarking in Deep Convolutional Neural Networks for
Integrity Authentication
- URL: http://arxiv.org/abs/2104.04268v1
- Date: Fri, 9 Apr 2021 09:32:21 GMT
- Authors: Xiquan Guan, Huamin Feng, Weiming Zhang, Hang Zhou, Jie Zhang, and
Nenghai Yu
- Abstract summary: We propose a reversible watermarking algorithm for integrity authentication.
The influence of embedding reversible watermarking on the classification performance is less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
- Score: 78.165255859254
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep convolutional neural networks have made outstanding
contributions to many fields, such as computer vision, in the past few
years, and many researchers have published well-trained networks for
download. However, recent studies have raised serious concerns about model
integrity due to model-reuse attacks and backdoor attacks. To protect these
open-source networks, many algorithms, such as watermarking, have been
proposed. However, these existing algorithms modify the contents of the
network permanently and are therefore unsuitable for integrity
authentication. In this paper, we propose a reversible watermarking
algorithm for integrity authentication. Specifically, we formulate the
reversible watermarking problem for deep convolutional neural networks and
use pruning theory from model compression to construct a host sequence into
which the watermark information is embedded by histogram shifting. As the
experiments show, embedding the reversible watermark affects classification
performance by less than 0.5%, and the parameters of the model can be fully
recovered after the watermark is extracted. At the same time, the reversible
watermark can be used to verify the model's integrity: if the model is
modified illegally, the authentication information generated from the
original model will differ completely from the extracted watermark
information.
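To make the embedding step concrete, below is a minimal, hypothetical sketch
of peak/zero-bin histogram shifting applied to a pruning-selected host
sequence. The function names, the 8-bit quantization, and the single
peak/zero-bin scheme are illustrative assumptions; the paper's actual
construction of the host sequence and its handling of floating-point weights
may differ.

```python
import numpy as np


def select_host(weights, ratio=0.01):
    # Pruning-style host selection (assumed): take the indices of the
    # smallest-magnitude weights, which affect classification the least.
    flat = np.asarray(weights).ravel()
    k = max(1, int(ratio * flat.size))
    return np.argsort(np.abs(flat))[:k]


def quantize(values, scale=127.0):
    # Illustrative float-to-int mapping; the paper's exact reversible
    # treatment of floating-point weights is elided here.
    return np.clip(np.round(values * scale), -128, 127).astype(np.int16)


def embed(host, bits):
    # Peak/zero-bin histogram shifting: empty the bin next to the peak P
    # by shifting every bin strictly between P and the zero bin Z one
    # step toward Z, then encode one payload bit per peak-valued element.
    offset = int(host.min())
    hist = np.bincount(host - offset)
    p = int(np.argmax(hist)) + offset              # most populated bin
    zeros = np.flatnonzero(hist == 0)
    if zeros.size == 0:
        raise ValueError("no empty histogram bin: zero capacity")
    z = int(zeros[0]) + offset                     # first empty bin
    step = 1 if z > p else -1
    peak_idx = np.flatnonzero(host == p)
    if len(bits) > peak_idx.size:
        raise ValueError("payload exceeds peak-bin capacity")
    marked = host.copy()
    between = (host * step > p * step) & (host * step < z * step)
    marked[between] += step
    for i, b in zip(peak_idx, bits):
        if b:
            marked[i] += step                      # bit 1 -> P + step
    return marked, p, step, z


def extract_and_restore(marked, p, step, z, n_bits):
    # Read the payload back in scan order, then undo the shift so the
    # host sequence (and hence the model) is recovered bit-exactly.
    bits = []
    for v in marked:
        if len(bits) == n_bits:
            break
        if v == p:
            bits.append(0)
        elif v == p + step:
            bits.append(1)
    restored = marked.copy()
    moved = (marked * step > p * step) & (marked * step <= z * step)
    restored[moved] -= step
    return bits, restored


# Toy round trip (in practice the host comes from select_host/quantize):
host = np.array([0, 0, 0, 0, 1, 1, 2, 3, 5], dtype=np.int16)
payload = [1, 0, 1]
marked, p, step, z = embed(host, payload)
bits, restored = extract_and_restore(marked, p, step, z, len(payload))
assert bits == payload and np.array_equal(restored, host)
```

Because extraction restores the host sequence exactly, the original
parameters can be recovered bit for bit; integrity would then be checked by
comparing the extracted payload against authentication information (e.g., a
hash) regenerated from the restored model, a comparison that fails for any
illegally modified model.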
Related papers
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has become popular recently; it allows the model owner to watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- An Unforgeable Publicly Verifiable Watermark for Large Language Models [84.2805275589553]
Current watermark detection algorithms require the secret key used in the watermark generation process, making them susceptible to security breaches and counterfeiting during public detection.
We propose an unforgeable publicly verifiable watermark algorithm named UPV that uses two different neural networks for watermark generation and detection, instead of using the same key at both stages.
arXiv Detail & Related papers (2023-07-30T13:43:27Z)
- On Function-Coupled Watermarks for Deep Neural Networks [15.478746926391146]
We propose a novel DNN watermarking solution that can effectively defend against watermark removal attacks.
Our key insight is to enhance the coupling of the watermark and model functionalities.
Results show a 100% watermark authentication success rate under aggressive watermark removal attacks.
arXiv Detail & Related papers (2023-02-08T05:55:16Z)
- Knowledge-Free Black-Box Watermark and Ownership Proof for Image Classification Neural Networks [9.117248639119529]
We propose a knowledge-free black-box watermarking scheme for image classification neural networks.
A delicate encoding and verification protocol is designed to ensure the scheme's security against knowledgeable adversaries.
Experimental results demonstrate the functionality-preserving capability and security of the proposed watermarking scheme.
arXiv Detail & Related papers (2022-04-09T18:09:02Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking a Deep Neural Network (DNN) model designed for the textual domain.
The proposed embedding procedure takes place during model training, making the watermark verification stage straightforward.
The experimental results show that watermarked models have the same accuracy as the original ones.
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, if the attacker has full knowledge of it, the model can easily be stolen by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [26.050649487499626]
Trading deep models is in high demand and lucrative nowadays, but naive trading schemes typically involve potential risks related to copyright and trustworthiness.
We propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.
arXiv Detail & Related papers (2020-08-02T06:25:26Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this information and is not responsible for any consequences of its use.