Reversible Quantization Index Modulation for Static Deep Neural Network
Watermarking
- URL: http://arxiv.org/abs/2305.17879v2
- Date: Tue, 27 Jun 2023 15:49:51 GMT
- Title: Reversible Quantization Index Modulation for Static Deep Neural Network
Watermarking
- Authors: Junren Qin, Shanxiang Lyu, Fan Yang, Jiarui Deng, Zhihua Xia, Xiaochun
Cao
- Abstract summary: Reversible data hiding (RDH) methods offer a potential solution, but existing approaches suffer from weaknesses in terms of usability, capacity, and fidelity.
We propose a novel RDH-based static DNN watermarking scheme using quantization index modulation (QIM).
Our scheme incorporates a novel approach based on a one-dimensional quantizer for watermark embedding.
- Score: 57.96787187733302
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Static deep neural network (DNN) watermarking techniques typically employ
irreversible methods to embed watermarks into the DNN model weights. However,
this approach causes permanent damage to the watermarked model and fails to
meet the requirements of integrity authentication. Reversible data hiding (RDH)
methods offer a potential solution, but existing approaches suffer from
weaknesses in terms of usability, capacity, and fidelity, hindering their
practical adoption. In this paper, we propose a novel RDH-based static DNN
watermarking scheme using quantization index modulation (QIM). Our scheme
incorporates a novel approach based on a one-dimensional quantizer for
watermark embedding. Furthermore, we design two schemes to address the
challenges of integrity protection and legitimate authentication for DNNs.
Through simulation results on training loss and classification accuracy, we
demonstrate the feasibility and effectiveness of our proposed schemes,
highlighting their superior adaptability compared to existing methods.
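The abstract does not give implementation details, but the mechanics of QIM with a one-dimensional quantizer, and the role the quantization residuals play in making the embedding reversible, can be shown with a minimal sketch. All names, the step size, and the residual bookkeeping below are illustrative assumptions, not the paper's actual construction:

```python
import numpy as np

STEP = 1e-3  # quantization step; real schemes trade fidelity against robustness here

def qim_embed(weights, bits, step=STEP):
    """Embed one bit per selected weight by snapping it onto the lattice
    chosen by that bit: bit 0 -> multiples of step, bit 1 -> multiples of
    step shifted by step/2. Returns the marked weights and the residuals
    needed for exact restoration (the reversible part of RDH)."""
    w = np.asarray(weights, dtype=np.float64)
    marked = w.copy()
    for i, b in enumerate(bits):  # assumes len(bits) <= len(weights)
        dither = 0.5 * step * b
        marked[i] = np.round((w[i] - dither) / step) * step + dither
    residuals = w[: len(bits)] - marked[: len(bits)]
    return marked, residuals

def qim_extract(marked, n_bits, step=STEP):
    """Decode each bit as the lattice the marked weight lies closest to."""
    bits = []
    for i in range(n_bits):
        d0 = abs(marked[i] - np.round(marked[i] / step) * step)
        on_shifted = np.round((marked[i] - 0.5 * step) / step) * step + 0.5 * step
        d1 = abs(marked[i] - on_shifted)
        bits.append(0 if d0 <= d1 else 1)
    return bits

def qim_restore(marked, residuals):
    """Undo the embedding bit-exactly, recovering the original weights."""
    restored = np.array(marked, dtype=np.float64)
    restored[: len(residuals)] += residuals
    return restored
```

Exact restoration via the stored residuals is what lets a reversible scheme support integrity authentication: after verification, the unmarked model can be recovered and re-checked against its original state.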
Related papers
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in generated content.
Watermarking offers a promising defense against such attacks by embedding unique identifiers into model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor in the input/output space of diffusion models, our method embeds the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
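The abstract names the defense but not its mechanics; a minimal sketch of one watermark-injection step, assuming a PyTorch-style loop, a pre-built backdoor trigger set, and Gaussian parameter noise (all assumptions, not the paper's recipe):

```python
import torch

def inject_watermark_step(model, trigger_batch, target_labels,
                          optimizer, loss_fn, noise_std=1e-3):
    """One training step that fits the model to the backdoor trigger set
    while randomly perturbing its parameters, so the embedded watermark
    survives later perturbation-style removal attacks."""
    with torch.no_grad():
        for p in model.parameters():
            p.add_(noise_std * torch.randn_like(p))  # random perturbation
    optimizer.zero_grad()
    loss = loss_fn(model(trigger_batch), target_labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```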
- Rethinking White-Box Watermarks on Deep Learning Models under Neural Structural Obfuscation [24.07604618918671]
Copyright protection for deep neural networks (DNNs) is an urgent need for AI corporations.
White-box watermarking is believed to be accurate, credible, and secure against most known watermark removal attacks.
We present the first systematic study of how mainstream white-box watermarks are commonly vulnerable to neural structural obfuscation with dummy neurons.
arXiv Detail & Related papers (2023-03-17T02:21:41Z)
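To make the threat concrete: a dummy neuron changes a model's parameter layout without changing its function, which is enough to defeat white-box detectors that look for the watermark in specific weight matrices. A minimal NumPy sketch for a two-layer perceptron y = W2 @ act(W1 @ x + b1), with all names illustrative:

```python
import numpy as np

def add_dummy_neuron(W1, b1, W2):
    """Append a functionally inert neuron to the hidden layer: arbitrary
    incoming weights, zero outgoing weights. The network computes exactly
    the same function, but every matrix a white-box watermark detector
    inspects now has a different shape and content."""
    n_in = W1.shape[1]
    W1_obf = np.vstack([W1, np.random.randn(1, n_in)])    # new incoming row
    b1_obf = np.append(b1, np.random.randn())
    W2_obf = np.hstack([W2, np.zeros((W2.shape[0], 1))])  # zero outgoing column
    return W1_obf, b1_obf, W2_obf
```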
- Verifying Integrity of Deep Ensemble Models by Lossless Black-box Watermarking with Sensitive Samples [17.881686153284267]
We propose a novel black-box watermarking method for deep ensemble models (DEMs).
In the proposed method, a certain number of sensitive samples are carefully selected by mimicking real-world DEM attacks.
By analyzing the target DEM's predictions on these carefully crafted sensitive samples, we can verify its integrity.
arXiv Detail & Related papers (2022-05-09T09:40:20Z)
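Operationally, the verification above reduces to comparing the model's behavior on the sensitive samples against a trusted reference recorded at release time. A hedged sketch; the hash-based digest and the model_predict interface are assumptions, not the paper's protocol:

```python
import hashlib
import numpy as np

def fingerprint(model_predict, sensitive_samples):
    """Digest of the model's predictions on the pre-selected sensitive
    samples (which the paper selects by mimicking real-world DEM attacks)."""
    preds = np.concatenate(
        [np.atleast_1d(model_predict(x)) for x in sensitive_samples])
    return hashlib.sha256(preds.tobytes()).hexdigest()

def verify_integrity(model_predict, sensitive_samples, reference_digest):
    """Black-box check: tampering that flips any prediction on a
    sensitive sample changes the digest."""
    return fingerprint(model_predict, sensitive_samples) == reference_digest
```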
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", on which a new deep structure-aligned model watermarking algorithm is built.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
Embedding the reversible watermark affects classification performance by less than 0.5%.
At the same time, the model's integrity can be verified by applying the reversible watermark.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking deep neural network (DNN) models designed for the textual domain.
The proposed embedding procedure takes place during training, making the watermark verification stage straightforward.
Experimental results show that watermarked models achieve the same accuracy as the original ones.
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
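The abstract does not say how inverse document frequency enters the scheme; one plausible reading is that IDF flags rare words from which unlikely trigger inputs can be composed. The sketch below shows only that standard IDF computation, under that assumption; the selection helper is hypothetical:

```python
import math
from collections import Counter

def idf_scores(documents):
    """Standard IDF: idf(t) = log(N / df(t)), df = document frequency."""
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc.lower().split()))
    return {term: math.log(n_docs / count) for term, count in df.items()}

def pick_trigger_words(documents, k=10):
    """Hypothetical helper: rank words by rarity as candidates for
    watermark trigger sentences unlikely to appear in normal inputs."""
    scores = idf_scores(documents)
    return sorted(scores, key=scores.get, reverse=True)[:k]
```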
- Spread-Transform Dither Modulation Watermarking of Deep Neural Network [33.63490683496175]
We propose a new DNN watermarking algorithm that leverages the watermarking-with-side-information paradigm to decrease the obtrusiveness of the watermark and increase its payload.
In particular, the new scheme exploits the main ideas of ST-DM (Spread-Transform Dither Modulation) watermarking to improve the performance of a recently proposed algorithm based on conventional spread spectrum (SS).
arXiv Detail & Related papers (2020-12-28T10:23:17Z)
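ST-DM is the side-informed relative of the per-weight QIM in the headline paper: rather than quantizing each weight directly, it quantizes the correlation of a weight block with a secret spreading vector, spreading the embedding distortion across many weights. A minimal sketch; the step size, spreading-vector handling, and function names are assumptions:

```python
import numpy as np

def stdm_embed(weights, bit, step=1e-2, seed=0):
    """Embed one bit into a block of weights: project onto a secret unit
    spreading vector u, snap the projection onto the dithered lattice
    selected by the bit, and move the block only along u."""
    rng = np.random.default_rng(seed)
    u = rng.standard_normal(len(weights))
    u /= np.linalg.norm(u)
    w = np.asarray(weights, dtype=np.float64)
    c = w @ u                                   # spread-transform correlation
    dither = 0.5 * step * bit
    c_q = np.round((c - dither) / step) * step + dither
    return w + (c_q - c) * u, u

def stdm_extract(marked, u, step=1e-2):
    """Decode the bit as the lattice the correlation lies closest to."""
    c = np.asarray(marked, dtype=np.float64) @ u
    d0 = abs(c - np.round(c / step) * step)
    c1 = np.round((c - 0.5 * step) / step) * step + 0.5 * step
    return 0 if d0 <= abs(c - c1) else 1
```

In practice the spreading vector would be derived from a secret key so the detector can regenerate it without storing u explicitly.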