Don't Forget to Sign the Gradients!
- URL: http://arxiv.org/abs/2103.03701v1
- Date: Fri, 5 Mar 2021 14:24:32 GMT
- Title: Don't Forget to Sign the Gradients!
- Authors: Omid Aramoon, Pin-Yu Chen, Gang Qu
- Abstract summary: GradSigns is a novel watermarking framework for deep neural networks (DNNs)
- Score: 60.98885980669777
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Engineering a top-notch deep learning model is an expensive process
that involves collecting data, hiring experts in machine learning, and
provisioning substantial computational resources. For that reason, deep
learning models are considered valuable Intellectual Properties (IPs) of their
vendors. To ensure reliable commercialization of deep learning models, it is
crucial to develop techniques that protect model vendors against IP
infringement. One such technique that has recently shown great promise is
digital watermarking. However, current watermarking approaches can embed only
a very limited amount of information and are vulnerable to watermark-removal
attacks. In this paper, we present GradSigns, a novel watermarking framework
for deep neural networks (DNNs). GradSigns embeds the owner's signature into
the gradient of the cross-entropy cost function with respect to the model's
inputs. Our approach has a negligible impact on the performance of the
protected model and allows model vendors to remotely verify the watermark
through prediction APIs. We evaluate GradSigns on DNNs trained for different
image classification tasks on the CIFAR-10, SVHN, and YTF datasets.
Experimental results show that GradSigns is robust against all known
counter-watermark attacks and can embed a large amount of information into
DNNs.
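The mechanism described above is concrete enough to sketch. Below is a minimal, hypothetical PyTorch rendition of the two halves the abstract names: embedding, where a regularizer pushes the sign of the input gradient of the cross-entropy cost, sampled at secret carrier coordinates, toward the owner's ±1 signature bits; and verification, where those signs are read back, either with autograd or with finite differences over a prediction API. All identifiers here (`carrier_coords`, `key_images`, `lam`, `predict_proba`) are illustrative assumptions, not the paper's API.

```python
import torch
import torch.nn.functional as F

def watermark_regularized_loss(model, key_images, key_labels,
                               signature, carrier_coords, lam=0.1):
    """Hypothetical GradSigns-style training loss (a sketch, not the paper's code).

    Adds a penalty that pushes sign(dCE/dx) at secret carrier coordinates
    toward the owner's +/-1 signature bits.
    """
    x = key_images.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), key_labels)
    # Gradient of the cross-entropy cost w.r.t. the *inputs*, per the abstract.
    (grads,) = torch.autograd.grad(ce, x, create_graph=True)
    carrier = grads.flatten(1)[:, carrier_coords].mean(dim=0)
    # Hinge penalty: nonzero whenever a carrier gradient's sign disagrees with its bit.
    penalty = F.relu(-signature * carrier).sum()
    return ce + lam * penalty

def extract_signature(model, key_images, key_labels, carrier_coords):
    """White-box readout: the embedded bits are the signs of the carrier gradients."""
    x = key_images.clone().requires_grad_(True)
    ce = F.cross_entropy(model(x), key_labels)
    (grads,) = torch.autograd.grad(ce, x)
    return torch.sign(grads.flatten(1)[:, carrier_coords].mean(dim=0))

def estimate_carrier_grads_via_api(predict_proba, x, y, carrier_coords, h=1e-3):
    """Remote readout over a prediction API, assuming (our assumption) that the
    API returns class probabilities for a single image. Central finite
    differences give a zeroth-order estimate of the same input gradients."""
    g = torch.zeros(len(carrier_coords))
    for i, c in enumerate(carrier_coords):
        for step, sign in ((h, 1.0), (-h, -1.0)):
            xp = x.clone()
            xp.view(-1)[c] += step
            ce = -torch.log(predict_proba(xp)[y])  # cross-entropy from a remote call
            g[i] += sign * ce / (2 * h)
    return g
```

In a real training loop this regularized loss would presumably be combined with the ordinary task loss on regular batches; at verification time, the owner would check that the bit-error rate between the extracted signs and the signature falls below a decision threshold.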
Related papers
- DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking [15.394110881491773]
DeepiSign-G is a versatile watermarking approach designed for comprehensive verification of leading DNN architectures, including CNNs and RNNs.
Unlike traditional hashing techniques, DeepiSign-G allows substantial metadata incorporation directly within the model, enabling detailed, self-contained tracking and verification.
We demonstrate DeepiSign-G's applicability across various architectures, including CNN models (VGG, ResNets, DenseNet) and RNNs (text sentiment classifiers).
arXiv Detail & Related papers (2024-07-01T13:15:38Z)
- ClearMark: Intuitive and Robust Model Watermarking via Transposed Model Training [50.77001916246691]
This paper introduces ClearMark, the first DNN watermarking method designed for intuitive human assessment.
ClearMark embeds visible watermarks, enabling human decision-making without rigid value thresholds.
It shows an 8,544-bit watermark capacity comparable to the strongest existing work.
arXiv Detail & Related papers (2023-10-25T08:16:55Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, allowing the model owner to watermark the model.
We propose a mini-max formulation to find watermark-removed models and recover their watermark behavior (a generic sketch of this mini-max loop appears after this list).
Our method improves the robustness of model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- On Function-Coupled Watermarks for Deep Neural Networks [15.478746926391146]
We propose a novel DNN watermarking solution that can effectively defend against watermark removal attacks.
Our key insight is to enhance the coupling of the watermark and model functionalities.
Results show a 100% watermark authentication success rate under aggressive watermark removal attacks.
arXiv Detail & Related papers (2023-02-08T05:55:16Z)
- SSLGuard: A Watermarking Scheme for Self-supervised Learning Pre-trained Encoders [9.070481370120905]
We propose SSLGuard, the first watermarking algorithm for pre-trained encoders.
SSLGuard is effective in watermark injection and verification, and is robust against model stealing and other watermark removal attacks.
arXiv Detail & Related papers (2022-01-27T17:41:54Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can easily be "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking a Deep Neural Network (DNN) model designed for the textual domain.
The proposed embedding procedure takes place during the model's training, making the watermark verification stage straightforward.
The experimental results show that watermarked models have the same accuracy as the original ones.
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
- Deep Model Intellectual Property Protection via Deep Watermarking [122.87871873450014]
Deep neural networks are exposed to serious IP infringement risks.
Given a target deep model, an attacker who knows its full information can easily steal it by fine-tuning.
We propose a new model watermarking framework for protecting deep networks trained for low-level computer vision or image processing tasks.
arXiv Detail & Related papers (2021-03-08T18:58:21Z)
- Removing Backdoor-Based Watermarks in Neural Networks with Limited Data [26.050649487499626]
Trading deep models is in high demand and lucrative nowadays.
However, naive trading schemes typically involve potential risks related to copyright and trustworthiness.
We propose a novel backdoor-based watermark removal framework using limited data, dubbed WILD.
arXiv Detail & Related papers (2020-08-02T06:25:26Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
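Among the entries above, the mini-max formulation of "Towards Robust Model Watermark via Reducing Parametric Vulnerability" is concrete enough for the sketch referenced in that entry. The inner loop perturbs the weights within a small L∞ ball to maximize the watermark loss, emulating a parametric removal attack; the outer loop then minimizes the task and watermark losses at that perturbed point, in the spirit of adversarial weight perturbation. Every name, loss, and hyperparameter below is our generic guess at such a loop, not the paper's algorithm.

```python
import torch

def minimax_watermark_step(model, task_batch, wm_batch, task_loss_fn,
                           wm_loss_fn, optimizer, inner_steps=3, eps=1e-2):
    """One hypothetical mini-max training step (a sketch of the idea only).

    Inner maximization: sign-ascent on the watermark loss within an eps-ball
    around the current weights, simulating a watermark-removal perturbation.
    Outer minimization: gradients taken at the perturbed weights are applied
    to the clean weights (an AWP-style shortcut).
    """
    params = [p for p in model.parameters() if p.requires_grad]
    originals = [p.detach().clone() for p in params]

    # --- inner: find a watermark-removing perturbation ---
    for _ in range(inner_steps):
        loss = wm_loss_fn(model, wm_batch)
        grads = torch.autograd.grad(loss, params)
        with torch.no_grad():
            for p, g, p0 in zip(params, grads, originals):
                p.add_((eps / inner_steps) * g.sign())       # ascend on wm loss
                p.copy_(p0 + (p - p0).clamp(-eps, eps))      # stay in the ball

    # --- outer: restore task and watermark behavior under that attack ---
    optimizer.zero_grad()
    (task_loss_fn(model, task_batch) + wm_loss_fn(model, wm_batch)).backward()
    with torch.no_grad():
        for p, p0 in zip(params, originals):
            p.copy_(p0)  # restore clean weights; .grad from the attacked point survives
    optimizer.step()
```

In a full training run this step would repeat over batches; the entry's claim is that weights trained this way keep their watermark behavior under parametric changes such as fine-tuning and other removal attacks.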
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.