ROSE: A RObust and SEcure DNN Watermarking
- URL: http://arxiv.org/abs/2206.11024v1
- Date: Wed, 22 Jun 2022 12:46:14 GMT
- Title: ROSE: A RObust and SEcure DNN Watermarking
- Authors: Kassem Kallas and Teddy Furon
- Abstract summary: This paper proposes a lightweight, robust, and secure black-box DNN watermarking protocol.
It takes advantage of cryptographic one-way functions as well as the injection of in-task key image-label pairs during the training process.
- Score: 14.2215880080698
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Protecting the Intellectual Property rights of DNN models is of primary
importance prior to their deployment. So far, the proposed methods either
necessitate changes to internal model parameters or the machine learning
pipeline, or they fail to meet both the security and robustness requirements.
This paper proposes a lightweight, robust, and secure black-box DNN
watermarking protocol that takes advantage of cryptographic one-way functions
as well as the injection of in-task key image-label pairs during the training
process. These pairs are later used to prove DNN model ownership during
testing. The main feature is that the value of the proof and its security are
measurable. Extensive experiments, watermarking image classification models
on various datasets and exposing them to a variety of attacks, show that the
protocol provides protection while maintaining an adequate level of security
and robustness.
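As a rough illustration of the protocol's two ingredients, the sketch below derives key image-label pairs from a secret through a cryptographic one-way function (SHA-256) and verifies ownership by querying the suspect model in black-box fashion. This is a minimal sketch, not the paper's construction: ROSE uses in-task key images, whereas the random images, function names, and decision threshold here are illustrative assumptions.

```python
import hashlib
import numpy as np

def key_pairs(secret: bytes, n_pairs: int, shape=(32, 32, 3), n_classes=10):
    """Derive reproducible key image-label pairs from a secret via SHA-256.
    The secret cannot be recovered from the pairs (one-wayness), yet anyone
    holding the secret can regenerate them exactly."""
    pairs = []
    for i in range(n_pairs):
        digest = hashlib.sha256(secret + i.to_bytes(4, "big")).digest()
        rng = np.random.default_rng(int.from_bytes(digest[:8], "big"))
        # ROSE uses in-task images; random pixels stand in here for brevity.
        image = rng.random(shape, dtype=np.float32)
        label = int.from_bytes(digest[8:12], "big") % n_classes
        pairs.append((image, label))
    return pairs

def verify_ownership(model_predict, secret: bytes, n_pairs=30, threshold=0.8):
    """Black-box check: does the suspect model answer the key labels on the
    key images far above chance? (threshold is an illustrative choice)"""
    pairs = key_pairs(secret, n_pairs)
    hits = sum(int(model_predict(img)) == lbl for img, lbl in pairs)
    return hits / n_pairs >= threshold
```

During training the pairs are simply appended to the training set, so the network memorizes them alongside the main task; a match rate far above chance at test time is then evidence only the secret holder can produce.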
Related papers
- DeepiSign-G: Generic Watermark to Stamp Hidden DNN Parameters for Self-contained Tracking [15.394110881491773]
DeepiSign-G is a versatile watermarking approach designed for comprehensive verification of leading DNN architectures, including CNNs and RNNs.
Unlike traditional hashing techniques, DeepiSign-G allows substantial metadata incorporation directly within the model, enabling detailed, self-contained tracking and verification.
We demonstrate DeepiSign-G's applicability across various architectures, including CNN models (VGG, ResNets, DenseNet) and RNNs (text sentiment classifiers).
arXiv Detail & Related papers (2024-07-01T13:15:38Z)
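The summary above says metadata is incorporated directly within the model. One generic way to do that, sketched below, hides a byte string in the least-significant mantissa bits of float32 weights; this is an illustrative stand-in with no claim to be DeepiSign-G's actual mechanism, and the function names are invented for the example.

```python
import numpy as np

def embed_metadata(weights, metadata: bytes):
    """Hide metadata in the least-significant mantissa bit of float32 weights.
    A generic illustration only; not a claim about DeepiSign-G's scheme."""
    flat = np.asarray(weights, dtype=np.float32).ravel().copy()
    bits = np.unpackbits(np.frombuffer(metadata, dtype=np.uint8))
    assert bits.size <= flat.size, "payload larger than parameter count"
    view = flat.view(np.uint32)
    view[:bits.size] = (view[:bits.size] & ~np.uint32(1)) | bits
    return flat.reshape(np.asarray(weights).shape)

def extract_metadata(weights, n_bytes: int) -> bytes:
    """Read the payload back from the parameter LSBs."""
    flat = np.asarray(weights, dtype=np.float32).ravel()
    bits = (flat.view(np.uint32)[:n_bytes * 8] & np.uint32(1)).astype(np.uint8)
    return np.packbits(bits).tobytes()
```

LSB payloads leave accuracy essentially untouched but are fragile to retraining, which suits integrity verification rather than robust ownership claims.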
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
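The perturbation idea can be pictured as follows: while fine-tuning on watermark triggers, temporarily jitter the parameters at each step so the watermark is learned over a whole neighbourhood of weights and is harder to strip with small parameter changes. This is a loose reading; the noise model, sigma, and training loop below are assumptions, not the paper's procedure.

```python
import torch
from itertools import cycle

def inject_watermark(model, trigger_loader, n_steps=200, sigma=1e-3, lr=1e-4):
    """Fine-tune on watermark triggers while randomly perturbing parameters
    each step (a sketch of noise injection for removal resistance)."""
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    batches = cycle(trigger_loader)
    for _ in range(n_steps):
        x, y = next(batches)
        with torch.no_grad():                      # add a random perturbation
            noise = [sigma * torch.randn_like(p) for p in model.parameters()]
            for p, n in zip(model.parameters(), noise):
                p.add_(n)
        opt.zero_grad()
        loss_fn(model(x), y).backward()            # gradient at perturbed point
        opt.step()
        with torch.no_grad():                      # undo the perturbation
            for p, n in zip(model.parameters(), noise):
                p.sub_(n)
    return model
```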
- Mixer: DNN Watermarking using Image Mixup [14.2215880080698]
This paper proposes a lightweight, reliable, and secure DNN watermarking that attempts to establish strong ties between the main task and the watermarking task.
The samples triggering the watermarking task are generated using image Mixup either from training or testing samples.
arXiv Detail & Related papers (2022-12-06T08:09:53Z)
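The trigger-generation step can be illustrated with standard image Mixup: convexly combine pairs of in-task samples and assign the mixtures owner-chosen key labels. The pairing and labelling rules below are assumptions for illustration, not the paper's exact recipe.

```python
import numpy as np

def make_mixup_triggers(images, n_triggers, n_classes, seed=0xB10B):
    """Mix random pairs of in-task samples (image Mixup) into watermark
    triggers carrying pseudo-random key labels."""
    rng = np.random.default_rng(seed)
    triggers = []
    for _ in range(n_triggers):
        i, j = rng.choice(len(images), size=2, replace=False)
        lam = rng.uniform(0.3, 0.7)           # mixing coefficient
        x_mix = lam * images[i] + (1.0 - lam) * images[j]
        y_key = int(rng.integers(n_classes))  # owner-chosen key label
        triggers.append((x_mix, y_key))
    return triggers
```

Seeding the generator keeps the triggers reproducible, so the owner can regenerate the exact set at verification time.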
- Black-box Dataset Ownership Verification via Backdoor Watermarking [67.69308278379957]
We formulate the protection of released datasets as verifying whether they are adopted for training a (suspicious) third-party model.
We propose to embed external patterns via backdoor watermarking for the ownership verification to protect them.
Specifically, we exploit poison-only backdoor attacks (e.g., BadNets) for dataset watermarking and design a hypothesis-test-guided method for dataset verification.
arXiv Detail & Related papers (2022-08-04T05:32:20Z)
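A minimal version of that pipeline: stamp a trigger patch onto some images before releasing the dataset, then test whether a suspect model predicts the target label on stamped inputs significantly more often than chance. The patch geometry, chance rate, and significance level below are placeholder assumptions, and the paper's hypothesis test may differ in detail.

```python
import numpy as np
from scipy.stats import binomtest

def stamp(images, patch, corner=(0, 0)):
    """Stamp a small trigger patch (BadNets-style) onto a batch of images of
    shape (N, H, W, C); the watermarked copies get a fixed target label."""
    out = np.array(images, copy=True)
    r, c = corner
    h, w = patch.shape[:2]
    out[:, r:r + h, c:c + w] = patch
    return out

def dataset_verified(suspect_predict, stamped, target_label,
                     chance=0.1, alpha=0.01):
    """Reject H0 ('model never saw our watermarked data') when the target
    label appears far above its chance rate on stamped inputs.
    chance and alpha are placeholder choices, not the paper's."""
    preds = np.asarray(suspect_predict(stamped))
    hits = int((preds == target_label).sum())
    return binomtest(hits, preds.size, chance,
                     alternative="greater").pvalue < alpha
```

Only training on the watermarked dataset creates the trigger-to-label shortcut, so a tiny p-value indicates the suspect model was trained on the protected data.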
- MOVE: Effective and Harmless Ownership Verification via Embedded External Features [109.19238806106426]
We propose an effective and harmless model ownership verification (MOVE) to defend against different types of model stealing simultaneously.
We conduct the ownership verification by verifying whether a suspicious model contains the knowledge of defender-specified external features.
In particular, we develop our MOVE method under both white-box and black-box settings to provide comprehensive model protection.
arXiv Detail & Related papers (2022-08-04T02:22:29Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking a Deep Neural Network (DNN) model designed for the textual domain.
The proposed embedding procedure takes place during the model's training, making the watermark verification stage straightforward.
The experimental results show that watermarked models have the same accuracy as the original ones.
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
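The title suggests inverse document frequency guides the watermark. One plausible reading, sketched below, selects the rarest (highest-IDF) words in the training corpus as trigger tokens, since they interfere least with the main task; the selection rule is an assumption for illustration, not the paper's confirmed method.

```python
import math
from collections import Counter

def idf_trigger_words(corpus, k=10):
    """Pick the k highest-IDF (rarest) words from a corpus of documents
    as candidate watermark trigger tokens."""
    df = Counter()
    for doc in corpus:
        df.update(set(doc.lower().split()))  # document frequency per word
    n = len(corpus)
    idf = {w: math.log(n / df[w]) for w in df}
    return sorted(idf, key=idf.get, reverse=True)[:k]
```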
- Don't Forget to Sign the Gradients! [60.98885980669777]
We present GradSigns, a novel watermarking framework for deep neural networks (DNNs).
arXiv Detail & Related papers (2021-03-05T14:24:32Z)
- Model Watermarking for Image Processing Networks [120.918532981871]
How to protect the intellectual property of deep models is a very important but seriously under-researched problem.
We propose the first model watermarking framework for protecting image processing models.
arXiv Detail & Related papers (2020-02-25T18:36:18Z)
This list is automatically generated from the titles and abstracts of the papers on this site.