A General Approach for Using Deep Neural Network for Digital
Watermarking
- URL: http://arxiv.org/abs/2003.12428v1
- Date: Sun, 8 Mar 2020 06:22:04 GMT
- Title: A General Approach for Using Deep Neural Network for Digital
Watermarking
- Authors: Yurui Ming, Weiping Ding, Zehong Cao, Chin-Teng Lin
- Abstract summary: We propose a general deep neural network (DNN) based watermarking method to fulfill this goal.
To the best of our knowledge, we are the first to propose a general way to perform watermarking using DNN.
- Score: 45.15137284053717
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Internet of Things (IoT) technologies allow digital content such
as images to be acquired on a massive scale. However, privacy and legal
considerations still demand intellectual content protection. In this paper, we
propose a general deep neural network
(DNN) based watermarking method to fulfill this goal. Instead of training a
neural network for protecting a specific image, we train on an image set and
use the trained model to protect a distinct test image set in bulk.
Subjective and objective evaluations alike confirm the superiority and
practicality of our proposed method. To demonstrate the
robustness of this general neural watermarking mechanism, commonly used
manipulations are applied to the watermarked image to examine the corresponding
extracted watermark, which still retains sufficient recognizable traits. To the
best of our knowledge, we are the first to propose a general way to perform
watermarking using a DNN. Considering its performance and economy, we conclude
that subsequent studies generalizing our work on utilizing DNNs for
intellectual content protection are a promising research direction.
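The embed, manipulate, and extract loop evaluated in the paper can be sketched end to end. This is a hypothetical stand-in, not the authors' trained DNN: it embeds a simple additive pattern, extracts by the sign of the residual, and then measures how many watermark bits survive additive noise, one of the commonly used manipulations.

```python
import numpy as np

rng = np.random.default_rng(0)

def embed(image, watermark, strength=8.0):
    """Stand-in embedder: add a +/-strength pattern driven by the binary watermark."""
    pattern = np.where(watermark > 0, strength, -strength)
    return np.clip(image + pattern, 0, 255)

def extract(watermarked, original):
    """Stand-in extractor: recover watermark bits from the sign of the residual."""
    return (watermarked.astype(float) - original > 0).astype(int)

def bit_accuracy(a, b):
    """Fraction of watermark bits recovered correctly."""
    return float(np.mean(a == b))

# Toy grayscale image and binary watermark.
image = rng.integers(16, 240, size=(64, 64)).astype(float)
watermark = rng.integers(0, 2, size=(64, 64))

wm_image = embed(image, watermark)

# Common manipulation used in robustness checks: additive Gaussian noise.
attacked = wm_image + rng.normal(0, 4.0, size=wm_image.shape)

recovered = extract(attacked, image)
acc = bit_accuracy(recovered, watermark)
```

Even after the attack, most watermark bits remain recoverable, mirroring the paper's observation that the extracted watermark "still retains sufficient recognizable traits."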
Related papers
- Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation [19.250673262185767]
We propose a unified approach for image copyright source-tracing and attribution.
We introduce an innovative watermarking-attribution method that blends proactive and passive strategies.
We have conducted experiments using various celebrity portrait series sourced online.
arXiv Detail & Related papers (2024-05-26T15:14:54Z)
- Safe and Robust Watermark Injection with a Single OoD Image [90.71804273115585]
Training a high-performance deep neural network requires large amounts of data and computational resources.
We propose a safe and robust backdoor-based watermark injection technique.
We induce random perturbation of model parameters during watermark injection to defend against common watermark removal attacks.
arXiv Detail & Related papers (2023-09-04T19:58:35Z)
- Anti-Neuron Watermarking: Protecting Personal Data Against Unauthorized Neural Model Training [50.308254937851814]
Personal data (e.g. images) could be exploited inappropriately to train deep neural network models without authorization.
By embedding a watermark signature into user images via a specialized linear color transformation, neural models trained on them are imprinted with that signature.
This is the first work to protect users' personal data from unauthorized usage in neural network training.
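The linear color transformation idea can be illustrated with a toy signature: apply a 3x3 color matrix to RGB pixels, then verify it by recovering the matrix with least squares from original/signed pixel pairs. The near-identity matrix and the least-squares detector are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical signature: a 3x3 color matrix close to identity, so the
# transform is visually subtle (the paper's exact transform family may differ).
signature = np.eye(3) + 0.02 * rng.standard_normal((3, 3))

def sign_image(pixels, matrix):
    """Apply the linear color transform to an (N, 3) array of RGB pixels."""
    return pixels @ matrix.T

def recover_transform(original, signed):
    """Least-squares estimate of the color matrix from pixel pairs."""
    matrix_t, *_ = np.linalg.lstsq(original, signed, rcond=None)
    return matrix_t.T

pixels = rng.uniform(0, 1, size=(1000, 3))
signed = sign_image(pixels, signature)
estimate = recover_transform(pixels, signed)
```

Because the transform is linear and applied uniformly, the signature can be estimated exactly from clean pixel pairs, which is what makes it usable as an ownership check.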
arXiv Detail & Related papers (2021-09-18T22:10:37Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Privacy-Preserving Image Acquisition Using Trainable Optical Kernel [50.1239616836174]
We propose a trainable image acquisition method that removes the sensitive identity revealing information in the optical domain before it reaches the image sensor.
As the sensitive content is suppressed before it reaches the image sensor, it never enters the digital domain and is therefore unretrievable by any privacy attack.
arXiv Detail & Related papers (2021-06-28T11:08:14Z)
- Fingerprinting Image-to-Image Generative Adversarial Networks [53.02510603622128]
Generative Adversarial Networks (GANs) have been widely used in various application scenarios.
This paper presents a novel fingerprinting scheme for the Intellectual Property protection of image-to-image GANs based on a trusted third party.
arXiv Detail & Related papers (2021-06-19T06:25:10Z)
- Protecting the Intellectual Properties of Deep Neural Networks with an Additional Class and Steganographic Images [7.234511676697502]
We propose a method to protect the intellectual properties of deep neural networks (DNN) models by using an additional class and steganographic images.
We adopt the least significant bit (LSB) image steganography to embed users' fingerprints into watermark key images.
On Fashion-MNIST and CIFAR-10 datasets, the proposed method can obtain 100% watermark accuracy and 100% fingerprint authentication success rate.
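LSB embedding itself is straightforward: the sketch below writes a fingerprint bit string into the least significant bits of a hypothetical watermark key image and reads it back, changing each pixel by at most one intensity level. The exact key-image and fingerprint formats here are illustrative assumptions, not the paper's.

```python
import numpy as np

def embed_lsb(key_image, bits):
    """Write fingerprint bits into the least significant bit of each pixel."""
    flat = key_image.flatten()  # flatten() returns a copy, so the input is untouched
    assert len(bits) <= flat.size
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits  # clear LSB, then set it
    return flat.reshape(key_image.shape)

def extract_lsb(stego_image, n_bits):
    """Read the first n_bits least significant bits back out."""
    return stego_image.flatten()[:n_bits] & 1

rng = np.random.default_rng(2)
key_image = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
fingerprint = rng.integers(0, 2, size=32, dtype=np.uint8)

stego = embed_lsb(key_image, fingerprint)
recovered = extract_lsb(stego, 32)
```

Because only the lowest bit of each pixel changes, the stego key image is visually indistinguishable from the original, which is what lets per-user fingerprints ride along inside the watermark key images.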
arXiv Detail & Related papers (2021-04-19T11:03:53Z)
- Robust Black-box Watermarking for Deep Neural Network using Inverse Document Frequency [1.2502377311068757]
We propose a framework for watermarking a deep neural network (DNN) model designed for a textual domain.
The proposed embedding procedure takes place in the model's training time, making the watermark verification stage straightforward.
The experimental results show that watermarked models have the same accuracy as the original ones.
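The inverse document frequency signal this method relies on is easy to compute. The sketch below shows the standard IDF formula over a toy corpus; the toy documents and the use of IDF for picking rare trigger words are assumptions for illustration, as the paper's exact weighting and selection pipeline may differ.

```python
import math
from collections import Counter

def idf_scores(documents):
    """Inverse document frequency: log(N / df) for each term in the corpus."""
    n_docs = len(documents)
    df = Counter()
    for doc in documents:
        df.update(set(doc.lower().split()))  # count each term once per document
    return {term: math.log(n_docs / count) for term, count in df.items()}

docs = [
    "the model protects the watermark",
    "the watermark survives removal attacks",
    "rare trigger words make good keys",
]
scores = idf_scores(docs)
```

Terms appearing in fewer documents get higher IDF, so rare words stand out as candidate watermark triggers that are unlikely to collide with normal inputs.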
arXiv Detail & Related papers (2021-03-09T17:56:04Z)
- An Automated and Robust Image Watermarking Scheme Based on Deep Neural Networks [8.765045867163648]
A robust and blind image watermarking scheme based on deep learning neural networks is proposed.
The robustness of the proposed scheme is achieved without requiring any prior knowledge or adversarial examples of possible attacks.
arXiv Detail & Related papers (2020-07-05T22:23:31Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.