DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques
- URL: http://arxiv.org/abs/2407.06552v1
- Date: Tue, 9 Jul 2024 05:18:14 GMT
- Title: DLOVE: A new Security Evaluation Tool for Deep Learning Based Watermarking Techniques
- Authors: Sudev Kumar Padhi, Sk. Subidh Ali
- Abstract summary: Recent developments in Deep Neural Network (DNN) based watermarking techniques have shown remarkable performance.
In this paper, we performed a detailed security analysis of different DNN-based watermarking techniques.
We propose a new class of attack called the Deep Learning-based OVErwriting (DLOVE) attack.
- Score: 1.8416014644193066
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent developments in Deep Neural Network (DNN) based watermarking techniques have shown remarkable performance. The state-of-the-art DNN-based techniques not only surpass the robustness of classical watermarking techniques but also remain robust against many image manipulation techniques. In this paper, we perform a detailed security analysis of different DNN-based watermarking techniques. We propose a new class of attack called the Deep Learning-based OVErwriting (DLOVE) attack, which leverages adversarial machine learning to overwrite the original embedded watermark with a targeted watermark in a watermarked image. To the best of our knowledge, this attack is the first of its kind. We consider scenarios in which watermarks are used, and formulate the adversarial attack in both white-box and black-box settings. To show adaptability and efficiency, we launch our DLOVE attack analysis on seven different watermarking techniques: HiDDeN, ReDMark, PIMoG, StegaStamp, Aparecium, Distortion Agnostic Deep Watermarking, and Hiding Images in an Image. All these techniques use different approaches to create imperceptible watermarked images. Our attack analysis on these watermarking techniques under various constraints highlights the vulnerabilities of DNN-based watermarking. Extensive experimental results validate the capabilities of DLOVE. We propose DLOVE as a benchmark security analysis tool to test the robustness of future deep learning-based watermarking techniques.
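Reduced to its simplest form, the overwriting idea described in the abstract can be sketched as projected gradient descent against a differentiable watermark decoder. The snippet below is an illustrative toy, not the authors' implementation: the linear "decoder", all dimensions, and the optimization hyperparameters are invented for the example, and a real attack would target an actual DNN decoder via autograd.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a DNN watermark decoder: a fixed random linear map
# followed by a sigmoid that turns a flattened image into watermark-bit
# probabilities. Real decoders are deep networks, but the attack logic
# is the same whenever gradients are available (the white-box setting).
D_IMG, N_BITS = 256, 8
W = rng.normal(scale=0.1, size=(N_BITS, D_IMG))

def decode(img):
    """Per-bit probabilities of the watermark read from `img`."""
    return 1.0 / (1.0 + np.exp(-W @ img))

def overwrite_attack(img, target_bits, eps=0.5, lr=1.0, steps=300):
    """Projected gradient descent on an additive perturbation `delta`,
    L-infinity-bounded by `eps`, driving the decoder's output toward
    the attacker-chosen `target_bits` (binary cross-entropy loss)."""
    delta = np.zeros_like(img)
    for _ in range(steps):
        p = decode(img + delta)
        grad = W.T @ (p - target_bits)     # d(BCE)/d(delta) for this decoder
        delta -= lr * grad
        delta = np.clip(delta, -eps, eps)  # projection step: bound distortion
    return img + delta

img = rng.normal(size=D_IMG)                       # stand-in watermarked image
target = rng.integers(0, 2, N_BITS).astype(float)  # attacker's target watermark
adv = overwrite_attack(img, target)
recovered = (decode(adv) > 0.5).astype(float)
print((recovered == target).mean())  # fraction of target bits the decoder reads
```

In the black-box setting the paper describes, the same loop would run against a surrogate decoder or with estimated gradients rather than the true decoder's gradients.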
Related papers
- UnMarker: A Universal Attack on Defensive Image Watermarking [4.013156524547072]
We present UnMarker -- the first practical universal attack on defensive watermarking.
UnMarker requires no detector feedback, no unrealistic knowledge of the watermarking scheme or similar models, and no advanced denoising pipelines.
Evaluations against SOTA schemes prove UnMarker's effectiveness.
arXiv Detail & Related papers (2024-05-14T07:05:18Z)
- DeepEclipse: How to Break White-Box DNN-Watermarking Schemes [60.472676088146436]
We present obfuscation techniques that significantly differ from the existing white-box watermarking removal schemes.
DeepEclipse can evade watermark detection without prior knowledge of the underlying watermarking scheme.
Our evaluation reveals that DeepEclipse excels in breaking multiple white-box watermarking schemes.
arXiv Detail & Related papers (2024-03-06T10:24:47Z)
- WAVES: Benchmarking the Robustness of Image Watermarks [67.955140223443]
WAVES (Watermark Analysis Via Enhanced Stress-testing) is a benchmark for assessing image watermark robustness.
We integrate detection and identification tasks and establish a standardized evaluation protocol comprised of a diverse range of stress tests.
We envision WAVES as a toolkit for the future development of robust watermarks.
arXiv Detail & Related papers (2024-01-16T18:58:36Z)
- Watermark Vaccine: Adversarial Attacks to Prevent Watermark Removal [69.10633149787252]
We propose a novel defence mechanism by adversarial machine learning for good.
Two types of vaccines are proposed: the Disrupting Watermark Vaccine (DWV) induces watermark-removal networks to ruin the host image along with the watermark.
The Inerasable Watermark Vaccine (IWV) works in the opposite fashion, keeping the watermark unremoved and still noticeable.
arXiv Detail & Related papers (2022-07-17T13:50:02Z)
- Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Piracy-Resistant DNN Watermarking by Block-Wise Image Transformation with Secret Key [15.483078145498085]
The proposed method embeds a watermark pattern in a model by using learnable transformed images.
It is piracy-resistant, so the original watermark cannot be overwritten by a pirated watermark.
The results show that it was resilient against fine-tuning and pruning attacks while maintaining a high watermark-detection accuracy.
arXiv Detail & Related papers (2021-04-09T08:21:53Z)
- Watermark Faker: Towards Forgery of Digital Image Watermarking [10.14145437847397]
We make the first attempt to develop digital image watermark fakers by using generative adversarial learning.
Our experiments show that the proposed watermark faker can effectively crack digital image watermarkers in both spatial and frequency domains.
arXiv Detail & Related papers (2021-03-23T12:28:00Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
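The last entry's idea of combining pattern embedding with spatial-level transformations can be illustrated with a small toy: a faint additive pattern plus a spatial shift leaves an input largely intact while destroying the alignment of a trigger patch of the kind used by backdoor-style model-watermark verification. Everything below (the 32x32 image, the 4x4 trigger patch, `np.roll` as the "spatial-level transformation") is a simplified stand-in for the paper's learned pipeline, not its actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)

def evade_transform(img, shift=4, pattern_strength=0.02, seed=0):
    """Illustrative stand-in for the combined attack: embed a faint
    pseudo-random pattern, then apply a spatial-level transformation
    (a wrap-around shift here; the paper learns both components)."""
    pat = np.random.default_rng(seed).uniform(-1.0, 1.0, img.shape)
    out = img + pattern_strength * pat      # imperceptible pattern embedding
    out = np.roll(out, shift, axis=(0, 1))  # crude geometric transformation
    return np.clip(out, 0.0, 1.0)

# Toy grayscale image carrying a bright trigger patch in its top-left
# corner, mimicking trigger-based model-watermark verification inputs.
img = rng.uniform(0.2, 0.8, (32, 32))
trigger_region = np.s_[:4, :4]
stamped = img.copy()
stamped[trigger_region] = np.clip(stamped[trigger_region] + 0.5, 0.0, 1.0)

before = stamped[trigger_region].mean()  # brightness inside the trigger patch
after = evade_transform(stamped)[trigger_region].mean()
print(before, after)  # the shift destroys the trigger's spatial alignment
```

A watermark verifier that checks for the trigger at a fixed location would no longer find it, which is the intuition behind why fine-tuning defenses alone are insufficient.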
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.