WAN: Watermarking Attack Network
- URL: http://arxiv.org/abs/2008.06255v3
- Date: Wed, 20 Oct 2021 13:32:30 GMT
- Title: WAN: Watermarking Attack Network
- Authors: Seung-Hun Nam, In-Jae Yu, Seung-Min Mun, Daesik Kim, Wonhyuk Ahn
- Abstract summary: Multi-bit watermarking (MW) has been developed to improve robustness against signal processing operations and geometric distortions.
Benchmark tools that test robustness by applying simulated attacks on watermarked images are available.
We propose a watermarking attack network (WAN) that utilizes the weak points of the target MW and induces an inversion of the watermark bit.
- Score: 6.763499535329116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Multi-bit watermarking (MW) has been developed to improve robustness against
signal processing operations and geometric distortions. To this end, benchmark
tools that test robustness by applying simulated attacks on watermarked images
are available. However, limitations in these general attacks exist since they
cannot exploit specific characteristics of the targeted MW. In addition, these
attacks are usually devised without consideration of visual quality, which
rarely occurs in the real world. To address these limitations, we propose a
watermarking attack network (WAN), a fully trainable watermarking benchmark
tool that utilizes the weak points of the target MW and induces an inversion of
the watermark bit, thereby considerably reducing the watermark extractability.
To hinder the extraction of hidden information while ensuring high visual
quality, we utilize a residual dense blocks-based architecture specialized in
local and global feature learning. A novel watermarking attack loss is
introduced to break the MW systems. We empirically demonstrate that the WAN can
successfully fool various block-based MW systems. Moreover, we show that
existing MW methods can be improved with the help of the WAN as an add-on
module.
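The abstract describes an attack loss that drives the extractor's output toward the inverted watermark bits while preserving visual quality. The sketch below is a hypothetical illustration of that objective, not the paper's actual implementation: the function name, the binary cross-entropy form of the inversion term, and the L2 fidelity term weighted by `lam` are all assumptions made for clarity.

```python
import numpy as np

def wan_attack_loss(attacked_img, original_img, extracted_bits, target_bits, lam=0.1):
    """Hypothetical sketch of a watermarking attack loss in the spirit of the WAN.

    extracted_bits: the extractor's soft bit estimates on the attacked image.
    target_bits:    the true embedded watermark bits (0/1).
    The loss rewards reading the *inverted* bits while keeping the attacked
    image visually close to the original. The paper's exact loss may differ.
    """
    inverted = 1.0 - target_bits
    p = np.clip(extracted_bits, 1e-7, 1 - 1e-7)  # avoid log(0)
    # Watermark-inversion term: binary cross-entropy against the inverted bits
    bce = -np.mean(inverted * np.log(p) + (1 - inverted) * np.log(1 - p))
    # Visual-quality term: L2 distance between attacked and original images
    mse = np.mean((attacked_img - original_img) ** 2)
    return bce + lam * mse
```

Minimizing this over the attack network's parameters would, under these assumptions, push the extractor toward flipped bits at low perceptual cost.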
Related papers
- SWA-LDM: Toward Stealthy Watermarks for Latent Diffusion Models [11.906245347904289]
We introduce SWA-LDM, a novel approach that enhances watermarking by randomizing the embedding process.
Our proposed watermark presence attack reveals the inherent vulnerabilities of existing latent-based watermarking methods.
This work represents a pivotal step towards securing LDM-generated images against unauthorized use.
arXiv Detail & Related papers (2025-02-14T16:55:45Z)
- ESpeW: Robust Copyright Protection for LLM-based EaaS via Embedding-Specific Watermark [50.08021440235581]
Embedding as a Service (EaaS) is emerging as a crucial component in AI applications.
EaaS is vulnerable to model extraction attacks, highlighting the urgent need for copyright protection.
We propose a novel embedding-specific watermarking (ESpeW) mechanism to offer robust copyright protection for EaaS.
arXiv Detail & Related papers (2024-10-23T04:34:49Z)
- Robustness of Watermarking on Text-to-Image Diffusion Models [9.277492743469235]
We investigate the robustness of generative watermarking, which is created from the integration of watermarking embedding and text-to-image generation processing.
We found that generative watermarking methods are robust to direct evasion attacks, such as discriminator-based attacks, and to manipulation of edge information in edge-prediction-based attacks, but are vulnerable to malicious fine-tuning.
arXiv Detail & Related papers (2024-08-04T13:59:09Z)
- Large Language Model Watermark Stealing With Mixed Integer Programming [51.336009662771396]
Large Language Model (LLM) watermark shows promise in addressing copyright, monitoring AI-generated text, and preventing its misuse.
Recent research indicates that watermarking methods using numerous keys are susceptible to removal attacks.
We propose a novel green list stealing attack against the state-of-the-art LLM watermark scheme.
arXiv Detail & Related papers (2024-05-30T04:11:17Z)
- ModelShield: Adaptive and Robust Watermark against Model Extraction Attack [58.46326901858431]
Large language models (LLMs) demonstrate general intelligence across a variety of machine learning tasks.
However, adversaries can still use model extraction attacks to steal the model intelligence encoded in the generated content.
Watermarking technology offers a promising solution for defending against such attacks by embedding unique identifiers into the model-generated content.
arXiv Detail & Related papers (2024-05-03T06:41:48Z)
- WARDEN: Multi-Directional Backdoor Watermarks for Embedding-as-a-Service Copyright Protection [7.660430606056949]
We propose a new protocol to make the removal of watermarks more challenging by incorporating multiple possible watermark directions.
Our defense approach, WARDEN, notably increases the stealthiness of watermarks and has been empirically shown to be effective against the CSE attack.
arXiv Detail & Related papers (2024-03-03T10:39:27Z)
- Wide Flat Minimum Watermarking for Robust Ownership Verification of GANs [23.639074918667625]
We propose a novel multi-bit box-free watermarking method for GANs with improved robustness against white-box attacks.
The watermark is embedded by adding an extra watermarking loss term during GAN training.
We show that the presence of the watermark has a negligible impact on the quality of the generated images.
arXiv Detail & Related papers (2023-10-25T18:38:10Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular, allowing the model owner to watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.