D2RA: Dual Domain Regeneration Attack
- URL: http://arxiv.org/abs/2510.07538v1
- Date: Wed, 08 Oct 2025 20:54:22 GMT
- Title: D2RA: Dual Domain Regeneration Attack
- Authors: Pragati Shuddhodhan Meshram, Varun Chandrasekaran
- Abstract summary: We present D2RA, a training-free, single-image attack that removes or weakens watermarks without access to the underlying model. By projecting watermarked images onto natural priors across complementary representations, D2RA suppresses watermark signals while preserving visual fidelity.
- Score: 14.483783077617483
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The growing use of generative models has intensified the need for watermarking methods that ensure content attribution and provenance. While recent semantic watermarking schemes improve robustness by embedding signals in latent or frequency representations, we show they remain vulnerable even under resource-constrained adversarial settings. We present D2RA, a training-free, single-image attack that removes or weakens watermarks without access to the underlying model. By projecting watermarked images onto natural priors across complementary representations, D2RA suppresses watermark signals while preserving visual fidelity. Experiments across diverse watermarking schemes demonstrate that our approach consistently reduces watermark detectability, revealing fundamental weaknesses in current designs. Our code is available at https://github.com/Pragati-Meshram/DAWN.
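The abstract describes projecting a watermarked image onto natural priors in complementary (e.g., spatial and frequency) representations. A minimal sketch of that idea is below; it is NOT the authors' D2RA method, just an illustration under simple assumptions: a Gaussian blur stands in for a spatial-domain natural prior, and a Fourier low-pass mask stands in for a frequency-domain projection. All function names and parameters are hypothetical.

```python
import numpy as np

def dual_domain_suppress(img, spatial_sigma=1.0, freq_keep=0.25):
    """Illustrative dual-domain projection (not the paper's D2RA):
    a Gaussian blur as a crude spatial 'natural prior', followed by
    a low-pass mask in the Fourier domain to damp high-frequency
    watermark components. `img` is a 2D float array."""
    # Spatial domain: separable Gaussian blur as a stand-in prior.
    radius = int(3 * spatial_sigma)
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * spatial_sigma**2))
    kernel /= kernel.sum()
    blurred = np.apply_along_axis(
        lambda row: np.convolve(row, kernel, mode="same"), 1, img)
    blurred = np.apply_along_axis(
        lambda col: np.convolve(col, kernel, mode="same"), 0, blurred)
    # Frequency domain: keep only a low-frequency disc of the spectrum.
    F = np.fft.fftshift(np.fft.fft2(blurred))
    h, w = img.shape
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    mask = ((yy - cy)**2 + (xx - cx)**2) <= (freq_keep * min(h, w) / 2)**2
    return np.fft.ifft2(np.fft.ifftshift(F * mask)).real
```

The two projections are deliberately redundant: a watermark that survives one representation (e.g., a high-frequency pattern untouched by mild blurring) is attenuated by the other.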
Related papers
- AuthenLoRA: Entangling Stylization with Imperceptible Watermarks for Copyright-Secure LoRA Adapters [52.556959321030966]
Low-Rank Adaptation (LoRA) offers an efficient paradigm for customizing diffusion models. Existing watermarking techniques either target base models or verify LoRA modules themselves. We propose AuthenLoRA, a unified watermarking framework that embeds imperceptible, traceable watermarks directly into the LoRA training process.
arXiv Detail & Related papers (2025-11-26T09:48:11Z)
- Transferable Black-Box One-Shot Forging of Watermarks via Image Preference Models [42.902365202924535]
We investigate watermark forging in the context of widely used post-hoc image watermarking. We introduce a preference model to assess whether an image is watermarked. We demonstrate the model's capability to remove and forge watermarks by optimizing the input image through backpropagation.
arXiv Detail & Related papers (2025-10-23T12:06:35Z)
- Diffusion-Based Image Editing for Breaking Robust Watermarks [4.273350357872755]
Powerful diffusion-based image generation and editing techniques pose a new threat to robust watermarking schemes. We show that a diffusion-driven image "regeneration" process can erase embedded watermarks while preserving image content. We introduce a novel guided diffusion attack that explicitly targets the watermark signal during generation, significantly degrading watermark detectability.
arXiv Detail & Related papers (2025-10-07T14:34:42Z)
- Towards Dataset Copyright Evasion Attack against Personalized Text-to-Image Diffusion Models [52.877452505561706]
We propose the first copyright evasion attack specifically designed to undermine dataset ownership verification (DOV). Our CEAT2I comprises three stages: watermarked sample detection, trigger identification, and efficient watermark mitigation. Our experiments show that CEAT2I effectively evades DOV mechanisms while preserving model performance.
arXiv Detail & Related papers (2025-05-05T17:51:55Z)
- Invisible Watermarks: Attacks and Robustness [0.3495246564946556]
We introduce novel improvements to watermarking robustness and minimize degradation of image quality during attack. We propose a custom watermark remover network which preserves one of the watermarking modalities while completely removing the other during decoding. Our evaluation suggests that 1) implementing the watermark remover model to preserve one watermark modality when decoding the other slightly improves on the baseline performance, and 2) LBA degrades the image significantly less than uniform blurring of the entire image.
arXiv Detail & Related papers (2024-12-17T03:50:13Z)
- Black-Box Forgery Attacks on Semantic Watermarks for Diffusion Models [16.57738116313139]
We show that attackers can leverage unrelated models, even with different latent spaces and architectures, to perform powerful and realistic forgery attacks. The first imprints a targeted watermark into real images by manipulating the latent representation of an arbitrary image in an unrelated LDM. The second attack generates new images with the target watermark by inverting a watermarked image and re-generating it with an arbitrary prompt.
arXiv Detail & Related papers (2024-12-04T12:57:17Z)
- WAVES: Benchmarking the Robustness of Image Watermarks [67.955140223443]
WAVES (Watermark Analysis Via Enhanced Stress-testing) is a benchmark for assessing image watermark robustness.
We integrate detection and identification tasks and establish a standardized evaluation protocol comprised of a diverse range of stress tests.
We envision WAVES as a toolkit for the future development of robust watermarks.
arXiv Detail & Related papers (2024-01-16T18:58:36Z)
- Towards Robust Model Watermark via Reducing Parametric Vulnerability [57.66709830576457]
Backdoor-based ownership verification has recently become popular; it lets the model owner watermark the model.
We propose a mini-max formulation to find these watermark-removed models and recover their watermark behavior.
Our method improves the robustness of the model watermarking against parametric changes and numerous watermark-removal attacks.
arXiv Detail & Related papers (2023-09-09T12:46:08Z)
- Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image.
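The two-step procedure described here (add noise, then reconstruct) can be sketched in a few lines. The sketch below is a toy stand-in, not the paper's method: a simple neighbour-averaging denoiser plays the role of the generative reconstruction model, and all names and parameters are illustrative assumptions.

```python
import numpy as np

def regeneration_attack(img, noise_sigma=0.1, passes=3):
    """Toy regeneration-attack sketch: Gaussian noise disrupts an
    embedded watermark, then a crude smoothing denoiser (standing in
    for a generative reconstruction model) restores the content.
    `img` is a 2D float array with values in [0, 1]."""
    noisy = img + np.random.default_rng(0).normal(0.0, noise_sigma, img.shape)
    out = noisy
    for _ in range(passes):
        # 4-neighbour averaging as a crude denoiser.
        out = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1) + out) / 5.0
    return np.clip(out, 0.0, 1.0)
```

The key intuition is that the noise level is chosen high enough to drown the watermark signal but low enough that the reconstruction step can still recover the perceptual content.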
arXiv Detail & Related papers (2023-06-02T23:29:28Z)
- Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences.