WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit
Joint Learning
- URL: http://arxiv.org/abs/2308.10195v2
- Date: Tue, 22 Aug 2023 02:55:39 GMT
- Title: WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit
Joint Learning
- Authors: Dongjian Huo, Zehong Zhang, Hanjing Su, Guanbin Li, Chaowei Fang,
Qingyao Wu
- Abstract summary: Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
- Score: 68.00975867932331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Watermarking serves as a widely adopted approach to safeguard media
copyright. In parallel, the research focus has extended to watermark removal
techniques, offering an adversarial means to enhance watermark robustness and
foster advancements in the watermarking field. Existing watermark removal
methods mainly rely on UNet with task-specific decoder branches--one for
watermark localization and the other for background image restoration. However,
watermark localization and background restoration are not isolated tasks;
precise watermark localization inherently implies regions necessitating
restoration, and the background restoration process contributes to more
accurate watermark localization. To holistically integrate information from
both branches, we introduce an implicit joint learning paradigm. This empowers
the network to autonomously navigate the flow of information between implicit
branches through a gate mechanism. Furthermore, we employ cross-channel
attention to facilitate local detail restoration and holistic structural
comprehension, while harnessing nested structures to integrate multi-scale
information. Extensive experiments are conducted on various challenging
benchmarks to validate the effectiveness of our proposed method. The results
demonstrate our approach's remarkable superiority, surpassing existing
state-of-the-art methods by a large margin.
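The gate mechanism and cross-channel attention described in the abstract can be sketched in plain Python. This is a minimal illustration under assumed formulations (a per-channel scalar sigmoid gate between the two implicit branches, and single-head dot-product attention computed across channels rather than spatial positions); it is not the paper's actual architecture, and all function and variable names are hypothetical:

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)  # subtract the max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def gated_fusion(loc_feat, res_feat, gate_w):
    """Blend the localization and restoration branches channel by channel.

    For channel c, a learned gate g_c = sigmoid(w_c) controls how much
    localization information flows into the fused feature:
        fused_c = g_c * loc_c + (1 - g_c) * res_c
    The per-channel scalar gate is an illustrative assumption.
    """
    fused = []
    for lc, rc, wc in zip(loc_feat, res_feat, gate_w):
        g = sigmoid(wc)
        fused.append([g * l + (1.0 - g) * r for l, r in zip(lc, rc)])
    return fused

def cross_channel_attention(feat):
    """Attend across channels rather than spatial positions.

    Each channel's descriptor (its full spatial vector) queries every
    channel, so local detail in one channel can borrow holistic structure
    from another. Single-head dot-product attention is assumed.
    """
    d = len(feat[0])
    out = []
    for q in feat:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in feat]
        w = softmax(scores)
        out.append([sum(wc * kc[j] for wc, kc in zip(w, feat))
                    for j in range(d)])
    return out

# Toy example: 2 channels with 3 spatial positions each.
loc = [[1.0, 2.0, 3.0], [0.0, 1.0, 0.0]]
res = [[0.0, 0.0, 0.0], [4.0, 4.0, 4.0]]
fused = gated_fusion(loc, res, gate_w=[0.0, 100.0])  # gates of ~0.5 and ~1.0
attended = cross_channel_attention(fused)
```

With `gate_w[0] = 0.0` the first channel mixes the branches equally; with `gate_w[1] = 100.0` the gate saturates and the second channel passes the localization branch through almost unchanged.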
Related papers
- On the Coexistence and Ensembling of Watermarks [93.15379331904602]
We find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness.
We show how ensembling can increase the overall message capacity and enable new trade-offs between capacity, accuracy, robustness and image quality, without needing to retrain the base models.
arXiv Detail & Related papers (2025-01-29T00:37:06Z)
- Watermarking in Diffusion Model: Gaussian Shading with Exact Diffusion Inversion via Coupled Transformations (EDICT) [0.0]
This paper introduces a novel approach to enhance the performance of Gaussian Shading.
We propose to leverage EDICT's ability to derive exact inverse mappings to refine this process.
Our method involves duplicating the watermark-infused noisy latent and employing a reciprocal, alternating denoising and noising scheme.
arXiv Detail & Related papers (2025-01-15T06:04:18Z)
- De-mark: Watermark Removal in Large Language Models [59.00698153097887]
We present De-mark, an advanced framework designed to remove n-gram-based watermarks effectively.
Our method utilizes a novel querying strategy, termed random selection probing, which aids in assessing the strength of the watermark.
arXiv Detail & Related papers (2024-10-17T17:42:10Z)
- Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal [63.576748565274706]
This study introduces the Removing Interference and Recovering Content Imaginatively (RIRCI) framework.
RIRCI embodies a two-stage approach: the initial phase centers on discerning and segregating the watermark component, while the subsequent phase focuses on background content restoration.
To achieve meticulous background restoration, our proposed model employs a dual-path network capable of fully exploring the intrinsic background information beneath semi-transparent watermarks.
arXiv Detail & Related papers (2023-12-22T02:19:23Z)
- Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning [1.6589012298747952]
This paper explores a robust image watermarking methodology by harnessing cross-attention and invariant domain learning.
We design a watermark embedding technique utilizing a multi-head cross attention mechanism, enabling information exchange between the cover image and watermark.
Second, we advocate for learning an invariant domain representation that encapsulates both semantic and noise-invariant information concerning the watermark.
arXiv Detail & Related papers (2023-10-09T04:19:27Z)
- Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)
- Visible Watermark Removal via Self-calibrated Localization and Background Refinement [21.632823897244037]
Superimposing visible watermarks on images provides a powerful means of protecting copyright.
Modern watermark removal methods perform watermark localization and background restoration simultaneously.
We propose a two-stage multi-task network to address the above issues.
arXiv Detail & Related papers (2021-08-08T06:43:55Z) - Split then Refine: Stacked Attention-guided ResUNets for Blind Single
Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods either require the watermark location to be supplied by users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with a stacked attention-guided ResUNets to simulate the process of detection, removal and refinement.
We extensively evaluate our algorithm over four different datasets under various settings and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.