WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit
Joint Learning
- URL: http://arxiv.org/abs/2308.10195v2
- Date: Tue, 22 Aug 2023 02:55:39 GMT
- Title: WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit
Joint Learning
- Authors: Dongjian Huo, Zehong Zhang, Hanjing Su, Guanbin Li, Chaowei Fang,
Qingyao Wu
- Abstract summary: Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
- Score: 68.00975867932331
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Watermarking serves as a widely adopted approach to safeguard media
copyright. In parallel, the research focus has extended to watermark removal
techniques, offering an adversarial means to enhance watermark robustness and
foster advancements in the watermarking field. Existing watermark removal
methods mainly rely on UNet with task-specific decoder branches--one for
watermark localization and the other for background image restoration. However,
watermark localization and background restoration are not isolated tasks;
precise watermark localization inherently implies regions necessitating
restoration, and the background restoration process contributes to more
accurate watermark localization. To holistically integrate information from
both branches, we introduce an implicit joint learning paradigm. This empowers
the network to autonomously navigate the flow of information between implicit
branches through a gate mechanism. Furthermore, we employ cross-channel
attention to facilitate local detail restoration and holistic structural
comprehension, while harnessing nested structures to integrate multi-scale
information. Extensive experiments are conducted on various challenging
benchmarks to validate the effectiveness of our proposed method. The results
demonstrate our approach's remarkable superiority, surpassing existing
state-of-the-art methods by a large margin.
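The abstract describes two mechanisms: a gate that routes information between the implicit localization and restoration branches, and cross-channel attention that mixes channel-wise features. The sketch below is a toy, parameter-free illustration of those two ideas, not the paper's actual modules: the real gate and attention are learned layers, whereas here the gate is a sigmoid over mean-pooled channel descriptors and the attention weights come from descriptor similarities. Feature maps are represented as plain lists of channels (each channel a flat list of spatial values).

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def channel_means(fmap):
    # One scalar descriptor per channel (global average pooling).
    return [sum(c) / len(c) for c in fmap]

def gated_exchange(f_loc, f_res):
    """Exchange information between the localization and restoration
    branches through a per-channel gate. Toy stand-in: the paper's gate
    is a learned module; here it is sigmoid(mean_loc + mean_res)."""
    gates = [sigmoid(a + b)
             for a, b in zip(channel_means(f_loc), channel_means(f_res))]
    # Each branch receives a gated residual copy of the other branch.
    new_loc = [[x + g * y for x, y in zip(cl, cr)]
               for cl, cr, g in zip(f_loc, f_res, gates)]
    new_res = [[y + g * x for x, y in zip(cl, cr)]
               for cl, cr, g in zip(f_loc, f_res, gates)]
    return new_loc, new_res, gates

def cross_channel_attention(fmap):
    """Mix channels by a softmax over channel-descriptor similarities,
    added back residually (simplified, parameter-free version)."""
    desc = channel_means(fmap)
    n = len(fmap)
    out = []
    for i in range(n):
        w = softmax([desc[i] * desc[j] for j in range(n)])
        mixed = [sum(w[j] * fmap[j][p] for j in range(n))
                 for p in range(len(fmap[i]))]
        out.append([x + m for x, m in zip(fmap[i], mixed)])
    return out
```

On a two-channel map, `gated_exchange` returns gates strictly inside (0, 1), so each branch always sees a damped, never a full or zero, copy of the other branch's features; the nested multi-scale integration of the full model is beyond this sketch.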
Related papers
- De-mark: Watermark Removal in Large Language Models [59.00698153097887]
We present De-mark, an advanced framework designed to remove n-gram-based watermarks effectively.
Our method utilizes a novel querying strategy, termed random selection probing, which aids in assessing the strength of the watermark.
arXiv Detail & Related papers (2024-10-17T17:42:10Z)
- Image Watermarks are Removable Using Controllable Regeneration from Clean Noise [26.09012436917272]
A critical attribute of watermark techniques is their robustness against various manipulations.
We introduce a watermark removal approach capable of effectively nullifying state-of-the-art watermarking techniques.
arXiv Detail & Related papers (2024-10-07T20:04:29Z)
- Removing Interference and Recovering Content Imaginatively for Visible Watermark Removal [63.576748565274706]
This study introduces the Removing Interference and Recovering Content Imaginatively (RIRCI) framework.
RIRCI embodies a two-stage approach: the initial phase centers on discerning and segregating the watermark component, while the subsequent phase focuses on background content restoration.
To achieve meticulous background restoration, our proposed model employs a dual-path network capable of fully exploring the intrinsic background information beneath semi-transparent watermarks.
arXiv Detail & Related papers (2023-12-22T02:19:23Z)
- A Resilient and Accessible Distribution-Preserving Watermark for Large Language Models [65.40460716619772]
Our research focuses on the importance of a Distribution-Preserving (DiP) watermark.
Contrary to the current strategies, our proposed DiPmark simultaneously preserves the original token distribution during watermarking.
It is detectable without access to the language model API and prompts (accessible), and is provably robust to moderate changes of tokens.
arXiv Detail & Related papers (2023-10-11T17:57:35Z)
- Robust Image Watermarking based on Cross-Attention and Invariant Domain Learning [1.6589012298747952]
This paper explores a robust image watermarking methodology by harnessing cross-attention and invariant domain learning.
We design a watermark embedding technique utilizing a multi-head cross attention mechanism, enabling information exchange between the cover image and watermark.
Second, we advocate for learning an invariant domain representation that encapsulates both semantic and noise-invariant information concerning the watermark.
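The embedding technique summarized above exchanges information between the cover image and the watermark through multi-head cross-attention. As a hedged illustration only, the sketch below implements a single attention head in pure Python, with cover-image tokens as queries and watermark tokens as keys and values; the learned Q/K/V projections, multiple heads, and output projection of a real module are omitted, so this is a structural sketch rather than that paper's method.

```python
import math

def softmax(xs):
    m = max(xs)
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def cross_attention(cover_tokens, wm_tokens):
    """Single-head cross-attention: each cover-image token queries the
    watermark tokens and receives a convex combination of them.
    Simplification: identity Q/K/V projections, one head."""
    d = len(wm_tokens[0])
    scale = math.sqrt(d)  # standard scaled dot-product normalization
    out = []
    for q in cover_tokens:
        scores = softmax([sum(qi * ki for qi, ki in zip(q, k)) / scale
                          for k in wm_tokens])
        out.append([sum(s * v[j] for s, v in zip(scores, wm_tokens))
                    for j in range(d)])
    return out
```

Because the attention weights are a softmax, every output token lies in the convex hull of the watermark tokens, which is what lets watermark information flow into the cover representation in a bounded way.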
arXiv Detail & Related papers (2023-10-09T04:19:27Z)
- Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)
- Visible Watermark Removal via Self-calibrated Localization and Background Refinement [21.632823897244037]
Superimposing visible watermarks on images provides a powerful weapon to cope with the copyright issue.
Modern watermark removal methods perform watermark localization and background restoration simultaneously.
We propose a two-stage multi-task network to address the above issues.
arXiv Detail & Related papers (2021-08-08T06:43:55Z)
- Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods require the watermark location from users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with a stacked attention-guided ResUNets to simulate the process of detection, removal and refinement.
We extensively evaluate our algorithm over four different datasets under various settings and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.