Robust Image Watermarking based on Cross-Attention and Invariant Domain
Learning
- URL: http://arxiv.org/abs/2310.05395v1
- Date: Mon, 9 Oct 2023 04:19:27 GMT
- Title: Robust Image Watermarking based on Cross-Attention and Invariant Domain
Learning
- Authors: Agnibh Dasgupta, Xin Zhong
- Abstract summary: This paper explores a robust image watermarking methodology by harnessing cross-attention and invariant domain learning.
First, we design a watermark embedding technique utilizing a multi-head cross-attention mechanism, enabling information exchange between the cover image and the watermark.
Second, we advocate for learning an invariant domain representation that encapsulates both semantic and noise-invariant information concerning the watermark.
- Score: 1.6589012298747952
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Image watermarking involves embedding and extracting watermarks within a
cover image, with deep learning approaches emerging to bolster generalization
and robustness. Predominantly, current methods employ convolution and
concatenation for watermark embedding, while also integrating conceivable
augmentations into the training process. This paper explores a robust image
watermarking methodology by harnessing cross-attention and invariant domain
learning, marking two novel, significant advancements. First, we design a
watermark embedding technique utilizing a multi-head cross attention mechanism,
enabling information exchange between the cover image and watermark to identify
semantically suitable embedding locations. Second, we advocate for learning an
invariant domain representation that encapsulates both semantic and
noise-invariant information concerning the watermark, shedding light on
promising avenues for enhancing image watermarking techniques.
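The abstract describes the embedding mechanism only at a high level. Below is a minimal, hedged sketch of how such a multi-head cross-attention embedder could look in PyTorch; it is not the authors' implementation, and the module names, dimensions, and fusion strategy are illustrative assumptions. Cover-image patch features act as queries and watermark-bit tokens as keys/values, so the attention weights indicate where each bit is best embedded; the invariant-domain training (noise layers and augmentations) is omitted here.

```python
# Hedged sketch of a cross-attention watermark embedder (illustrative only).
import torch
import torch.nn as nn

class CrossAttentionEmbedder(nn.Module):
    def __init__(self, dim=256, num_heads=8, msg_len=30):
        super().__init__()
        self.msg_proj = nn.Linear(1, dim)        # lift each watermark bit to a token
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.fuse = nn.Linear(2 * dim, dim)      # merge attended watermark into cover tokens
        self.norm = nn.LayerNorm(dim)

    def forward(self, cover_tokens, message):
        # cover_tokens: (B, N, dim) patch features of the cover image
        # message:      (B, msg_len) watermark bits in {0, 1}
        msg_tokens = self.msg_proj(message.float().unsqueeze(-1))  # (B, msg_len, dim)
        # Queries come from the cover image, keys/values from the watermark,
        # so attention scores point to semantically suitable embedding locations.
        attended, _ = self.attn(query=cover_tokens, key=msg_tokens, value=msg_tokens)
        fused = self.fuse(torch.cat([cover_tokens, attended], dim=-1))
        return self.norm(cover_tokens + fused)   # residual keeps cover semantics

if __name__ == "__main__":
    embedder = CrossAttentionEmbedder()
    cover = torch.randn(2, 64, 256)              # toy patch features
    bits = torch.randint(0, 2, (2, 30))          # toy 30-bit watermark
    print(embedder(cover, bits).shape)           # torch.Size([2, 64, 256])
```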
Related papers
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z) - Removing Interference and Recovering Content Imaginatively for Visible
Watermark Removal [63.576748565274706]
This study introduces the Removing Interference and Recovering Content Imaginatively (RIRCI) framework.
RIRCI embodies a two-stage approach: the initial phase centers on discerning and segregating the watermark component, while the subsequent phase focuses on background content restoration.
To achieve meticulous background restoration, our proposed model employs a dual-path network capable of fully exploring the intrinsic background information beneath semi-transparent watermarks.
arXiv Detail & Related papers (2023-12-22T02:19:23Z) - T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text-to-image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z) - WMFormer++: Nested Transformer for Visible Watermark Removal via Implict
Joint Learning [68.00975867932331]
Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-08-20T07:56:34Z) - Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks in light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time; a hedged sketch of this latent-space embedding idea appears after this list.
arXiv Detail & Related papers (2021-12-17T15:52:46Z) - Visible Watermark Removal via Self-calibrated Localization and
Background Refinement [21.632823897244037]
Superimposing visible watermarks on images provides a powerful means of addressing copyright issues.
Modern watermark removal methods perform watermark localization and background restoration simultaneously.
We propose a two-stage multi-task network to address the above issues.
arXiv Detail & Related papers (2021-08-08T06:43:55Z) - Split then Refine: Stacked Attention-guided ResUNets for Blind Single
Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods either require the watermark location to be supplied by users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with stacked attention-guided ResUNets to simulate the process of detection, removal, and refinement.
We extensively evaluate our algorithm over four different datasets under various settings and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)