Learning Generalizable and Efficient Image Watermarking via Hierarchical Two-Stage Optimization
- URL: http://arxiv.org/abs/2508.08667v1
- Date: Tue, 12 Aug 2025 06:21:27 GMT
- Title: Learning Generalizable and Efficient Image Watermarking via Hierarchical Two-Stage Optimization
- Authors: Ke Liu, Xuanhan Wang, Qilong Zhang, Lianli Gao, Jingkuan Song
- Abstract summary: We propose a two-stage optimization that enables a watermarking model to simultaneously achieve three criteria. HiWL effectively learns generalizable latent-space watermark representations while maintaining broad applicability. It achieves 7.6% higher accuracy in watermark extraction than existing methods, while maintaining extremely low latency (100K images processed in 8s).
- Score: 90.13049455759358
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Deep image watermarking, which enables imperceptible watermark embedding and reliable extraction in cover images, has been shown to be effective for copyright protection of image assets. However, existing methods face limitations in simultaneously satisfying three essential criteria for generalizable watermarking: 1) invisibility (imperceptible hiding of watermarks), 2) robustness (reliable watermark recovery under diverse conditions), and 3) broad applicability (low latency in the watermarking process). To address these limitations, we propose Hierarchical Watermark Learning (HiWL), a two-stage optimization that enables a watermarking model to achieve all three criteria simultaneously. In the first stage, distribution alignment learning establishes a common latent space under two constraints: 1) visual consistency between watermarked and non-watermarked images, and 2) information invariance across watermark latent representations. In this way, multi-modal inputs including watermark messages (binary codes) and cover images (RGB pixels) can be well represented, thereby ensuring watermark invisibility and robustness in the watermarking process. The second stage employs generalized watermark representation learning to establish a disentanglement policy that separates watermarks from image content in RGB space. In particular, it strongly penalizes substantial fluctuations in separated RGB watermarks corresponding to identical messages. Consequently, HiWL effectively learns generalizable latent-space watermark representations while maintaining broad applicability. Extensive experiments demonstrate the effectiveness of the proposed method. In particular, it achieves 7.6% higher accuracy in watermark extraction than existing methods, while maintaining extremely low latency (100K images processed in 8s).
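The two constraints described in the abstract can be sketched as simple training penalties. This is a minimal illustrative sketch only: the abstract does not give HiWL's exact loss functions, so the mean-squared forms and the function names below are assumptions, not the paper's implementation.

```python
import numpy as np

def visual_consistency_loss(cover, watermarked):
    # Stage-1 constraint (assumed MSE form): the watermarked image
    # should be visually indistinguishable from the cover image.
    return float(np.mean((watermarked - cover) ** 2))

def message_invariance_penalty(residuals):
    # Stage-2 disentanglement penalty (assumed form): RGB watermark
    # residuals separated for the SAME message should not fluctuate
    # across different cover images, so we penalize their variance
    # around the per-message mean residual.
    mean_residual = residuals.mean(axis=0, keepdims=True)
    return float(np.mean((residuals - mean_residual) ** 2))
```

Under this sketch, identical residuals for one message give a zero penalty, while covers that induce different residuals for the same message are penalized, pushing the model toward message-dependent (not image-dependent) watermarks.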
Related papers
- Learning to Watermark: A Selective Watermarking Framework for Large Language Models via Multi-Objective Optimization [17.15048594237333]
Existing watermarking techniques often face a trade-off between watermark detectability and generated text quality.
In this paper, we introduce Learning to Watermark (LTW), a novel selective watermarking framework.
arXiv Detail & Related papers (2025-10-13T01:07:38Z)
- PT-Mark: Invisible Watermarking for Text-to-image Diffusion Models via Semantic-aware Pivotal Tuning [19.170393134039568]
We present Semantic-aware Pivotal Tuning Watermarks (PT-Mark).
PT-Mark preserves both the semantics of diffusion images and the traceability of the watermark.
It achieves a 10% improvement in semantic preservation compared to state-of-the-art watermarking methods.
arXiv Detail & Related papers (2025-04-15T04:25:57Z)
- On the Coexistence and Ensembling of Watermarks [93.15379331904602]
We find that various open-source watermarks can coexist with only minor impacts on image quality and decoding robustness.
We show how ensembling can increase the overall message capacity and enable new trade-offs between capacity, accuracy, robustness and image quality, without needing to retrain the base models.
arXiv Detail & Related papers (2025-01-29T00:37:06Z)
- ROBIN: Robust and Invisible Watermarks for Diffusion Models with Adversarial Optimization [15.570148419846175]
Existing watermarking methods face the challenge of balancing robustness and concealment.
This paper introduces a watermark hiding process to actively achieve concealment, thus allowing the embedding of stronger watermarks.
Experiments on various diffusion models demonstrate the watermark remains verifiable even under significant image tampering.
arXiv Detail & Related papers (2024-11-06T12:14:23Z)
- RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z)
- T2IW: Joint Text to Image & Watermark Generation [74.20148555503127]
We introduce a novel task for the joint generation of text to image and watermark (T2IW).
This T2IW scheme ensures minimal damage to image quality when generating a compound image by forcing the semantic feature and the watermark signal to be compatible at the pixel level.
We demonstrate remarkable achievements in image quality, watermark invisibility, and watermark robustness, supported by our proposed set of evaluation metrics.
arXiv Detail & Related papers (2023-09-07T16:12:06Z)
- Adaptive Blind Watermarking Using Psychovisual Image Features [8.75217589103206]
This paper proposes an adaptive method that determines the strength of the watermark embedding in different parts of the cover image.
Experimental results also show that the proposed method can effectively reconstruct the embedded payload in different kinds of common watermarking attacks.
arXiv Detail & Related papers (2022-12-25T06:33:36Z)
- Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
- Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)
- Split then Refine: Stacked Attention-guided ResUNets for Blind Single Image Visible Watermark Removal [69.92767260794628]
Previous watermark removal methods require the watermark location from users or train a multi-task network to recover the background indiscriminately.
We propose a novel two-stage framework with a stacked attention-guided ResUNets to simulate the process of detection, removal and refinement.
We extensively evaluate our algorithm over four different datasets under various settings and the experiments show that our approach outperforms other state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2020-12-13T09:05:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.