GenPTW: In-Generation Image Watermarking for Provenance Tracing and Tamper Localization
- URL: http://arxiv.org/abs/2504.19567v1
- Date: Mon, 28 Apr 2025 08:21:39 GMT
- Title: GenPTW: In-Generation Image Watermarking for Provenance Tracing and Tamper Localization
- Authors: Zhenliang Gan, Chunya Liu, Yichao Tang, Binghao Wang, Weiqiang Wang, Xinpeng Zhang
- Abstract summary: GenPTW is an in-generation image watermarking framework for latent diffusion models (LDMs). It embeds structured watermark signals during the image generation phase, enabling unified provenance tracing and tamper localization. Experiments demonstrate that GenPTW outperforms existing methods in image fidelity, watermark extraction accuracy, and tamper localization performance.
- Score: 32.843425702098116
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The rapid development of generative image models has brought tremendous opportunities to AI-generated content (AIGC) creation, while also introducing critical challenges in ensuring content authenticity and copyright ownership. Existing image watermarking methods, though partially effective, often rely on post-processing or reference images, and struggle to balance fidelity, robustness, and tamper localization. To address these limitations, we propose GenPTW, an In-Generation image watermarking framework for latent diffusion models (LDMs), which integrates Provenance Tracing and Tamper Localization into a unified Watermark-based design. It embeds structured watermark signals during the image generation phase, enabling unified provenance tracing and tamper localization. For extraction, we construct a frequency-coordinated decoder to improve robustness and localization precision in complex editing scenarios. Additionally, a distortion layer that simulates AIGC editing is introduced to enhance robustness. Extensive experiments demonstrate that GenPTW outperforms existing methods in image fidelity, watermark extraction accuracy, and tamper localization performance, offering an efficient and practical solution for trustworthy AIGC image generation.
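To make the abstract's pipeline more concrete, below is a minimal PyTorch sketch of the general idea behind in-generation watermarking with joint tamper localization: a small embedder perturbs the LDM latent with a structured message before it is decoded to pixels, and a decoder later recovers the bits (provenance) together with a per-pixel tamper mask (localization). All module names, tensor shapes, and the 48-bit message length are illustrative assumptions; GenPTW's actual frequency-coordinated decoder and AIGC-editing distortion layer are not reproduced here.

```python
# Conceptual sketch only: GenPTW's real architecture is not given in this abstract.
# WatermarkEmbedder, TamperAwareDecoder, the 48-bit message, and the latent shapes
# are hypothetical illustrations, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class WatermarkEmbedder(nn.Module):
    """Injects a binary message into the LDM latent during generation."""

    def __init__(self, num_bits=48, latent_channels=4):
        super().__init__()
        self.fc = nn.Linear(num_bits, latent_channels * 8 * 8)
        self.fuse = nn.Conv2d(2 * latent_channels, latent_channels, 3, padding=1)

    def forward(self, latent, bits):
        # bits: (B, num_bits) in {0, 1}; latent: (B, C, H, W)
        b, c, h, w = latent.shape
        msg = self.fc(bits.float()).view(b, c, 8, 8)
        msg = F.interpolate(msg, size=(h, w), mode="nearest")
        # Small additive residual keeps the generated image close to the original.
        return latent + 0.1 * torch.tanh(self.fuse(torch.cat([latent, msg], dim=1)))


class TamperAwareDecoder(nn.Module):
    """Recovers the message and predicts a tamper mask from the rendered image."""

    def __init__(self, num_bits=48):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.bit_head = nn.Linear(32, num_bits)   # provenance bits
        self.mask_head = nn.Conv2d(32, 1, 1)      # tamper localization map

    def forward(self, image):
        feats = self.backbone(image)
        bits = torch.sigmoid(self.bit_head(feats.mean(dim=(2, 3))))
        mask = torch.sigmoid(self.mask_head(feats))
        return bits, mask


# Toy usage: embed 48 bits into a dummy 64x64 latent, then decode from a
# placeholder image standing in for the LDM's rendered output.
embedder, decoder = WatermarkEmbedder(), TamperAwareDecoder()
latent = torch.randn(1, 4, 64, 64)
bits = torch.randint(0, 2, (1, 48))
wm_latent = embedder(latent, bits)        # would be fed to the LDM's VAE decoder
fake_image = torch.rand(1, 3, 512, 512)   # stand-in for the generated image
rec_bits, tamper_mask = decoder(fake_image)
```

In a trained system of this kind, the decoder would be optimized on watermarked images passed through simulated edits so that the recovered bits stay stable while the mask highlights edited regions; the sketch above only fixes the interfaces.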
Related papers
- TriniMark: A Robust Generative Speech Watermarking Method for Trinity-Level Attribution [3.1682080884953736]
We propose a generative speech watermarking method (TriniMark) for authenticating the generated content.
We first design a structure-lightweight watermark encoder that embeds watermarks into the time-domain features of speech.
A temporal-aware gated convolutional network is meticulously designed in the watermark decoder for bit-wise watermark recovery.
arXiv Detail & Related papers (2025-04-29T08:23:28Z) - Gaussian Shading++: Rethinking the Realistic Deployment Challenge of Performance-Lossless Image Watermark for Diffusion Models [66.54457339638004]
Copyright protection and inappropriate content generation pose challenges for the practical implementation of diffusion models. We propose a diffusion model watermarking method tailored for real-world deployment. Gaussian Shading++ not only maintains performance losslessness but also outperforms existing methods in terms of robustness.
arXiv Detail & Related papers (2025-04-21T11:18:16Z) - TraceMark-LDM: Authenticatable Watermarking for Latent Diffusion Models via Binary-Guided Rearrangement [21.94988216476109]
We introduce TraceMark-LDM, an algorithm that integrates watermarking to attribute generated images while guaranteeing non-destructive performance. Images synthesized using TraceMark-LDM exhibit superior quality and attribution accuracy compared to state-of-the-art (SOTA) techniques.
arXiv Detail & Related papers (2025-03-30T06:23:53Z) - Safe-VAR: Safe Visual Autoregressive Model for Text-to-Image Generative Watermarking [18.251123923955397]
Autoregressive learning has become a dominant approach for text-to-image generation, offering high efficiency and visual quality. Existing watermarking methods, designed for diffusion models, often struggle to adapt to the sequential nature of VAR models. We propose Safe-VAR, the first watermarking framework specifically designed for autoregressive text-to-image generation.
arXiv Detail & Related papers (2025-03-14T11:45:10Z) - Lost in Edits? A $λ$-Compass for AIGC Provenance [119.95562081325552]
We propose a novel latent-space attribution method that robustly identifies and differentiates authentic outputs from manipulated ones. LambdaTracer is effective across diverse iterative editing processes, whether automated through text-guided editing tools such as InstructPix2Pix or performed manually with editing software such as Adobe Photoshop.
arXiv Detail & Related papers (2025-02-05T06:24:25Z) - OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking [20.662260046296897]
Existing versatile watermarking approaches suffer from trade-offs between tamper localization precision and visual quality. We propose OmniGuard, a novel augmented versatile watermarking approach that integrates proactive embedding with passive, blind extraction. Our method outperforms the state-of-the-art approach by 4.25 dB in PSNR of the container image, 20.7% in F1-score under noisy conditions, and 14.8% in average bit accuracy.
arXiv Detail & Related papers (2024-12-02T15:38:44Z) - Uniform Attention Maps: Boosting Image Fidelity in Reconstruction and Editing [66.48853049746123]
We analyze reconstruction from a structural perspective and propose a novel approach that replaces traditional cross-attention with uniform attention maps.
Our method effectively minimizes distortions caused by varying text conditions during noise prediction.
Experimental results demonstrate that our approach not only excels in achieving high-fidelity image reconstruction but also performs robustly in real image composition and editing scenarios.
arXiv Detail & Related papers (2024-11-29T12:11:28Z) - Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299]
We introduce a novel framework Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending (TEAWIB). TEAWIB incorporates a unique ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models. Experiments validate the effectiveness of TEAWIB, showcasing state-of-the-art performance in perceptual quality and attribution accuracy.
arXiv Detail & Related papers (2024-09-17T07:52:09Z) - Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking [20.320229647850017]
Stable diffusion (SD) models have typically flourished in the field of image synthesis and personalized editing.
The exposure of AI-created content on public platforms could raise both legal and ethical risks.
In this work, we propose a Safe and high-traceable Stable Diffusion framework (namely Safe-SD) to adaptively implant the watermarks into the imperceptible structure.
arXiv Detail & Related papers (2024-07-18T05:53:17Z) - Coherent and Multi-modality Image Inpainting via Latent Space Optimization [61.99406669027195]
PILOT (inPainting vIa Latent OpTimization) is an optimization approach grounded on a novel semantic centralization and background preservation loss.
Our method searches latent spaces capable of generating inpainted regions that exhibit high fidelity to user-provided prompts while maintaining coherence with the background.
arXiv Detail & Related papers (2024-07-10T19:58:04Z) - Active Generation for Image Classification [45.93535669217115]
We propose to address the efficiency of image generation by focusing on the specific needs and characteristics of the model.
With a central tenet of active learning, our method, named ActGen, takes a training-aware approach to image generation.
arXiv Detail & Related papers (2024-03-11T08:45:31Z) - Watermarking Images in Self-Supervised Latent Spaces [75.99287942537138]
We revisit watermarking techniques based on pre-trained deep networks, in the light of self-supervised approaches.
We present a way to embed both marks and binary messages into their latent spaces, leveraging data augmentation at marking time.
arXiv Detail & Related papers (2021-12-17T15:52:46Z)