OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking
- URL: http://arxiv.org/abs/2412.01615v1
- Date: Mon, 02 Dec 2024 15:38:44 GMT
- Title: OmniGuard: Hybrid Manipulation Localization via Augmented Versatile Deep Image Watermarking
- Authors: Xuanyu Zhang, Zecheng Tang, Zhipei Xu, Runyi Li, Youmin Xu, Bin Chen, Feng Gao, Jian Zhang
- Abstract summary: Existing versatile watermarking approaches suffer from trade-offs between tamper localization precision and visual quality.
We propose OmniGuard, a novel augmented versatile watermarking approach that integrates proactive embedding with passive, blind extraction.
Compared to the recent state-of-the-art EditGuard, our method improves PSNR of the container image by 4.25 dB, F1-Score under noisy conditions by 20.7%, and average bit accuracy by 14.8%.
- Score: 20.662260046296897
- License:
- Abstract: With the rapid growth of generative AI and its widespread application in image editing, new risks have emerged regarding the authenticity and integrity of digital content. Existing versatile watermarking approaches suffer from trade-offs between tamper localization precision and visual quality. Constrained by the limited flexibility of previous frameworks, their localization watermark must remain fixed across all images, and their copyright extraction accuracy under AIGC editing is also unsatisfactory. To address these challenges, we propose OmniGuard, a novel augmented versatile watermarking approach that integrates proactive embedding with passive, blind extraction for robust copyright protection and tamper localization. OmniGuard employs a hybrid forensic framework that enables flexible localization watermark selection and introduces a degradation-aware tamper extraction network for precise localization under challenging conditions. Additionally, a lightweight AIGC-editing simulation layer is designed to enhance robustness across global and local editing. Extensive experiments show that OmniGuard achieves superior fidelity, robustness, and flexibility. Compared to the recent state-of-the-art approach EditGuard, our method outperforms it by 4.25 dB in PSNR of the container image, 20.7% in F1-Score under noisy conditions, and 14.8% in average bit accuracy.
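A minimal, hypothetical sketch of the hybrid workflow the abstract describes: proactive embedding of a per-image localization watermark plus a copyright payload, followed by passive, blind extraction of a tamper mask and the payload. The network architectures, tensor shapes, and names such as TinyEncoder/TinyDecoder are illustrative assumptions, not the authors' released code.

```python
# Hypothetical sketch of a hybrid "proactive embed + passive blind extract"
# watermarking pipeline; stand-in modules only, not OmniGuard's actual networks.
import torch
import torch.nn as nn


class TinyEncoder(nn.Module):
    """Stand-in embedding network: hides copyright bits and a flexible,
    per-image localization watermark in the cover image via a small residual."""
    def __init__(self, bit_len: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3 + 1 + bit_len, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, cover, loc_wm, bits):
        b, _, h, w = cover.shape
        bit_planes = bits.view(b, -1, 1, 1).expand(b, bits.shape[1], h, w)
        residual = self.net(torch.cat([cover, loc_wm, bit_planes], dim=1))
        return (cover + 0.01 * residual).clamp(0, 1)  # container image


class TinyDecoder(nn.Module):
    """Stand-in blind extractor: predicts a tamper mask and the copyright bits
    from the (possibly degraded or edited) container alone."""
    def __init__(self, bit_len: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        self.mask_head = nn.Conv2d(32, 1, 1)
        self.bit_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                      nn.Linear(32, bit_len))

    def forward(self, container):
        feat = self.backbone(container)
        return torch.sigmoid(self.mask_head(feat)), torch.sigmoid(self.bit_head(feat))


if __name__ == "__main__":
    enc, dec = TinyEncoder(), TinyDecoder()
    cover = torch.rand(1, 3, 256, 256)
    loc_wm = torch.rand(1, 1, 256, 256)          # flexible localization watermark
    bits = torch.randint(0, 2, (1, 64)).float()  # copyright payload
    container = enc(cover, loc_wm, bits)
    tamper_mask, recovered_bits = dec(container)  # passive, blind extraction
    print(container.shape, tamper_mask.shape, recovered_bits.shape)
```

In the real system the decoder would be trained with a degradation-aware objective and an AIGC-editing simulation layer; the stand-in modules above only illustrate the embed/extract interfaces.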
Related papers
- SWA-LDM: Toward Stealthy Watermarks for Latent Diffusion Models [11.906245347904289]
We introduce SWA-LDM, a novel approach that enhances watermarking by randomizing the embedding process.
Our proposed watermark presence attack reveals the inherent vulnerabilities of existing latent-based watermarking methods.
This work represents a pivotal step towards securing LDM-generated images against unauthorized use.
arXiv Detail & Related papers (2025-02-14T16:55:45Z)
- SuperMark: Robust and Training-free Image Watermarking via Diffusion-based Super-Resolution [27.345134138673945]
We propose SuperMark, a robust, training-free watermarking framework.
SuperMark embeds the watermark into initial Gaussian noise using existing techniques.
It then applies pre-trained Super-Resolution models to denoise the watermarked noise, producing the final watermarked image.
For extraction, the process is reversed: the watermarked image is inverted back to the initial watermarked noise via DDIM Inversion, from which the embedded watermark is extracted.
Experiments demonstrate that SuperMark achieves fidelity comparable to existing methods while significantly improving robustness; a toy sketch of this embed-and-invert round trip follows this entry.
arXiv Detail & Related papers (2024-12-13T11:20:59Z)
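The SuperMark entry above describes an embed-into-noise / invert-to-extract round trip. The toy sketch below illustrates only that pattern: an orthogonal linear map stands in for the pre-trained super-resolution (diffusion) model, and its exact transpose stands in for DDIM inversion; the payload size, sign-based embedding, and all names are assumptions rather than the paper's method.

```python
# Toy sketch of a SuperMark-style round trip: write a watermark into the
# initial Gaussian noise, map noise -> image with a generative "denoiser",
# then invert the image back to noise and read the bits out.
import numpy as np

rng = np.random.default_rng(0)
DIM, BITS = 1024, 64  # flattened "image" size and payload length (assumptions)


def embed_in_noise(noise: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Force the sign of the first BITS noise entries to carry the payload."""
    wm_noise = noise.copy()
    wm_noise[:BITS] = np.abs(wm_noise[:BITS]) * (2 * bits - 1)
    return wm_noise


def extract_from_noise(noise: np.ndarray) -> np.ndarray:
    """Read the payload back from the signs of the recovered noise."""
    return (noise[:BITS] > 0).astype(int)


# Orthogonal stand-in for the pre-trained SR/diffusion model (invertible by construction).
Q, _ = np.linalg.qr(rng.standard_normal((DIM, DIM)))


def denoise(z: np.ndarray) -> np.ndarray:
    """Stand-in for the denoising step (watermarked noise -> watermarked image)."""
    return Q @ z


def invert(x: np.ndarray) -> np.ndarray:
    """Stand-in for DDIM inversion (watermarked image -> initial noise); exact here."""
    return Q.T @ x


bits = rng.integers(0, 2, BITS)
z0 = embed_in_noise(rng.standard_normal(DIM), bits)
image = denoise(z0)                                # final watermarked sample
recovered = extract_from_noise(invert(image))
print("bit accuracy:", (recovered == bits).mean())
```

With a real diffusion model the inversion is only approximate, so a practical extractor would typically read a robust statistic (e.g., correlation with a keyed pattern) rather than raw signs.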
- Towards Effective User Attribution for Latent Diffusion Models via Watermark-Informed Blending [54.26862913139299]
We introduce a novel framework, Towards Effective user Attribution for latent diffusion models via Watermark-Informed Blending (TEAWIB).
TEAWIB incorporates a unique ready-to-use configuration approach that allows seamless integration of user-specific watermarks into generative models.
Experiments validate the effectiveness of TEAWIB, showcasing the state-of-the-art performance in perceptual quality and attribution accuracy.
arXiv Detail & Related papers (2024-09-17T07:52:09Z)
- Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking [20.320229647850017]
Stable diffusion (SD) models have flourished in the field of image synthesis and personalized editing.
The exposure of AI-created content on public platforms could raise both legal and ethical risks.
In this work, we propose a Safe and highly traceable Stable Diffusion framework (namely Safe-SD) to adaptively implant watermarks into the imperceptible structure of the image.
arXiv Detail & Related papers (2024-07-18T05:53:17Z)
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z)
- JIGMARK: A Black-Box Approach for Enhancing Image Watermarks against Diffusion Model Edits [76.25962336540226]
JIGMARK is a first-of-its-kind watermarking technique that enhances robustness through contrastive learning.
Our evaluation reveals that JIGMARK significantly surpasses existing watermarking solutions in resilience to diffusion-model edits.
arXiv Detail & Related papers (2024-06-06T03:31:41Z)
- Diffusion-Based Hierarchical Image Steganography [60.69791384893602]
Hierarchical Image Steganography is a novel method that enhances the security and capacity of embedding multiple images into a single container.
It exploits the robustness of the Diffusion Model alongside the reversibility of the Flow Model.
The innovative structure can autonomously generate a container image, thereby securely and efficiently concealing multiple images and text.
arXiv Detail & Related papers (2024-05-19T11:29:52Z)
- EditGuard: Versatile Image Watermarking for Tamper Localization and Copyright Protection [19.140822655858873]
We propose a proactive forensics framework EditGuard to unify copyright protection and tamper-agnostic localization.
It can offer a meticulous embedding of imperceptible watermarks and precise decoding of tampered areas and copyright information.
Our experiments demonstrate that EditGuard balances the tamper localization accuracy, copyright recovery precision, and generalizability to various AIGC-based tampering methods.
arXiv Detail & Related papers (2023-12-12T15:41:24Z)
- FT-Shield: A Watermark Against Unauthorized Fine-tuning in Text-to-Image Diffusion Models [64.89896692649589]
We propose FT-Shield, a watermarking solution tailored for the fine-tuning of text-to-image diffusion models.
FT-Shield addresses copyright protection challenges by designing new watermark generation and detection strategies.
arXiv Detail & Related papers (2023-10-03T19:50:08Z)
- WMFormer++: Nested Transformer for Visible Watermark Removal via Implicit Joint Learning [68.00975867932331]
Existing watermark removal methods mainly rely on UNet with task-specific decoder branches.
We introduce an implicit joint learning paradigm to holistically integrate information from both branches.
The results demonstrate our approach's remarkable superiority, surpassing existing state-of-the-art methods by a large margin.
arXiv Detail & Related papers (2023-08-20T07:56:34Z)
- WSMN: An optimized multipurpose blind watermarking in Shearlet domain using MLP and NSGA-II [8.526086056172272]
This paper presents an optimized multipurpose blind watermarking scheme in the Shearlet domain, designed with the help of smart algorithms including MLP and NSGA-II.
In this method, four copies of the robust copyright logo are embedded in the approximate coefficients of the Shearlet transform.
An embedded random sequence, serving as a semi-fragile authentication mark, is effectively extracted from the detail coefficients by the neural network.
arXiv Detail & Related papers (2020-05-07T11:14:46Z)