SEAL: Semantic Aware Image Watermarking
- URL: http://arxiv.org/abs/2503.12172v2
- Date: Wed, 09 Apr 2025 13:30:18 GMT
- Title: SEAL: Semantic Aware Image Watermarking
- Authors: Kasra Arabi, R. Teal Witter, Chinmay Hegde, Niv Cohen
- Abstract summary: We propose a novel watermarking method that embeds semantic information about the generated image directly into the watermark. The key pattern can be inferred from the semantic embedding of the image using locality-sensitive hashing. Our results suggest that content-aware watermarks can mitigate risks arising from image-generative models.
- Score: 26.606008778795193
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative models have rapidly evolved to generate realistic outputs. However, their synthetic outputs increasingly challenge the clear distinction between natural and AI-generated content, necessitating robust watermarking techniques. Watermarks are typically expected to preserve the integrity of the target image, withstand removal attempts, and prevent unauthorized replication onto unrelated images. To address this need, recent methods embed persistent watermarks into images produced by diffusion models using the initial noise. Yet, to do so, they either distort the distribution of generated images or rely on searching through a long dictionary of used keys for detection. In this paper, we propose a novel watermarking method that embeds semantic information about the generated image directly into the watermark, enabling a distortion-free watermark that can be verified without requiring a database of key patterns. Instead, the key pattern can be inferred from the semantic embedding of the image using locality-sensitive hashing. Furthermore, conditioning the watermark detection on the original image content improves robustness against forgery attacks. To demonstrate that, we consider two largely overlooked attack strategies: (i) an attacker extracting the initial noise and generating a novel image with the same pattern; (ii) an attacker inserting an unrelated (potentially harmful) object into a watermarked image, possibly while preserving the watermark. We empirically validate our method's increased robustness to these attacks. Taken together, our results suggest that content-aware watermarks can mitigate risks arising from image-generative models.
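The mechanism described in the abstract, inferring the key pattern from an image's semantic embedding with locality-sensitive hashing, can be illustrated with a minimal SimHash-style sketch in Python. The embedding dimension, bit length, and the toy verification at the end are illustrative assumptions, not the paper's exact construction:

```python
import numpy as np

def lsh_key_from_embedding(embedding: np.ndarray, n_bits: int = 128, seed: int = 0) -> np.ndarray:
    """SimHash-style locality-sensitive hash: project the semantic embedding
    onto fixed random hyperplanes and keep the sign pattern as a binary key."""
    rng = np.random.default_rng(seed)  # the seed is shared by generator and verifier
    hyperplanes = rng.standard_normal((n_bits, embedding.shape[0]))
    return (hyperplanes @ embedding > 0).astype(np.uint8)

def key_agreement(key_a: np.ndarray, key_b: np.ndarray) -> float:
    """Fraction of key bits that match; similar images yield similar keys."""
    return float((key_a == key_b).mean())

# Toy verification: derive the key from the embedding at generation time, then
# re-embed the (slightly edited) suspect image and check how many bits agree.
emb_generation = np.random.default_rng(1).standard_normal(512)                 # stand-in semantic embedding
emb_verification = emb_generation + 0.05 * np.random.default_rng(2).standard_normal(512)
print(key_agreement(lsh_key_from_embedding(emb_generation),
                    lsh_key_from_embedding(emb_verification)))
```

Because nearby embeddings hash to the same or nearly the same bit pattern, the verifier can recover the expected key pattern from the image content alone, without a database of previously used keys.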
Related papers
- Forging and Removing Latent-Noise Diffusion Watermarks Using a Single Image [24.513881574366046]
Previous watermarking schemes embed a secret key in the initial noise.
We propose a black-box adversarial attack without presuming access to the diffusion model weights.
We show that a similar approach can also be used for watermark removal, by learning perturbations that push the image out of the watermark detection region.
arXiv Detail & Related papers (2025-04-27T18:29:26Z)
- RoboSignature: Robust Signature and Watermarking on Network Attacks [0.5461938536945723]
We present a novel adversarial fine-tuning attack that disrupts the model's ability to embed the intended watermark.
Our findings emphasize the importance of anticipating and defending against potential vulnerabilities in generative systems.
arXiv Detail & Related papers (2024-12-22T04:36:27Z)
- Hidden in the Noise: Two-Stage Robust Watermarking for Images [25.731533630250798]
We present a distortion-free watermarking method for images based on a diffusion model's initial noise.
Detecting the watermark requires comparing the initial noise reconstructed for an image to all previously used initial noises.
We propose a two-stage watermarking framework for efficient detection.
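The detection cost this entry describes, comparing an image's reconstructed initial noise against every previously used noise, is essentially a nearest-neighbour search. A minimal sketch of that naive baseline follows; the cosine-similarity threshold and function names are assumptions for illustration, and the paper's two-stage framework is designed precisely to avoid this linear scan:

```python
import numpy as np

def detect_by_noise_matching(reconstructed: np.ndarray,
                             used_noises: list[np.ndarray],
                             threshold: float = 0.6) -> int | None:
    """Naive detection: compare the initial noise reconstructed from an image
    (e.g. by inverting the diffusion process) against every noise the
    generator has ever used. Returns the index of the best match above
    `threshold`, or None if nothing matches."""
    query = reconstructed.ravel()
    query = query / np.linalg.norm(query)
    best_idx, best_sim = None, threshold
    for i, noise in enumerate(used_noises):          # O(number of generated images)
        candidate = noise.ravel()
        sim = float(query @ (candidate / np.linalg.norm(candidate)))  # cosine similarity
        if sim > best_sim:
            best_idx, best_sim = i, sim
    return best_idx
```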
arXiv Detail & Related papers (2024-12-05T22:50:42Z)
- Black-Box Forgery Attacks on Semantic Watermarks for Diffusion Models [16.57738116313139]
We show that attackers can leverage unrelated models, even with different latent spaces and architectures, to perform powerful and realistic forgery attacks.
The first imprints a targeted watermark into real images by manipulating the latent representation of an arbitrary image in an unrelated LDM.
The second attack generates new images with the target watermark by inverting a watermarked image and re-generating it with an arbitrary prompt.
arXiv Detail & Related papers (2024-12-04T12:57:17Z)
- Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z)
- Invisible Image Watermarks Are Provably Removable Using Generative AI [47.25747266531665]
Invisible watermarks safeguard images' copyrights by embedding hidden messages only detectable by owners.
We propose a family of regeneration attacks to remove these invisible watermarks.
The proposed attack method first adds random noise to an image to destroy the watermark and then reconstructs the image.
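A regeneration attack of the kind summarized above can be approximated with an off-the-shelf image-to-image diffusion pipeline: noising the watermarked image partway and denoising it back. The checkpoint name and `strength` value below are illustrative assumptions, not the attack's exact settings:

```python
import torch
from diffusers import StableDiffusionImg2ImgPipeline
from PIL import Image

# Any latent diffusion model works; it need not be the model that produced the image.
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")

watermarked = Image.open("watermarked.png").convert("RGB")

# `strength` sets how much noise is added before reconstruction: enough to wash
# out the embedded watermark, little enough to keep the visible content intact.
regenerated = pipe(prompt="", image=watermarked, strength=0.3).images[0]
regenerated.save("regenerated.png")
```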
arXiv Detail & Related papers (2023-06-02T23:29:28Z)
- Tree-Ring Watermarks: Fingerprints for Diffusion Images that are Invisible and Robust [55.91987293510401]
Watermarking the outputs of generative models is a crucial technique for tracing copyright and preventing potential harm from AI-generated content.
We introduce a novel technique called Tree-Ring Watermarking that robustly fingerprints diffusion model outputs.
Our watermark is semantically hidden in the image space and is far more robust than watermarking alternatives that are currently deployed.
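Tree-Ring places its pattern in the Fourier transform of the diffusion model's initial noise (a detail taken from the paper itself rather than this summary), which is why the mark survives pixel-space edits. Below is a minimal sketch of that construction; the latent shape, ring radius, and per-channel key placement are illustrative assumptions:

```python
import numpy as np

def tree_ring_initial_noise(shape=(4, 64, 64), radius=10, seed=0) -> np.ndarray:
    """Sketch of Tree-Ring-style watermarking: sample Gaussian latent noise,
    then overwrite a ring of low-frequency Fourier coefficients with a fixed
    key before running the diffusion sampler from this noise."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal(shape)
    freq = np.fft.fftshift(np.fft.fft2(noise), axes=(-2, -1))

    h, w = shape[-2], shape[-1]
    yy, xx = np.mgrid[:h, :w]
    dist = np.sqrt((yy - h / 2) ** 2 + (xx - w / 2) ** 2)
    ring = np.abs(dist - radius) < 1.0            # one-pixel-wide circular mask

    key = rng.standard_normal(int(ring.sum()))    # the fixed, detectable key pattern
    freq[..., ring] = key                         # same key in every channel (a simplification)

    # Back to the spatial domain; taking the real part slightly perturbs the
    # Gaussian noise distribution, which is acceptable for a sketch.
    return np.real(np.fft.ifft2(np.fft.ifftshift(freq, axes=(-2, -1)))).astype(np.float32)
```

Detection, in the paper, inverts a suspect image back to an estimate of its initial noise and checks the Fourier-domain ring against the key.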
arXiv Detail & Related papers (2023-05-31T17:00:31Z)
- Certified Neural Network Watermarks with Randomized Smoothing [64.86178395240469]
We propose a certifiable watermarking method for deep learning models.
We show that our watermark is guaranteed to be unremovable unless the model parameters are changed by more than a certain l2 threshold.
Our watermark is also empirically more robust compared to previous watermarking methods.
arXiv Detail & Related papers (2022-07-16T16:06:59Z)
- Exploring Structure Consistency for Deep Model Watermarking [122.38456787761497]
The intellectual property (IP) of deep neural networks (DNNs) can be easily "stolen" by surrogate model attacks.
We propose a new watermarking methodology, namely "structure consistency", based on which a new deep structure-aligned model watermarking algorithm is designed.
arXiv Detail & Related papers (2021-08-05T04:27:15Z)
- Fine-tuning Is Not Enough: A Simple yet Effective Watermark Removal Attack for DNN Models [72.9364216776529]
We propose a novel watermark removal attack from a different perspective.
We design a simple yet powerful transformation algorithm by combining imperceptible pattern embedding and spatial-level transformations.
Our attack can bypass state-of-the-art watermarking solutions with very high success rates.
arXiv Detail & Related papers (2020-09-18T09:14:54Z)