Secure and Robust Watermarking for AI-generated Images: A Comprehensive Survey
- URL: http://arxiv.org/abs/2510.02384v1
- Date: Tue, 30 Sep 2025 18:59:05 GMT
- Title: Secure and Robust Watermarking for AI-generated Images: A Comprehensive Survey
- Authors: Jie Cao, Qi Li, Zelin Zhang, Jianbing Ni
- Abstract summary: The rapid advancement of generative artificial intelligence (Gen-AI) has facilitated the effortless creation of high-quality images. Watermarking has emerged as a promising solution to these challenges by distinguishing AI-generated images from natural content. The survey aims to equip researchers with a holistic understanding of AI-generated image watermarking technologies.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The rapid advancement of generative artificial intelligence (Gen-AI) has facilitated the effortless creation of high-quality images, while simultaneously raising critical concerns regarding intellectual property protection, authenticity, and accountability. Watermarking has emerged as a promising solution to these challenges by distinguishing AI-generated images from natural content, ensuring provenance, and fostering trustworthy digital ecosystems. This paper presents a comprehensive survey of the current state of AI-generated image watermarking, addressing five key dimensions: (1) formalization of image watermarking systems; (2) an overview and comparison of diverse watermarking techniques; (3) evaluation methodologies with respect to visual quality, capacity, and detectability; (4) vulnerabilities to malicious attacks; and (5) prevailing challenges and future directions. The survey aims to equip researchers with a holistic understanding of AI-generated image watermarking technologies, thereby promoting their continued development.
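The survey's first dimension, the formalization of image watermarking systems, can be illustrated with a minimal embed/decode/detect interface. The sketch below uses naive least-significant-bit (LSB) embedding purely as an illustration of that interface; it is an assumption of this example, not a method from the survey, and LSB embedding is fragile compared to the frequency-domain and in-generation schemes the survey covers. Detection follows the common pattern of thresholding the bit accuracy between the decoded payload and a secret key.

```python
import numpy as np

def embed(image: np.ndarray, bits: np.ndarray) -> np.ndarray:
    """Write each payload bit into the least-significant bit of one pixel.

    Illustrative only: LSB embedding does not survive compression or
    resizing, unlike the robust schemes surveyed in the paper.
    """
    flat = image.flatten()  # flatten() returns a copy, so the input is untouched
    flat[: len(bits)] = (flat[: len(bits)] & 0xFE) | bits
    return flat.reshape(image.shape)

def decode(image: np.ndarray, n_bits: int) -> np.ndarray:
    """Read the payload back out of the least-significant bits."""
    return image.flatten()[:n_bits] & 1

def detect(image: np.ndarray, key_bits: np.ndarray, threshold: float = 0.9):
    """Declare the image watermarked if decoded bits match the key
    above a fixed bit-accuracy threshold (a common detection rule)."""
    acc = float((decode(image, len(key_bits)) == key_bits).mean())
    return acc >= threshold, acc

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in image
key = rng.integers(0, 2, size=128, dtype=np.uint8)          # secret payload
watermarked = embed(img, key)
is_marked, accuracy = detect(watermarked, key)
```

On the watermarked copy the payload decodes exactly, so detection succeeds; on an unrelated image the decoded bits match the key only about half the time, falling well below the threshold. The survey's evaluation dimensions map directly onto this interface: visual quality measures the embed distortion, capacity is `len(bits)`, and detectability is the behavior of `detect` under attack.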
Related papers
- IConMark: Robust Interpretable Concept-Based Watermark For AI Images [50.045011844765185]
We propose IConMark, a novel in-generation robust semantic watermarking method. IConMark embeds interpretable concepts into AI-generated images, making it resilient to adversarial manipulation. We demonstrate its superiority in terms of detection accuracy and preserved image quality.
arXiv Detail & Related papers (2025-07-17T05:38:30Z) - RAID: A Dataset for Testing the Adversarial Robustness of AI-Generated Image Detectors [57.81012948133832]
We present RAID (Robust evaluation of AI-generated image Detectors), a dataset of 72k diverse and highly transferable adversarial examples. Our methodology generates adversarial images that transfer with a high success rate to unseen detectors. Our findings indicate that current state-of-the-art AI-generated image detectors can be easily deceived by adversarial examples.
arXiv Detail & Related papers (2025-06-04T14:16:00Z) - Visual Watermarking in the Era of Diffusion Models: Advances and Challenges [46.52694938281591]
We analyze the strengths and challenges of watermark techniques related to diffusion models. We aim to advance the discourse on preserving watermark robustness against evolving forgery threats.
arXiv Detail & Related papers (2025-05-13T03:14:18Z) - Watermarking for AI Content Detection: A Review on Text, Visual, and Audio Modalities [2.3543188414616534]
Generative artificial intelligence (GenAI) has revolutionized content creation across text, visual, and audio domains. We develop a structured taxonomy categorizing watermarking methods for text, visual, and audio modalities. We identify key challenges, including resistance to adversarial attacks, lack of standardization across different content types, and ethical considerations related to privacy and content ownership.
arXiv Detail & Related papers (2025-04-02T15:18:10Z) - Watermarking across Modalities for Content Tracing and Generative AI [2.456311843339488]
This thesis includes the development of new watermarking techniques for images, audio, and text. We first introduce methods for active moderation of images on social platforms. We then develop specific techniques for AI-generated content.
arXiv Detail & Related papers (2025-02-04T18:49:50Z) - SoK: Watermarking for AI-Generated Content [112.9218881276487]
Watermarking schemes embed hidden signals within AI-generated content to enable reliable detection. Watermarks can play a crucial role in enhancing AI safety and trustworthiness by combating misinformation and deception. This work aims to guide researchers in advancing watermarking methods and applications, and support policymakers in addressing the broader implications of GenAI.
arXiv Detail & Related papers (2024-11-27T16:22:33Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z) - Evading Watermark based Detection of AI-Generated Content [45.47476727209842]
A generative AI model can generate extremely realistic-looking content.
Watermarking has been leveraged to detect AI-generated content.
Content is detected as AI-generated if a similar watermark can be decoded from it.
arXiv Detail & Related papers (2023-05-05T19:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences arising from its use.