Adoption of Watermarking Measures for AI-Generated Content and Implications under the EU AI Act
- URL: http://arxiv.org/abs/2503.18156v2
- Date: Tue, 03 Jun 2025 16:23:43 GMT
- Title: Adoption of Watermarking Measures for AI-Generated Content and Implications under the EU AI Act
- Authors: Bram Rijsbosch, Gijs van Dijck, Konrad Kollnig
- Abstract summary: This paper provides an empirical analysis of 50 widely used AI systems for image generation, embedded into a legal analysis of the AI Act. We find that only a minority of AI image generators currently implement adequate watermarking and deep fake labelling.
- Score: 4.2125200966193885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: AI-generated images have become so good in recent years that individuals often cannot distinguish them any more from "real" images. This development, combined with the rapid spread of AI-generated content online, creates a series of societal risks, particularly with the emergence of "deep fakes" that impersonate real individuals. Watermarking, a technique that involves embedding information within images and other content to indicate their AI-generated nature, has emerged as a primary mechanism to address the risks posed by AI-generated content. Indeed, watermarking and AI labelling measures are now becoming a legal requirement in many jurisdictions, including under the 2024 European Union AI Act. Despite the widespread use of AI image generation systems, the current status of the implementation of such measures remains largely unexamined. Moreover, the practical implications of the AI Act's watermarking and labelling requirements have not previously been studied. The present paper therefore provides an empirical analysis of 50 widely used AI systems for image generation, embedded into a legal analysis of the AI Act. In our legal analysis, we identify four categories of generative AI image deployment scenarios relevant under the AI Act and outline how the legal obligations apply in each category. In our empirical analysis, we find that only a minority of AI image generators currently implement adequate watermarking (38%) and deep fake labelling (8%) practices. In response, we suggest a range of avenues for improving the implementation of these legally mandated techniques, and publicly share our tooling for the easy detection of watermarks in images.
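The abstract describes watermarking as embedding information within an image to signal its AI-generated nature. As a minimal sketch of that principle (not the paper's actual tooling, and far less robust than the schemes surveyed here), the example below hides a bit-string marker in the least-significant bits of pixel values and then recovers it:

```python
# Hypothetical illustration of an invisible image watermark: embed a bit
# string in the least-significant bits (LSBs) of pixel values, then decode
# it. Production schemes use frequency-domain or learned watermarks that
# survive compression and editing; this only demonstrates the core idea.
import numpy as np

def embed_watermark(image: np.ndarray, bits: str) -> np.ndarray:
    """Overwrite the LSB of the first len(bits) pixels with the payload."""
    flat = image.flatten()  # flatten() returns a copy
    if len(bits) > flat.size:
        raise ValueError("image too small for payload")
    for i, b in enumerate(bits):
        flat[i] = (flat[i] & 0xFE) | int(b)  # clear LSB, set payload bit
    return flat.reshape(image.shape)

def extract_watermark(image: np.ndarray, n_bits: int) -> str:
    """Read the payload back out of the LSBs."""
    flat = image.flatten()
    return "".join(str(flat[i] & 1) for i in range(n_bits))

# Example: tag an image with an 8-bit "AI-generated" marker.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16), dtype=np.uint8)
marker = "10110001"
tagged = embed_watermark(img, marker)
assert extract_watermark(tagged, 8) == marker
```

Because only the lowest bit of each affected pixel changes, the watermarked image is visually indistinguishable from the original, which is exactly the property (imperceptibility) that makes such marks "invisible" and also what removal attacks like those in the papers below try to exploit.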
Related papers
- On-Device Watermarking: A Socio-Technical Imperative For Authenticity In The Age of Generative AI [0.0]
We argue that we are adopting the wrong approach, and should instead focus on watermarking via cryptographic signatures.
For audio-visual content, in particular, all real content is grounded in the physical world and captured via hardware sensors.
arXiv Detail & Related papers (2025-04-15T20:36:52Z) - Could AI Trace and Explain the Origins of AI-Generated Images and Text? [53.11173194293537]
AI-generated content is increasingly prevalent in the real world.
adversaries might exploit large multimodal models to create images that violate ethical or legal standards.
Paper reviewers may misuse large language models to generate reviews without genuine intellectual effort.
arXiv Detail & Related papers (2025-04-05T20:51:54Z) - Watermarking across Modalities for Content Tracing and Generative AI [2.456311843339488]
This thesis includes the development of new watermarking techniques for images, audio, and text. We first introduce methods for active moderation of images on social platforms. We then develop specific techniques for AI-generated content.
arXiv Detail & Related papers (2025-02-04T18:49:50Z) - AI-generated Image Quality Assessment in Visual Communication [72.11144790293086]
AIGI-VC is a quality assessment database for AI-generated images in visual communication. The dataset consists of 2,500 images spanning 14 advertisement topics and 8 emotion types. It provides coarse-grained human preference annotations and fine-grained preference descriptions, benchmarking the abilities of IQA methods in preference prediction, interpretation, and reasoning.
arXiv Detail & Related papers (2024-12-20T08:47:07Z) - SoK: Watermarking for AI-Generated Content [112.9218881276487]
Watermarking schemes embed hidden signals within AI-generated content to enable reliable detection. Watermarks can play a crucial role in enhancing AI safety and trustworthiness by combating misinformation and deception. This work aims to guide researchers in advancing watermarking methods and applications, and support policymakers in addressing the broader implications of GenAI.
arXiv Detail & Related papers (2024-11-27T16:22:33Z) - SoK: On the Role and Future of AIGC Watermarking in the Era of Gen-AI [24.187726079290357]
AIGC watermarks offer an effective solution to mitigate malicious activities.
We provide a taxonomy based on the core properties of the watermark.
We discuss the functionality and security threats of AIGC watermarking.
arXiv Detail & Related papers (2024-11-18T11:26:42Z) - Certifiably Robust Image Watermark [57.546016845801134]
Generative AI raises many societal concerns such as boosting disinformation and propaganda campaigns.
Watermarking AI-generated content is a key technology to address these concerns.
We propose the first image watermarks with certified robustness guarantees against removal and forgery attacks.
arXiv Detail & Related papers (2024-07-04T17:56:04Z) - Protect-Your-IP: Scalable Source-Tracing and Attribution against Personalized Generation [19.250673262185767]
We propose a unified approach for image copyright source-tracing and attribution.
We introduce an innovative watermarking-attribution method that blends proactive and passive strategies.
We have conducted experiments using various celebrity portrait series sourced online.
arXiv Detail & Related papers (2024-05-26T15:14:54Z) - The Adversarial AI-Art: Understanding, Generation, Detection, and Benchmarking [47.08666835021915]
We present a systematic attempt at understanding and detecting AI-generated images (AI-art) in adversarial scenarios.
The dataset, named ARIA, contains over 140K images in five categories: artworks (painting), social media images, news photos, disaster scenes, and anime pictures.
arXiv Detail & Related papers (2024-04-22T21:00:13Z) - AIGCOIQA2024: Perceptual Quality Assessment of AI Generated Omnidirectional Images [70.42666704072964]
We establish a large-scale AI generated omnidirectional image IQA database named AIGCOIQA2024.
A subjective IQA experiment is conducted to assess human visual preferences from three perspectives.
We conduct a benchmark experiment to evaluate the performance of state-of-the-art IQA models on our database.
arXiv Detail & Related papers (2024-04-01T10:08:23Z) - CopyScope: Model-level Copyright Infringement Quantification in the Diffusion Workflow [6.6282087165087304]
Copyright infringement quantification is the primary and challenging step towards AI-generated image copyright traceability.
We propose CopyScope, a new framework to quantify the infringement of AI-generated images from the model level.
arXiv Detail & Related papers (2023-10-13T13:08:09Z) - DeepfakeArt Challenge: A Benchmark Dataset for Generative AI Art Forgery and Data Poisoning Detection [57.51313366337142]
There has been growing concern over the use of generative AI for malicious purposes.
In the realm of visual content synthesis using generative AI, key areas of significant concern have been image forgery and data poisoning.
We introduce the DeepfakeArt Challenge, a large-scale challenge benchmark dataset designed specifically to aid in the building of machine learning algorithms for generative AI art forgery and data poisoning detection.
arXiv Detail & Related papers (2023-06-02T05:11:27Z) - Evading Watermark based Detection of AI-Generated Content [45.47476727209842]
A generative AI model can generate extremely realistic-looking content.
Watermarking has been leveraged to detect AI-generated content. Content is detected as AI-generated if a similar watermark can be decoded from it.
arXiv Detail & Related papers (2023-05-05T19:20:29Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.