Provenance Verification of AI-Generated Images via a Perceptual Hash Registry Anchored on Blockchain
- URL: http://arxiv.org/abs/2602.02412v1
- Date: Mon, 02 Feb 2026 18:13:09 GMT
- Title: Provenance Verification of AI-Generated Images via a Perceptual Hash Registry Anchored on Blockchain
- Authors: Apoorv Mohit, Bhavya Aggarwal, Chinmay Gondhalekar
- Abstract summary: This paper proposes a blockchain-backed framework for verifying AI-generated images through a registry-based provenance mechanism. The proposed system does not aim to universally detect all synthetic images, but instead focuses on verifying the provenance of AI-generated content that has been registered at creation time.
- Score: 0.0
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: The rapid advancement of artificial intelligence has made the generation of synthetic images widely accessible, increasing concerns related to misinformation, digital forgery, and content authenticity on large-scale online platforms. This paper proposes a blockchain-backed framework for verifying AI-generated images through a registry-based provenance mechanism. Each AI-generated image is assigned a digital fingerprint that preserves similarity using perceptual hashing and is registered at creation time by participating generation platforms. The hashes are stored on a hybrid on-chain/off-chain public blockchain using a Merkle Patricia Trie for tamper-resistant storage (on-chain) and a Burkhard-Keller tree (off-chain) to enable efficient similarity search over large image registries. Verification is performed when images are re-uploaded to digital platforms such as social media services, enabling identification of previously registered AI-generated images even after benign transformations or partial modifications. The proposed system does not aim to universally detect all synthetic images, but instead focuses on verifying the provenance of AI-generated content that has been registered at creation time. By design, this approach complements existing watermarking and learning-based detection methods, providing a platform-agnostic, tamper-proof mechanism for scalable content provenance and authenticity verification at the point of large-scale online distribution.
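The registry described above rests on three components: a similarity-preserving perceptual hash, an off-chain BK-tree for radius search under Hamming distance, and an on-chain tamper-evident digest. The following is a minimal sketch, not the paper's implementation: the 64-bit average hash over an 8x8 grayscale grid is an illustrative stand-in for whatever perceptual hash the system uses, and a plain binary Merkle root stands in for the paper's Merkle Patricia Trie.

```python
import hashlib

def average_hash(pixels):
    """64-bit average hash of an 8x8 grayscale grid (list of 64 ints)."""
    avg = sum(pixels) / len(pixels)
    bits = 0
    for p in pixels:
        bits = (bits << 1) | (1 if p >= avg else 0)
    return bits

def hamming(a, b):
    """Hamming distance between two 64-bit hashes."""
    return bin(a ^ b).count("1")

class BKTree:
    """Burkhard-Keller tree over integer hashes keyed by Hamming distance."""
    def __init__(self):
        self.root = None  # (hash, {edge_distance: child_node})

    def add(self, h):
        if self.root is None:
            self.root = (h, {})
            return
        node = self.root
        while True:
            d = hamming(h, node[0])
            if d == 0:
                return  # hash already registered
            if d not in node[1]:
                node[1][d] = (h, {})
                return
            node = node[1][d]

    def search(self, h, radius):
        """All registered hashes within `radius` bit flips of h."""
        matches, stack = [], [self.root] if self.root else []
        while stack:
            value, children = stack.pop()
            d = hamming(h, value)
            if d <= radius:
                matches.append(value)
            # The triangle inequality prunes subtrees whose edge
            # distance falls outside [d - radius, d + radius].
            for dist, child in children.items():
                if d - radius <= dist <= d + radius:
                    stack.append(child)
        return matches

def merkle_root(leaves):
    """Plain binary Merkle root over byte-string leaves. The paper anchors
    a Merkle Patricia Trie on-chain; this is a simplified stand-in."""
    level = [hashlib.sha256(x).digest() for x in leaves]
    if not level:
        return hashlib.sha256(b"").digest()
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last node on odd levels
        level = [hashlib.sha256(level[i] + level[i + 1]).digest()
                 for i in range(0, len(level), 2)]
    return level[0]
```

In this sketch, a generation platform would register `average_hash(image)` in the BK-tree at creation time and periodically anchor `merkle_root` of the registered hashes on-chain; verification of a re-uploaded image reduces to one hash computation plus a bounded-radius tree search, which is how benign transformations (small Hamming perturbations) still match.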
Related papers
- Adapter Shield: A Unified Framework with Built-in Authentication for Preventing Unauthorized Zero-Shot Image-to-Image Generation [74.5813283875938]
Zero-shot image-to-image generation poses substantial risks related to intellectual property violations. This work presents Adapter Shield, the first universal and authentication-integrated solution aimed at defending personal images from misuse. Our method surpasses existing state-of-the-art defenses in blocking unauthorized zero-shot image synthesis.
arXiv Detail & Related papers (2025-11-25T04:49:16Z) - Provenance of AI-Generated Images: A Vector Similarity and Blockchain-based Approach [3.632189127068905]
We propose an embedding-based AI image detection framework to distinguish AI-generated images from real (human-created) ones. Our methodology is built on the hypothesis that AI-generated images demonstrate closer embedding proximity to other AI-generated content. Our results confirm that moderate to high perturbations minimally impact the embedding signatures, with perturbed images maintaining close similarity matches to their original versions.
arXiv Detail & Related papers (2025-10-15T00:49:56Z) - A Watermark for Auto-Regressive Image Generation Models [50.599325258178254]
We propose C-reweight, a distortion-free watermarking method explicitly designed for image generation models. C-reweight mitigates retokenization mismatch while preserving image fidelity.
arXiv Detail & Related papers (2025-06-13T00:15:54Z) - Provenance Detection for AI-Generated Images: Combining Perceptual Hashing, Homomorphic Encryption, and AI Detection Models [0.0]
We develop a framework for secure, transformation-resilient AI content detection. We develop an adversarially robust state-of-the-art perceptual hashing model, DinoHash. We integrate a Multi-Party Fully Homomorphic Encryption (MP-FHE) scheme into our proposed framework to ensure the protection of both user queries and registry privacy.
arXiv Detail & Related papers (2025-03-14T08:42:18Z) - Safe-SD: Safe and Traceable Stable Diffusion with Text Prompt Trigger for Invisible Generative Watermarking [20.320229647850017]
Stable diffusion (SD) models have typically flourished in the field of image synthesis and personalized editing.
The exposure of AI-created content on public platforms could raise both legal and ethical risks.
In this work, we propose a Safe and highly traceable Stable Diffusion framework (namely SafeSD) that adaptively implants watermarks into imperceptible image structure.
arXiv Detail & Related papers (2024-07-18T05:53:17Z) - Solutions to Deepfakes: Can Camera Hardware, Cryptography, and Deep Learning Verify Real Images? [51.3344199560726]
It is imperative to establish methods that can separate real data from synthetic data with high confidence.
This document presents known strategies in detection and cryptography that can be employed to verify which images are real.
arXiv Detail & Related papers (2024-07-04T22:01:21Z) - Unrecognizable Yet Identifiable: Image Distortion with Preserved Embeddings [22.338328674283062]
We introduce an innovative image transformation technique that renders facial images unrecognizable to the eye while maintaining their identifiability by neural network models.
The proposed methodology can be used in various artificial intelligence applications to distort the visual data and keep the derived features close.
We show that it is possible to build the distortion that changes the image content by more than 70% while maintaining the same recognition accuracy.
arXiv Detail & Related papers (2024-01-26T18:20:53Z) - RAW: A Robust and Agile Plug-and-Play Watermark Framework for AI-Generated Images with Provable Guarantees [33.61946642460661]
This paper introduces a robust and agile watermark detection framework, dubbed as RAW.
We employ a classifier that is jointly trained with the watermark to detect the presence of the watermark.
We show that the framework provides provable guarantees regarding the false positive rate for misclassifying a watermarked image.
arXiv Detail & Related papers (2024-01-23T22:00:49Z) - Catch You Everything Everywhere: Guarding Textual Inversion via Concept Watermarking [67.60174799881597]
We propose the novel concept watermarking, where watermark information is embedded into the target concept and then extracted from generated images based on the watermarked concept.
In practice, the concept owner can upload his concept with different watermarks (i.e., serial numbers) to the platform, and the platform allocates different serial numbers to different users for subsequent tracing and forensics.
arXiv Detail & Related papers (2023-09-12T03:33:13Z) - Human-imperceptible, Machine-recognizable Images [76.01951148048603]
Software engineers face a major conflict between building better AI systems and keeping their distance from sensitive training data.
This paper proposes an efficient privacy-preserving learning paradigm, where images are encrypted to become "human-imperceptible, machine-recognizable".
We show that the proposed paradigm can ensure the encrypted images have become human-imperceptible while preserving machine-recognizable information.
arXiv Detail & Related papers (2023-06-06T13:41:37Z) - Reversible Watermarking in Deep Convolutional Neural Networks for Integrity Authentication [78.165255859254]
We propose a reversible watermarking algorithm for integrity authentication.
The influence of embedding reversible watermarking on the classification performance is less than 0.5%.
At the same time, the integrity of the model can be verified by applying the reversible watermarking.
arXiv Detail & Related papers (2021-04-09T09:32:21Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences arising from its use.