Can Generative Models Actually Forge Realistic Identity Documents?
- URL: http://arxiv.org/abs/2601.00829v1
- Date: Thu, 25 Dec 2025 00:56:50 GMT
- Title: Can Generative Models Actually Forge Realistic Identity Documents?
- Authors: Alexander Vinogradov
- Abstract summary: Open-source and publicly accessible generative models can produce identity document forgeries. The risk of generative identity document deepfakes achieving forensic-level authenticity may be overestimated.
- Score: 51.56484100374058
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative image models have recently shown significant progress in image realism, leading to public concerns about their potential misuse for document forgery. This paper explores whether contemporary open-source and publicly accessible diffusion-based generative models can produce identity document forgeries that could realistically bypass human or automated verification systems. We evaluate text-to-image and image-to-image generation pipelines using multiple publicly available generative model families, including Stable Diffusion, Qwen, Flux, Nano-Banana, and others. The findings indicate that while current generative models can simulate surface-level document aesthetics, they fail to reproduce structural and forensic authenticity. Consequently, the risk of generative identity document deepfakes achieving forensic-level authenticity may be overestimated, underscoring the value of collaboration between machine learning practitioners and document-forensics experts in realistic risk assessment.
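The abstract notes that generated documents fail "structural and forensic authenticity" checks applied by automated verification systems. The paper does not specify which checks it uses, but one widely deployed structural check, the ICAO 9303 machine-readable zone (MRZ) check digit found in passports, illustrates the kind of constraint a forgery must satisfy beyond surface-level aesthetics. A minimal sketch of that standard algorithm (weights 7-3-1, values 0-9 for digits, 10-35 for A-Z, 0 for the filler `<`):

```python
def mrz_char_value(c: str) -> int:
    """Map an MRZ character to its numeric value per ICAO Doc 9303."""
    if c.isdigit():
        return int(c)
    if c == "<":  # filler character counts as zero
        return 0
    return ord(c) - ord("A") + 10  # A=10 ... Z=35


def mrz_check_digit(field: str) -> int:
    """Compute the check digit for an MRZ field.

    Each character value is multiplied by the repeating weight
    sequence 7, 3, 1; the check digit is the sum modulo 10.
    """
    weights = (7, 3, 1)
    total = sum(
        mrz_char_value(ch) * weights[i % 3] for i, ch in enumerate(field)
    )
    return total % 10


# ICAO 9303 worked example: document number "L898902C3" has check digit 6.
print(mrz_check_digit("L898902C3"))  # -> 6
```

A generated image that merely mimics the look of an MRZ will almost always fail checks like this, since the check digits must be arithmetically consistent with the printed fields, which is one concrete reason surface realism alone does not yield forensic authenticity.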
Related papers
- ID-Booth: Identity-consistent Face Generation with Diffusion Models [27.46650231581887]
We present a novel generative diffusion-based framework called ID-Booth. The framework enables identity-consistent image generation while retaining the synthesis capabilities of pretrained diffusion models. Our method facilitates better intra-identity consistency and inter-identity separability than competing methods, while achieving higher image diversity.
arXiv Detail & Related papers (2025-04-10T02:20:18Z) - FakeScope: Large Multimodal Expert Model for Transparent AI-Generated Image Forensics [66.14786900470158]
We propose FakeScope, an expert multimodal model (LMM) tailored for AI-generated image forensics. FakeScope identifies AI-synthetic images with high accuracy and provides rich, interpretable, and query-driven forensic insights. FakeScope achieves state-of-the-art performance in both closed-ended and open-ended forensic scenarios.
arXiv Detail & Related papers (2025-03-31T16:12:48Z) - KITTEN: A Knowledge-Intensive Evaluation of Image Generation on Visual Entities [93.74881034001312]
KITTEN is a benchmark for Knowledge-InTensive image generaTion on real-world ENtities. We conduct a systematic study of the latest text-to-image models and retrieval-augmented models. Analysis shows that even advanced text-to-image models fail to generate accurate visual details of entities.
arXiv Detail & Related papers (2024-10-15T17:50:37Z) - How to Trace Latent Generative Model Generated Images without Artificial Watermark? [88.04880564539836]
Concerns have arisen regarding potential misuse related to images generated by latent generative models.
We propose a latent inversion based method called LatentTracer to trace the generated images of the inspected model.
Our experiments show that our method can distinguish the images generated by the inspected model and other images with a high accuracy and efficiency.
arXiv Detail & Related papers (2024-05-22T05:33:47Z) - Unveiling the Truth: Exploring Human Gaze Patterns in Fake Images [34.02058539403381]
We leverage human semantic knowledge to investigate whether it can be incorporated into frameworks for fake image detection.
A preliminary statistical analysis is conducted to explore the distinctive patterns in how humans perceive genuine and altered images.
arXiv Detail & Related papers (2024-03-13T19:56:30Z) - Text-image guided Diffusion Model for generating Deepfake celebrity interactions [50.37578424163951]
Diffusion models have recently demonstrated highly realistic visual content generation.
This paper devises and explores a novel method in that regard.
Our results show that with the devised scheme, it is possible to create fake visual content with alarming realism.
arXiv Detail & Related papers (2023-09-26T08:24:37Z) - Qualitative Failures of Image Generation Models and Their Application in Detecting Deepfakes [43.37813040320147]
A gap remains between the quality of generated images and those found in the real world.
By understanding these failures, we can identify areas where these models need improvement.
The prevalence of deep fakes in today's society is a serious concern.
arXiv Detail & Related papers (2023-03-29T15:26:44Z) - Responsible Disclosure of Generative Models Using Scalable Fingerprinting [70.81987741132451]
Deep generative models have achieved a qualitatively new level of performance.
There are concerns on how this technology can be misused to spoof sensors, generate deep fakes, and enable misinformation at scale.
Our work enables a responsible disclosure of such state-of-the-art generative models, that allows researchers and companies to fingerprint their models.
arXiv Detail & Related papers (2020-12-16T03:51:54Z) - On Attribution of Deepfakes [25.334701225923517]
Generative adversarial networks have made it possible to efficiently synthesize and alter media at scale.
Malicious individuals now rely on these machine-generated media, or deepfakes, to manipulate social discourse.
We present a technique to optimize over the source of entropy of each generative model to attribute a deepfake to one of the models.
arXiv Detail & Related papers (2020-08-20T20:25:18Z) - Artificial Fingerprinting for Generative Models: Rooting Deepfake Attribution in Training Data [64.65952078807086]
Photorealistic image generation has reached a new level of quality due to the breakthroughs of generative adversarial networks (GANs).
Yet, the dark side of such deepfakes, the malicious use of generated media, raises concerns about visual misinformation.
We seek a proactive and sustainable solution on deepfake detection by introducing artificial fingerprints into the models.
arXiv Detail & Related papers (2020-07-16T16:49:55Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.