AI-Generated Image Detectors Overrely on Global Artifacts: Evidence from Inpainting Exchange
- URL: http://arxiv.org/abs/2602.00192v1
- Date: Fri, 30 Jan 2026 09:14:10 GMT
- Title: AI-Generated Image Detectors Overrely on Global Artifacts: Evidence from Inpainting Exchange
- Authors: Elif Nebioglu, Emirhan Bilgiç, Adrian Popescu
- Abstract summary: We show that VAE-based reconstruction induces a subtle but pervasive spectral shift across the entire image, including unedited regions. We introduce Inpainting Exchange (INP-X), an operation that restores original pixels outside the edited region while preserving all synthesized content. Our findings highlight the need for content-aware detection. Indeed, training on our dataset yields better generalization and localization than standard inpainting.
- Score: 1.2944480428047747
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Modern deep learning-based inpainting enables realistic local image manipulation, raising critical challenges for reliable detection. However, we observe that current detectors primarily rely on global artifacts that appear as inpainting side effects, rather than on locally synthesized content. We show that this behavior occurs because VAE-based reconstruction induces a subtle but pervasive spectral shift across the entire image, including unedited regions. To isolate this effect, we introduce Inpainting Exchange (INP-X), an operation that restores original pixels outside the edited region while preserving all synthesized content. We create a 90K test dataset including real, inpainted, and exchanged images to evaluate this phenomenon. Under this intervention, pretrained state-of-the-art detectors, including commercial ones, exhibit a dramatic drop in accuracy (e.g., from 91% to 55%), frequently approaching chance level. We provide a theoretical analysis linking this behavior to high-frequency attenuation caused by VAE information bottlenecks. Our findings highlight the need for content-aware detection. Indeed, training on our dataset yields better generalization and localization than standard inpainting. Our dataset and code are publicly available at https://github.com/emirhanbilgic/INP-X.
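The INP-X operation described above is, in essence, a masked composite: synthesized pixels are kept inside the edited region while all other pixels are restored from the original image, which strips away the global VAE artifacts the paper discusses. The sketch below (not the authors' released code; function names, the grayscale conversion, and the 0.25 radial cutoff are illustrative assumptions) shows this composite alongside a simple high-frequency energy ratio that could expose the kind of spectral attenuation the paper attributes to VAE bottlenecks:

```python
import numpy as np

def inpainting_exchange(original, inpainted, mask):
    """Minimal INP-X-style composite: keep the inpainted pixels where
    mask is True, restore the original pixels everywhere else."""
    mask = mask.astype(bool)
    # Broadcast a 2-D mask over the channel axis for color images.
    full_mask = mask[..., None] if original.ndim == 3 else mask
    return np.where(full_mask, inpainted, original)

def high_freq_energy_ratio(image, cutoff=0.25):
    """Fraction of spectral energy above a normalized radial frequency.
    A VAE round-trip that attenuates high frequencies would lower this."""
    gray = image.mean(axis=-1) if image.ndim == 3 else image
    spectrum = np.abs(np.fft.fftshift(np.fft.fft2(gray))) ** 2
    h, w = gray.shape
    yy, xx = np.mgrid[-(h // 2):h - h // 2, -(w // 2):w - w // 2]
    radius = np.sqrt((yy / (h / 2)) ** 2 + (xx / (w / 2)) ** 2)
    return spectrum[radius > cutoff].sum() / spectrum.sum()
```

Because the exchanged image differs from the real one only inside the mask, a detector that still separates the two must be reacting to the synthesized content itself rather than to a global spectral fingerprint.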
Related papers
- Detecting Localized Deepfakes: How Well Do Synthetic Image Detectors Handle Inpainting? [2.6743542260081408]
Generative AI has enabled highly realistic image manipulations, including inpainting and region-level editing. These approaches preserve most of the original visual context and are increasingly exploited in cybersecurity-relevant threat scenarios. This work presents a systematic evaluation of state-of-the-art detectors, originally trained for deepfake detection on fully synthetic images, when applied to a distinct challenge: localized inpainting detection.
arXiv Detail & Related papers (2025-12-18T15:54:51Z) - Zooming In on Fakes: A Novel Dataset for Localized AI-Generated Image Detection with Forgery Amplification Approach [69.01456182499486]
BR-Gen is a large-scale dataset of 150,000 locally forged images with diverse scene-aware annotations. NFA-ViT is a Noise-guided Forgery Amplification Vision Transformer that enhances the detection of localized forgeries.
arXiv Detail & Related papers (2025-04-16T09:57:23Z) - PIGUIQA: A Physical Imaging Guided Perceptual Framework for Underwater Image Quality Assessment [59.9103803198087]
We propose a Physical Imaging Guided perceptual framework for Underwater Image Quality Assessment (UIQA). By leveraging underwater radiative transfer theory, we integrate physics-based imaging estimations to establish quantitative metrics for these distortions. The proposed model accurately predicts image quality scores and achieves state-of-the-art performance.
arXiv Detail & Related papers (2024-12-20T03:31:45Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing pose serious risks for generative models. In this paper, we investigate how detection performance varies across model backbones, types, and datasets. We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - Improving Synthetic Image Detection Towards Generalization: An Image Transformation Perspective [45.210030086193775]
Current synthetic image detection (SID) pipelines are primarily dedicated to crafting universal artifact features. We propose SAFE, a lightweight and effective detector with three simple image transformations. Our pipeline achieves a new state-of-the-art performance, with remarkable improvements of 4.5% in accuracy and 2.9% in average precision against existing methods.
arXiv Detail & Related papers (2024-08-13T09:01:12Z) - Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z) - Perceptual Artifacts Localization for Image Synthesis Tasks [59.638307505334076]
We introduce a novel dataset comprising 10,168 generated images, each annotated with per-pixel perceptual artifact labels.
A segmentation model, trained on our proposed dataset, effectively localizes artifacts across a range of tasks.
We propose an innovative zoom-in inpainting pipeline that seamlessly rectifies perceptual artifacts in the generated images.
arXiv Detail & Related papers (2023-10-09T10:22:08Z) - TruFor: Leveraging all-round clues for trustworthy image forgery detection and localization [17.270110456445806]
TruFor is a forensic framework that can be applied to a large variety of image manipulation methods.
We rely on the extraction of both high-level and low-level traces through a transformer-based fusion architecture.
Our method is able to reliably detect and localize both cheapfakes and deepfakes manipulations outperforming state-of-the-art works.
arXiv Detail & Related papers (2022-12-21T11:49:43Z) - Noise Doesn't Lie: Towards Universal Detection of Deep Inpainting [42.189768203036394]
We make the first attempt towards universal detection of deep inpainting, where the detection network can generalize well.
Our approach outperforms existing detection methods by a large margin and generalizes well to unseen deep inpainting techniques.
arXiv Detail & Related papers (2021-06-03T01:29:29Z) - Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real images to the human eye.
We propose a novel fake detection approach designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z) - Inpainting Transformer for Anomaly Detection [0.0]
Inpainting Transformer (InTra) is trained to inpaint covered patches in a large sequence of image patches.
InTra achieves better than state-of-the-art results on the MVTec AD dataset for detection and localization.
arXiv Detail & Related papers (2021-04-28T17:27:44Z)
This list is automatically generated from the titles and abstracts of the papers in this site.