R$^2$BD: A Reconstruction-Based Method for Generalizable and Efficient Detection of Fake Images
- URL: http://arxiv.org/abs/2601.08867v1
- Date: Sun, 11 Jan 2026 02:31:51 GMT
- Title: R$^2$BD: A Reconstruction-Based Method for Generalizable and Efficient Detection of Fake Images
- Authors: Qingyu Liu, Zhongjie Ba, Jianmin Guo, Qiu Wang, Zhibo Wang, Jie Shi, Kui Ren,
- Abstract summary: We propose a novel fake image detection framework, called R$^2$BD, built upon two key designs. Experiments on the benchmark from 10 public datasets demonstrate that R$^2$BD is over 22$\times$ faster than existing reconstruction-based methods. In cross-dataset evaluations, it outperforms state-of-the-art methods by an average of 13.87%.
- Score: 31.9904761238593
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, reconstruction-based methods have gained attention for AIGC image detection. These methods leverage pre-trained diffusion models to reconstruct inputs and measure residuals for distinguishing real from fake images. Their key advantage lies in reducing reliance on dataset-specific artifacts and improving generalization under distribution shifts. However, they are limited by significant inefficiency due to multi-step inversion and reconstruction, and their reliance on diffusion backbones further limits generalization to other generative paradigms such as GANs. In this paper, we propose a novel fake image detection framework, called R$^2$BD, built upon two key designs: (1) G-LDM, a unified reconstruction model that simulates the generation behaviors of VAEs, GANs, and diffusion models, thereby broadening the detection scope beyond prior diffusion-only approaches; and (2) a residual bias calculation module that distinguishes real and fake images in a single inference step, which is a significant efficiency improvement over existing methods that typically require 20$+$ steps. Extensive experiments on the benchmark from 10 public datasets demonstrate that R$^2$BD is over 22$\times$ faster than existing reconstruction-based methods while achieving superior detection accuracy. In cross-dataset evaluations, it outperforms state-of-the-art methods by an average of 13.87\%, showing strong efficiency and generalization across diverse generative methods. The code and dataset used for evaluation are available at https://github.com/QingyuLiu/RRBD.
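The core idea described in the abstract — reconstruct an input with a pre-trained generative model and use the reconstruction residual to separate real from fake images — can be illustrated with a minimal, framework-agnostic sketch. This is an assumption-laden toy, not the authors' R$^2$BD implementation: the `toy_reconstruct` low-pass filter stands in for a real pre-trained reconstruction model, and the threshold is arbitrary. The premise is that generated images lie close to the generator's manifold and reconstruct with low error, while real images reconstruct with higher error.

```python
import numpy as np

def reconstruction_residual(image, reconstruct):
    """Per-pixel absolute residual between an image and its reconstruction."""
    return np.abs(image - reconstruct(image))

def classify(image, reconstruct, threshold):
    """Flag an image as fake when its mean reconstruction residual is low.

    Reconstruction-based detectors assume generated images sit near the
    generator's manifold (low residual), while real images sit off it
    (higher residual).
    """
    score = reconstruction_residual(image, reconstruct).mean()
    return "fake" if score < threshold else "real"

def toy_reconstruct(image):
    """Toy stand-in for a pre-trained reconstruction model: a 5x5 box blur.

    Smooth, synthetic-like inputs survive it almost unchanged; highly
    textured, real-like inputs do not.
    """
    padded = np.pad(image, 2, mode="edge")
    out = np.zeros_like(image)
    h, w = image.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + 5, j:j + 5].mean()
    return out

rng = np.random.default_rng(0)
smooth = np.tile(np.linspace(0, 1, 32), (32, 1))      # "on-manifold" input
noisy = smooth + 0.3 * rng.standard_normal((32, 32))  # "off-manifold" input
print(classify(smooth, toy_reconstruct, 0.05))  # low residual  -> "fake"
print(classify(noisy, toy_reconstruct, 0.05))   # high residual -> "real"
```

In the paper's setting, the reconstruction step is the expensive part; R$^2$BD's stated contribution is collapsing the usual 20+ inversion/reconstruction steps into a single inference step via its residual bias calculation module.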
Related papers
- Revisiting Reconstruction-based AI-generated Image Detection: A Geometric Perspective [50.83711509908479]
We introduce the Jacobian-Spectral Lower Bound for reconstruction error from a geometric perspective. We show that real images off the reconstruction manifold exhibit a non-trivial error lower bound, while generated images on the manifold have near-zero error. We propose ReGap, a training-free method that computes dynamic reconstruction error by leveraging structured editing operations.
arXiv Detail & Related papers (2025-10-29T03:45:03Z) - Single-Step Reconstruction-Free Anomaly Detection and Segmentation via Diffusion Models [1.1487074612765584]
We introduce Reconstruction-free Anomaly Detection with Attention-based diffusion models in Real-time (RADAR). RADAR overcomes the limitations of reconstruction-based anomaly detection. We evaluate RADAR on real-world 3D-printed material and the MVTec-AD dataset.
arXiv Detail & Related papers (2025-08-06T18:56:08Z) - Reconstruction-Free Anomaly Detection with Diffusion Models [30.099399014193573]
We propose a novel inversion-based anomaly detection (AD) approach - detection via noising in latent space. In approximating the original probability flow ODE, we only enforce very few inversion steps to noise the clean image. As the added noise is adaptively derived with the learned diffusion model, the original features for the clean testing image can still be leveraged to yield high detection accuracy.
arXiv Detail & Related papers (2025-04-08T04:23:43Z) - One-for-More: Continual Diffusion Model for Anomaly Detection [63.50488826645681]
Anomaly detection methods utilize diffusion models to generate or reconstruct normal samples when given arbitrary anomaly images. Our study found that the diffusion model suffers from severe ``faithfulness hallucination'' and ``catastrophic forgetting''. We propose a continual diffusion model that uses gradient projection to achieve stable continual learning.
arXiv Detail & Related papers (2025-02-27T07:47:27Z) - PDA: Generalizable Detection of AI-Generated Images via Post-hoc Distribution Alignment [16.98090845687867]
Post-hoc Distribution Alignment (PDA) is a novel approach for the generalizable detection of AI-generated images. Our work provides a flexible and effective solution for real-world fake image detection, advancing the generalization ability of detection systems.
arXiv Detail & Related papers (2025-02-15T13:55:34Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network with a connection to the stable diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - ExposureDiffusion: Learning to Expose for Low-light Image Enhancement [87.08496758469835]
This work addresses the issue by seamlessly integrating a diffusion model with a physics-based exposure model.
Our method obtains significantly improved performance and reduced inference time compared with vanilla diffusion models.
The proposed framework can work with real-paired datasets, SOTA noise models, and different backbone networks.
arXiv Detail & Related papers (2023-07-15T04:48:35Z) - Hierarchical Integration Diffusion Model for Realistic Image Deblurring [71.76410266003917]
Diffusion models (DMs) have been introduced in image deblurring and exhibited promising performance.
We propose the Hierarchical Integration Diffusion Model (HI-Diff), for realistic image deblurring.
Experiments on synthetic and real-world blur datasets demonstrate that our HI-Diff outperforms state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T12:18:20Z) - DC4L: Distribution Shift Recovery via Data-Driven Control for Deep Learning Models [4.374569172244273]
We propose to use control for learned models to recover from distribution shifts online.
Our method applies a sequence of semantic-preserving transformations to bring the shifted data closer in distribution to the training set.
We show that our method generalizes to composites of shifts from the ImageNet-C benchmark, achieving improvements in average accuracy of up to 9.81%.
arXiv Detail & Related papers (2023-02-20T22:06:26Z) - Deblurring via Stochastic Refinement [85.42730934561101]
We present an alternative framework for blind deblurring based on conditional diffusion models.
Our method is competitive in terms of distortion metrics such as PSNR.
arXiv Detail & Related papers (2021-12-05T04:36:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.