GenDet: Towards Good Generalizations for AI-Generated Image Detection
- URL: http://arxiv.org/abs/2312.08880v1
- Date: Tue, 12 Dec 2023 11:20:45 GMT
- Title: GenDet: Towards Good Generalizations for AI-Generated Image Detection
- Authors: Mingjian Zhu, Hanting Chen, Mouxiao Huang, Wei Li, Hailin Hu, Jie Hu,
Yunhe Wang
- Abstract summary: Existing methods can effectively detect images generated by seen generators, but it is challenging to detect those generated by unseen generators.
This paper addresses the unseen-generator detection problem by considering this task from the perspective of anomaly detection.
Our method encourages smaller output discrepancies between the student and the teacher models for real images while aiming for larger discrepancies for fake images.
- Score: 27.899521298845357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The misuse of AI imagery can have harmful societal effects, prompting the
creation of detectors to combat issues like the spread of fake news. Existing
methods can effectively detect images generated by seen generators, but it is
challenging to detect those generated by unseen generators. Such methods do not
focus on amplifying the discrepancy between detector outputs for real and fake
images, so the output distributions of real and fake samples remain close, which
makes classification harder for images from unseen generators. This paper
addresses the unseen-generator detection problem by
considering this task from the perspective of anomaly detection and proposes an
adversarial teacher-student discrepancy-aware framework. Our method encourages
smaller output discrepancies between the student and the teacher models for
real images while aiming for larger discrepancies for fake images. We employ
adversarial learning to train a feature augmenter, which promotes smaller
discrepancies between teacher and student networks when the inputs are fake
images. Our method achieves state-of-the-art results on public benchmarks, and
visualization results show that it maintains a large output discrepancy across
various types of generators.
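For intuition, the following is a minimal PyTorch sketch of discrepancy-based scoring with an adversarial feature augmenter in the spirit of the abstract above; the network shapes, margin value, and training loop are illustrative assumptions, not the authors' released implementation.
```python
# Minimal sketch of teacher-student discrepancy detection (illustrative only).
import torch
import torch.nn as nn


def make_encoder(out_dim: int = 128) -> nn.Module:
    # Tiny CNN stand-in for the teacher/student backbone (assumed architecture).
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim),
    )


teacher = make_encoder()           # frozen reference network
student = make_encoder()           # trained to mimic the teacher on real images
augmenter = nn.Linear(128, 128)    # adversarial feature augmenter (assumed form)
for p in teacher.parameters():
    p.requires_grad_(False)


def discrepancy(x: torch.Tensor, augment: bool = False) -> torch.Tensor:
    """Per-image teacher-student output discrepancy, used as the fake-ness score."""
    t, s = teacher(x), student(x)
    if augment:
        s = augmenter(s)
    return ((t - s) ** 2).mean(dim=1)


def detector_loss(real, fake, margin: float = 1.0):
    # Student objective: small discrepancy on real images,
    # discrepancy pushed above a margin on fake images.
    return discrepancy(real).mean() + torch.relu(margin - discrepancy(fake)).mean()


def augmenter_loss(fake):
    # Adversarial objective: the augmenter tries to shrink the discrepancy on
    # fake images, forcing the student to keep the gap large.
    return discrepancy(fake, augment=True).mean()


# Alternate updates between the detector (student) and the augmenter.
opt_det = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_aug = torch.optim.Adam(augmenter.parameters(), lr=1e-4)
real_batch = torch.randn(8, 3, 64, 64)   # placeholder batches
fake_batch = torch.randn(8, 3, 64, 64)

opt_det.zero_grad()
detector_loss(real_batch, fake_batch).backward()
opt_det.step()

opt_aug.zero_grad()
augmenter_loss(fake_batch).backward()
opt_aug.step()

scores = discrepancy(fake_batch)         # higher score => more likely AI-generated
```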
Related papers
- Semi-Truths: A Large-Scale Dataset of AI-Augmented Images for Evaluating Robustness of AI-Generated Image Detectors [62.63467652611788]
We introduce SEMI-TRUTHS, featuring 27,600 real images, 223,400 masks, and 1,472,700 AI-augmented images.
Each augmented image is accompanied by metadata for standardized and targeted evaluation of detector robustness.
Our findings suggest that state-of-the-art detectors exhibit varying sensitivities to the types and degrees of perturbations, data distributions, and augmentation methods used.
arXiv Detail & Related papers (2024-11-12T01:17:27Z) - Detecting AutoEncoder is Enough to Catch LDM Generated Images [0.0]
This paper proposes a novel method for detecting images generated by Latent Diffusion Models (LDM) by identifying artifacts introduced by their autoencoders.
By training a detector to distinguish between real images and those reconstructed by the LDM autoencoder, the method enables detection of generated images without directly training on them.
Experimental results show high detection accuracy with minimal false positives, making this approach a promising tool for combating fake images.
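As a rough illustration of this idea, the sketch below round-trips real images through a Stable Diffusion VAE via the diffusers AutoencoderKL API and trains a toy binary classifier on real vs. reconstructed images; the checkpoint name and classifier are assumptions for illustration, not the paper's setup.
```python
# Illustrative sketch: learn to spot LDM-autoencoder artifacts without any
# LDM-generated training images.
import torch
import torch.nn as nn
from diffusers import AutoencoderKL

vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse").eval()


@torch.no_grad()
def reconstruct(images: torch.Tensor) -> torch.Tensor:
    """Round-trip images (scaled to [-1, 1]) through the LDM autoencoder."""
    latents = vae.encode(images).latent_dist.sample()
    return vae.decode(latents).sample


classifier = nn.Sequential(  # toy stand-in detector
    nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 1),
)
opt = torch.optim.Adam(classifier.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

real = torch.rand(4, 3, 256, 256) * 2 - 1   # placeholder real batch
recon = reconstruct(real)                   # carries the VAE's reconstruction artifacts
x = torch.cat([real, recon])
y = torch.cat([torch.zeros(4, 1), torch.ones(4, 1)])  # 0 = real, 1 = reconstructed

opt.zero_grad()
bce(classifier(x), y).backward()
opt.step()
# At test time, a high classifier score flags an image as likely LDM-generated,
# even though no LDM-generated images were used for training.
```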
arXiv Detail & Related papers (2024-11-10T12:17:32Z) - RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z) - Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods either detect visual artifacts in generated images or learn discriminative features from both real and generated images through large-scale training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
By finding the commonality of real images and mapping them to a dense subspace in feature space, the goal is that generated images, regardless of their generative model, are then projected outside the subspace.
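The one-class flavor of this idea can be sketched as follows; the Gaussian fit and Mahalanobis-distance scoring are generic assumptions for illustration, not the specific mapping proposed in the paper.
```python
# Toy one-class sketch: fit a compact region around real-image features and
# flag images whose features fall outside it.
import numpy as np


def fit_real_region(real_feats: np.ndarray):
    """Estimate mean and (regularized) inverse covariance of real-image features."""
    mu = real_feats.mean(axis=0)
    cov = np.cov(real_feats, rowvar=False) + 1e-6 * np.eye(real_feats.shape[1])
    return mu, np.linalg.inv(cov)


def anomaly_score(feats: np.ndarray, mu: np.ndarray, cov_inv: np.ndarray):
    """Mahalanobis distance to the real-image cluster; larger => more likely generated."""
    d = feats - mu
    return np.einsum("ij,jk,ik->i", d, cov_inv, d)


rng = np.random.default_rng(0)
real_feats = rng.normal(0.0, 1.0, size=(500, 16))   # placeholder backbone features
fake_feats = rng.normal(2.0, 1.5, size=(100, 16))   # drift away from the real region

mu, cov_inv = fit_real_region(real_feats)
print(anomaly_score(fake_feats, mu, cov_inv).mean()
      > anomaly_score(real_feats, mu, cov_inv).mean())  # True in this toy setup
```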
arXiv Detail & Related papers (2023-11-02T03:09:37Z) - Towards Universal Fake Image Detectors that Generalize Across Generative Models [36.18427140427858]
We show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models.
We propose to perform real-vs-fake classification without learning, using a feature space not explicitly trained to distinguish real from fake images.
arXiv Detail & Related papers (2023-02-20T18:59:04Z) - SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for
Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z) - Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real images to the human eye.
We propose a novel fake detection method that re-synthesizes test images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
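A hedged sketch of re-synthesis-based detection (with generic stand-in networks rather than the paper's architecture) is shown below.
```python
# Toy sketch: re-synthesize the input and classify the residual "visual cues".
import torch
import torch.nn as nn

# Stand-in for a re-synthesis model; in the paper's setting such a model is
# trained on real images, so real inputs leave small residuals.
resynthesizer = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
cue_classifier = nn.Sequential(  # classifies the residual between input and re-synthesis
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
)


def fake_logit(x: torch.Tensor) -> torch.Tensor:
    residual = x - resynthesizer(x)
    return cue_classifier(residual)


score = torch.sigmoid(fake_logit(torch.rand(1, 3, 128, 128)))  # higher => more likely fake
```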
arXiv Detail & Related papers (2021-05-29T21:22:24Z) - Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z) - What makes fake images detectable? Understanding properties that
generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z) - Just Noticeable Difference for Machines to Generate Adversarial Images [0.34376560669160383]
The proposed method is based on a popular concept from experimental psychology called Just Noticeable Difference.
The adversarial images generated in this study look more natural than the output of state-of-the-art adversarial image generators.
arXiv Detail & Related papers (2020-01-29T19:42:35Z)