GenDet: Towards Good Generalizations for AI-Generated Image Detection
- URL: http://arxiv.org/abs/2312.08880v1
- Date: Tue, 12 Dec 2023 11:20:45 GMT
- Title: GenDet: Towards Good Generalizations for AI-Generated Image Detection
- Authors: Mingjian Zhu, Hanting Chen, Mouxiao Huang, Wei Li, Hailin Hu, Jie Hu,
Yunhe Wang
- Abstract summary: Existing methods can effectively detect images generated by seen generators, but it is challenging to detect those generated by unseen generators.
This paper addresses the unseen-generator detection problem by considering this task from the perspective of anomaly detection.
Our method encourages smaller output discrepancies between the student and the teacher models for real images while aiming for larger discrepancies for fake images.
- Score: 27.899521298845357
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The misuse of AI imagery can have harmful societal effects, prompting the
creation of detectors to combat issues like the spread of fake news. Existing
methods can effectively detect images generated by seen generators, but it is
challenging to detect those generated by unseen generators. These methods do not
focus on amplifying the output discrepancy when detectors process real versus
fake images, so the output distributions of real and fake samples lie close
together, which makes classification harder for images from unseen generators.
This paper addresses the unseen-generator detection problem by
considering this task from the perspective of anomaly detection and proposes an
adversarial teacher-student discrepancy-aware framework. Our method encourages
smaller output discrepancies between the student and the teacher models for
real images while aiming for larger discrepancies for fake images. We employ
adversarial learning to train a feature augmenter, which promotes smaller
discrepancies between teacher and student networks when the inputs are fake
images. Our method achieves state-of-the-art results on public benchmarks, and
the visualization results show that a large output discrepancy is maintained
across various types of generators.
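A minimal PyTorch-style sketch of the teacher-student discrepancy idea described in the abstract is given below; the backbone, margin, augmenter placement, and optimizers are illustrative assumptions rather than the paper's actual configuration.
```python
# Illustrative sketch of an adversarial teacher-student discrepancy detector.
# Architectures, margin, and optimizers are assumptions, not the paper's settings.
import torch
import torch.nn as nn

def backbone(out_dim=128):
    # Tiny stand-in feature extractor; the paper would use a far stronger backbone.
    return nn.Sequential(
        nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
        nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, out_dim),
    )

teacher = backbone().eval()              # frozen teacher
for p in teacher.parameters():
    p.requires_grad_(False)
student = backbone()                     # trained to track the teacher on real images only
augmenter = nn.Sequential(               # adversary: perturbs fake-image features
    nn.Linear(128, 128), nn.ReLU(), nn.Linear(128, 128))

opt_student = torch.optim.Adam(student.parameters(), lr=1e-4)
opt_augment = torch.optim.Adam(augmenter.parameters(), lr=1e-4)
margin = 1.0                             # assumed hinge margin

def discrepancy(a, b):
    return ((a - b) ** 2).mean(dim=1)    # per-sample teacher-student distance

def train_step(real, fake):
    # Student: small discrepancy on real images, large (hinged) discrepancy on fakes,
    # even when the frozen augmenter tries to make fake features look "real".
    d_real = discrepancy(student(real), teacher(real))
    d_fake = discrepancy(student(fake) + augmenter(teacher(fake)).detach(), teacher(fake))
    loss_student = d_real.mean() + torch.relu(margin - d_fake).mean()
    opt_student.zero_grad(); loss_student.backward(); opt_student.step()

    # Augmenter: adversarially shrink the fake-image discrepancy.
    d_fake_adv = discrepancy(student(fake).detach() + augmenter(teacher(fake)), teacher(fake))
    opt_augment.zero_grad(); d_fake_adv.mean().backward(); opt_augment.step()

# At test time the per-sample discrepancy acts as the anomaly score:
# higher discrepancy(student(x), teacher(x)) => more likely AI-generated.
```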
Related papers
- RIGID: A Training-free and Model-Agnostic Framework for Robust AI-Generated Image Detection [60.960988614701414]
RIGID is a training-free and model-agnostic method for robust AI-generated image detection.
RIGID significantly outperforms existing training-based and training-free detectors.
arXiv Detail & Related papers (2024-05-30T14:49:54Z)
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces in them are mostly generated with GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Detecting Generated Images by Real Images Only [64.12501227493765]
Existing generated-image detection methods either detect visual artifacts in generated images or learn discriminative features from both real and generated images through massive training.
This paper approaches the generated image detection problem from a new perspective: Start from real images.
The method finds the commonality of real images and maps them to a dense subspace in feature space, so that generated images, regardless of their generative model, are projected outside this subspace.
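As a rough illustration of this "real images only" idea, the sketch below fits a simple Gaussian model to features of real images and scores test images by their distance from it; the Gaussian/Mahalanobis choice and the feature extractor are stand-ins, not the paper's learned mapping.
```python
# Simplified stand-in: model only real images in a feature space and score how
# far a test image falls from them. Assumed, not the paper's exact construction.
import numpy as np

def fit_real_subspace(real_features):            # real_features: (N, D) array
    mu = real_features.mean(axis=0)
    cov = np.cov(real_features, rowvar=False) + 1e-6 * np.eye(real_features.shape[1])
    return mu, np.linalg.inv(cov)

def fakeness_score(x_feature, mu, cov_inv):      # higher => further from real images
    d = x_feature - mu
    return float(d @ cov_inv @ d)

# Usage: extract features with any frozen backbone, fit on real images only,
# then threshold fakeness_score(...) to flag likely generated images.
rng = np.random.default_rng(0)
mu, cov_inv = fit_real_subspace(rng.normal(size=(1000, 64)))
print(fakeness_score(rng.normal(size=64), mu, cov_inv))
```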
arXiv Detail & Related papers (2023-11-02T03:09:37Z)
- Towards Universal Fake Image Detectors that Generalize Across Generative Models [36.18427140427858]
We show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models.
We propose to perform real-vs-fake classification without learning, using a feature space not explicitly trained to distinguish real from fake images.
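One possible reading of this recipe is nearest-neighbour real-vs-fake voting in a frozen, general-purpose feature space; in the sketch below the feature banks are random placeholders, and the backbone and distance metric are assumptions rather than the paper's exact setup.
```python
# Sketch of classification "without learning": nearest-neighbour voting between
# a real-feature bank and a fake-feature bank in a frozen feature space.
import numpy as np

def nn_is_fake(query, real_bank, fake_bank):
    """Return True if the query feature is closer to the fake bank than the real bank."""
    d_real = np.linalg.norm(real_bank - query, axis=1).min()
    d_fake = np.linalg.norm(fake_bank - query, axis=1).min()
    return d_fake < d_real

rng = np.random.default_rng(1)
real_bank = rng.normal(0.0, 1.0, size=(500, 32))   # features of known real images
fake_bank = rng.normal(0.5, 1.0, size=(500, 32))   # features of known fake images
print(nn_is_fake(rng.normal(0.5, 1.0, size=32), real_bank, fake_bank))
```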
arXiv Detail & Related papers (2023-02-20T18:59:04Z)
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
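As a loose sketch of what a bounded regression-to-prototype loss could look like, the snippet below pulls each perturbed-face embedding toward an assigned prototype with a saturating penalty; the prototype layout, tanh bound, and distance are assumptions, not SeeABLE's actual formulation.
```python
# Illustrative bounded regression-to-prototype loss (assumed form, not SeeABLE's exact loss).
import torch
import torch.nn.functional as F

def bounded_prototype_loss(embeddings, prototype_ids, prototypes, bound=1.0):
    """Pull each perturbed-face embedding toward its assigned prototype,
    with the per-sample penalty saturating at `bound`."""
    targets = prototypes[prototype_ids]                       # (B, D)
    dist = F.mse_loss(embeddings, targets, reduction="none").mean(dim=1)
    return (bound * torch.tanh(dist / bound)).mean()          # bounded penalty

prototypes = F.normalize(torch.randn(8, 128), dim=1)          # one per perturbation type
emb = torch.randn(16, 128, requires_grad=True)
ids = torch.randint(0, 8, (16,))
print(bounded_prototype_loss(emb, ids, prototypes).item())
```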
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake detection method that is designed to re-synthesize test images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
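A rough sketch of the re-synthesis idea is shown below: reconstruct the test image with a model trained on real images and classify the residual; the tiny autoencoder and residual classifier are placeholders, not the paper's re-synthesis models.
```python
# Re-synthesis sketch: real images should reconstruct faithfully (small residual),
# while generated images tend to leave larger, structured residuals.
import torch
import torch.nn as nn

autoencoder = nn.Sequential(                      # stand-in re-synthesis model
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)
residual_classifier = nn.Sequential(              # real/fake head on the residual
    nn.Conv2d(3, 8, 3, stride=2, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(8, 1),
)

def fake_logit(image):
    residual = image - autoencoder(image)         # visual cue extracted from re-synthesis
    return residual_classifier(residual)

print(fake_logit(torch.rand(1, 3, 64, 64)).shape)  # torch.Size([1, 1])
```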
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
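One way to picture this setup is averaging a classifier's predictions over GAN-synthesized variants of the input; in the sketch below the view generator and classifier are toy stand-ins for StyleGAN2 inversion plus latent perturbation and a real attribute classifier.
```python
# Ensembling over generative "views": average predictions over synthesized variants.
import torch

def ensemble_predict(image, classifier, make_views, n_views=8):
    """make_views(image, n) should return an (n, C, H, W) tensor of generated views."""
    views = make_views(image, n_views)
    inputs = torch.cat([image.unsqueeze(0), views], dim=0)   # include the original image
    with torch.no_grad():
        probs = torch.softmax(classifier(inputs), dim=1)
    return probs.mean(dim=0)                                  # averaged class probabilities

# Toy stand-ins for the classifier and the view generator:
toy_classifier = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(3 * 32 * 32, 2))
toy_views = lambda img, n: img.unsqueeze(0).repeat(n, 1, 1, 1) + 0.05 * torch.randn(n, *img.shape)
print(ensemble_predict(torch.rand(3, 32, 32), toy_classifier, toy_views))
```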
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
- Generative Damage Learning for Concrete Aging Detection using Auto-flight Images [0.7612218105739107]
We propose an anomaly detection method that uses unpaired image-to-image translation to map damaged images to reverse-aged fakes that approximate healthy conditions.
We apply our method in field studies and examine its usefulness for health monitoring of concrete damage.
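One way to picture the scoring step is sketched below: translate a possibly damaged photo toward a "healthy" appearance with an unpaired image-to-image generator (CycleGAN-style) and use the per-pixel change as a damage map; the untrained generator here is only a placeholder.
```python
# Anomaly scoring via unpaired translation: regions the translator has to change
# the most are the suspected damage. Generator below is an untrained stand-in.
import torch
import torch.nn as nn

damaged_to_healthy = nn.Sequential(               # stand-in for a trained translator
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 3, 3, padding=1),
)

def damage_map(image):
    with torch.no_grad():
        healthy = damaged_to_healthy(image)
    return (image - healthy).abs().mean(dim=1, keepdim=True)  # (B, 1, H, W) heatmap

score = damage_map(torch.rand(1, 3, 64, 64))
print(score.shape, float(score.mean()))           # per-pixel map and overall score
```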
arXiv Detail & Related papers (2020-06-27T02:25:12Z)
- Just Noticeable Difference for Machines to Generate Adversarial Images [0.34376560669160383]
The proposed method is based on a popular concept from experimental psychology called Just Noticeable Difference.
The adversarial images generated in this study look more natural than the output of state-of-the-art adversarial image generators.
arXiv Detail & Related papers (2020-01-29T19:42:35Z)