CNN Detection of GAN-Generated Face Images based on Cross-Band
Co-occurrences Analysis
- URL: http://arxiv.org/abs/2007.12909v2
- Date: Fri, 2 Oct 2020 12:43:28 GMT
- Title: CNN Detection of GAN-Generated Face Images based on Cross-Band
Co-occurrences Analysis
- Authors: Mauro Barni, Kassem Kallas, Ehsan Nowroozi, Benedetta Tondi
- Abstract summary: Last-generation GAN models make it possible to generate synthetic images that are visually indistinguishable from natural ones.
We propose a method for distinguishing GAN-generated from natural images by exploiting inconsistencies among spectral bands.
- Score: 34.41021278275805
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Last-generation GAN models make it possible to generate synthetic images
that are visually indistinguishable from natural ones, raising the need to develop
tools to distinguish fake from natural images and thus help preserve the
trustworthiness of digital images. While modern GAN models can generate very
high-quality images with no visible spatial artifacts, reconstruction of
consistent relationships among colour channels is expectedly more difficult. In
this paper, we propose a method for distinguishing GAN-generated from natural
images by exploiting inconsistencies among spectral bands, with specific focus
on the generation of synthetic face images. Specifically, we use cross-band
co-occurrence matrices, in addition to spatial co-occurrence matrices, as input
to a CNN model, which is trained to distinguish between real and synthetic
faces. The results of our experiments confirm the effectiveness of our approach,
which outperforms a similar detection technique based on intra-band spatial
co-occurrences only. The performance gain is particularly significant with
regard to robustness against post-processing, like geometric transformations,
filtering and contrast manipulations.
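The co-occurrence features described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: a spatial co-occurrence matrix counts pairs of pixel values within one band at a fixed spatial offset, while a cross-band co-occurrence matrix counts pairs of values taken from two different colour channels at the same location. The function names, the offset `(0, 1)`, and the choice of channel pair are assumptions made for the example.

```python
import numpy as np

def cooccurrence(a, b, levels=256):
    """Co-occurrence matrix counting paired values from arrays a and b
    (same shape, integer values in [0, levels))."""
    m = np.zeros((levels, levels), dtype=np.int64)
    np.add.at(m, (a.ravel(), b.ravel()), 1)  # increment one cell per pixel pair
    return m

def spatial_cooccurrence(band, offset=(0, 1), levels=256):
    """Intra-band co-occurrence at a spatial offset (dy, dx), dy, dx >= 0."""
    dy, dx = offset
    h, w = band.shape
    a = band[:h - dy, :w - dx]   # reference pixels
    b = band[dy:, dx:]           # neighbours shifted by the offset
    return cooccurrence(a, b, levels)

def cross_band_cooccurrence(img, bands=(0, 1), levels=256):
    """Cross-band co-occurrence between two channels at the same location."""
    return cooccurrence(img[..., bands[0]], img[..., bands[1]], levels)
```

In a detector along the lines of the paper, several such matrices (e.g. R-G and G-B cross-band plus per-band spatial ones) would be stacked as input channels to a CNN classifier.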
Related papers
- Intriguing properties of synthetic images: from generative adversarial
networks to diffusion models [19.448196464632]
It is important to gain insight into which image features better discriminate fake images from real ones.
In this paper we report on our systematic study of a large number of image generators of different families, aimed at discovering the most forensically relevant characteristics of real and generated images.
arXiv Detail & Related papers (2023-04-13T11:13:19Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs)
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images are fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Detecting High-Quality GAN-Generated Face Images using Neural Networks [23.388645531702597]
We propose a new strategy to differentiate GAN-generated images from authentic images by leveraging spectral band discrepancies.
In particular, we leverage the cross-band co-occurrence matrix together with the spatial co-occurrence matrix computed from face images.
We show that the performance boost is particularly significant and achieves more than 92% in different post-processing environments.
arXiv Detail & Related papers (2022-03-03T13:53:27Z)
- Exploring the Asynchronous of the Frequency Spectra of GAN-generated Facial Images [19.126496628073376]
We propose a new approach that explores the asynchronous frequency spectra of color channels, which is simple but effective for training both unsupervised and supervised learning models to distinguish GAN-based synthetic images.
Our experimental results show that the discrepancy of spectra in the frequency domain is a practical artifact to effectively detect various types of GAN-based generated images.
arXiv Detail & Related papers (2021-12-15T11:34:11Z)
- Ensembling with Deep Generative Views [72.70801582346344]
Generative models can synthesize "views" of artificial images that mimic real-world variations, such as changes in color or pose.
Here, we investigate whether such views can be applied to real images to benefit downstream analysis tasks such as image classification.
We use StyleGAN2 as the source of generative augmentations and investigate this setup on classification tasks involving facial attributes, cat faces, and cars.
arXiv Detail & Related papers (2021-04-29T17:58:35Z)
- IMAGINE: Image Synthesis by Image-Guided Model Inversion [79.4691654458141]
We introduce an inversion based method, denoted as IMAge-Guided model INvErsion (IMAGINE), to generate high-quality and diverse images.
We leverage the knowledge of image semantics from a pre-trained classifier to achieve plausible generations.
IMAGINE enables the synthesis procedure to simultaneously 1) enforce semantic specificity constraints during the synthesis, 2) produce realistic images without generator training, and 3) give users intuitive control over the generation process.
arXiv Detail & Related papers (2021-04-13T02:00:24Z)
- Identity-Aware CycleGAN for Face Photo-Sketch Synthesis and Recognition [61.87842307164351]
We first propose an Identity-Aware CycleGAN (IACycleGAN) model that applies a new perceptual loss to supervise the image generation network.
It improves CycleGAN on photo-sketch synthesis by paying more attention to the synthesis of key facial regions, such as eyes and nose.
We develop a mutual optimization procedure between the synthesis model and the recognition model, which iteratively synthesizes better images by IACycleGAN.
arXiv Detail & Related papers (2021-03-30T01:30:08Z)
- InterFaceGAN: Interpreting the Disentangled Face Representation Learned by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
- Deep Snow: Synthesizing Remote Sensing Imagery with Generative Adversarial Nets [0.5249805590164901]
Generative adversarial networks (GANs) can be used to generate realistic pervasive changes in remote sensing imagery.
We investigate some transformation quality metrics based on deep embedding of the generated and real images.
arXiv Detail & Related papers (2020-05-18T17:05:00Z)
- Pathological Retinal Region Segmentation From OCT Images Using Geometric Relation Based Augmentation [84.7571086566595]
We propose improvements over previous GAN-based medical image synthesis methods by jointly encoding the intrinsic relationship of geometry and shape.
The proposed method outperforms state-of-the-art segmentation methods on the public RETOUCH dataset having images captured from different acquisition procedures.
arXiv Detail & Related papers (2020-03-31T11:50:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information it contains and is not responsible for any consequences of its use.