Detection, Attribution and Localization of GAN Generated Images
- URL: http://arxiv.org/abs/2007.10466v1
- Date: Mon, 20 Jul 2020 20:49:34 GMT
- Title: Detection, Attribution and Localization of GAN Generated Images
- Authors: Michael Goebel, Lakshmanan Nataraj, Tejaswi Nanjundaswamy, Tajuddin
Manhar Mohammed, Shivkumar Chandrasekaran and B.S. Manjunath
- Abstract summary: We propose a novel approach to detect, attribute and localize GAN generated images.
A deep learning network is then trained on these features to detect, attribute and localize these GAN generated/manipulated images.
A large scale evaluation of our approach on 5 GAN datasets shows promising results in detecting GAN generated images.
- Score: 24.430919035100317
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advances in Generative Adversarial Networks (GANs) have led to the
creation of realistic-looking digital images that pose a major challenge to
their detection by humans or computers. GANs are used in a wide range of tasks,
from modifying small attributes of an image (StarGAN [14]), transferring
attributes between image pairs (CycleGAN [91]), as well as generating entirely
new images (ProGAN [36], StyleGAN [37], SPADE/GauGAN [64]). In this paper, we
propose a novel approach to detect, attribute and localize GAN generated images
that combines image features with deep learning methods. For every image,
co-occurrence matrices are computed on neighborhood pixels of RGB channels in
different directions (horizontal, vertical and diagonal). A deep learning
network is then trained on these features to detect, attribute and localize
these GAN generated/manipulated images. A large scale evaluation of our
approach on 5 GAN datasets comprising over 2.76 million images (ProGAN,
StarGAN, CycleGAN, StyleGAN and SPADE/GauGAN) shows promising results in
detecting GAN generated images.
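The feature step described in the abstract — co-occurrence matrices computed over neighboring pixel values of each RGB channel in horizontal, vertical and diagonal directions — can be sketched as follows. This is a minimal illustration, not the paper's implementation; the function name, offset convention, and the final stacking layout are assumptions.

```python
import numpy as np

def cooccurrence_matrix(channel, offset):
    """Count co-occurrences of 8-bit pixel values at a given offset.

    channel: 2D uint8 array (one RGB channel).
    offset:  (dy, dx) with non-negative entries, e.g. (0, 1) horizontal,
             (1, 0) vertical, (1, 1) diagonal.
    Returns a 256x256 matrix C where C[a, b] counts pixel pairs whose
    values are a and b at the chosen relative position.
    """
    dy, dx = offset
    h, w = channel.shape
    a = channel[:h - dy, :w - dx]   # reference pixels
    b = channel[dy:, dx:]           # their neighbors in the chosen direction
    C = np.zeros((256, 256), dtype=np.int64)
    np.add.at(C, (a.ravel(), b.ravel()), 1)  # accumulate pair counts
    return C

# One matrix per RGB channel and direction; the stack would serve as
# input to the deep network (layout here is illustrative).
rng = np.random.default_rng(0)
image = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
directions = [(0, 1), (1, 0), (1, 1)]
features = np.stack([cooccurrence_matrix(image[:, :, c], d)
                     for c in range(3) for d in directions])  # (9, 256, 256)
```

Because GAN upsampling leaves statistical traces in local pixel relationships, these pair-count matrices expose artifacts that a CNN can learn to classify even when the images look realistic to humans.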
Related papers
- Contrasting Deepfakes Diffusion via Contrastive Learning and Global-Local Similarities [88.398085358514]
Contrastive Deepfake Embeddings (CoDE) is a novel embedding space specifically designed for deepfake detection.
CoDE is trained via contrastive learning by additionally enforcing global-local similarities.
arXiv Detail & Related papers (2024-07-29T18:00:10Z)
- Image Deblurring using GAN [0.0]
This project focuses on the application of Generative Adversarial Network (GAN) in image deblurring.
The project defines a GAN model and trains it with the GoPro dataset.
The network recovers sharper pixels in the image, achieving an average Peak Signal-to-Noise Ratio (PSNR) of 29.3 dB and a Structural Similarity Index Measure (SSIM) of 0.72.
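The PSNR figure quoted above is a simple function of the mean squared error between the restored and reference images. A minimal sketch, assuming 8-bit images (peak value 255); the function name is illustrative:

```python
import numpy as np

def psnr(reference, restored, peak=255.0):
    """Peak Signal-to-Noise Ratio in dB: 10 * log10(peak^2 / MSE)."""
    diff = reference.astype(np.float64) - restored.astype(np.float64)
    mse = np.mean(diff ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10(peak ** 2 / mse)
```

Higher is better; values around 30 dB, as reported here, indicate a restoration close to the ground truth.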
arXiv Detail & Related papers (2023-12-15T02:43:30Z)
- GH-Feat: Learning Versatile Generative Hierarchical Features from GANs [61.208757845344074]
We show that a generative feature learned from image synthesis exhibits great potential for solving a wide range of computer vision tasks.
We first train an encoder by considering the pretrained StyleGAN generator as a learned loss function.
The visual features produced by our encoder, termed Generative Hierarchical Features (GH-Feat), align closely with the layer-wise GAN representations.
arXiv Detail & Related papers (2023-01-12T21:59:46Z)
- Using a Conditional Generative Adversarial Network to Control the Statistical Characteristics of Generated Images for IACT Data Analysis [55.41644538483948]
We divide images into several classes according to the value of some property of the image, and then specify the required class when generating new images.
In the case of images from Imaging Atmospheric Cherenkov Telescopes (IACTs), an important property is the total brightness of all image pixels (image size).
We used a cGAN technique to generate images similar to those obtained in the TAIGA-IACT experiment.
arXiv Detail & Related papers (2022-11-28T22:30:33Z)
- Joint Learning of Deep Texture and High-Frequency Features for Computer-Generated Image Detection [24.098604827919203]
We propose a joint learning strategy with deep texture and high-frequency features for CG image detection.
A semantic segmentation map is generated to guide the affine transformation operation.
The combination of the original image and the high-frequency components of the original and rendered images are fed into a multi-branch neural network equipped with attention mechanisms.
arXiv Detail & Related papers (2022-09-07T17:30:40Z)
- Learning Hierarchical Graph Representation for Image Manipulation Detection [50.04902159383709]
The objective of image manipulation detection is to identify and locate the manipulated regions in the images.
Recent approaches mostly adopt the sophisticated Convolutional Neural Networks (CNNs) to capture the tampering artifacts left in the images.
We propose a hierarchical Graph Convolutional Network (HGCN-Net), which consists of two parallel branches.
arXiv Detail & Related papers (2022-01-15T01:54:25Z)
- A Method for Evaluating Deep Generative Models of Images via Assessing the Reproduction of High-order Spatial Context [9.00018232117916]
Generative adversarial networks (GANs) are one kind of DGM which are widely employed.
In this work, we demonstrate several objective tests of images output by two popular GAN architectures.
We designed several stochastic context models (SCMs) of distinct image features that can be recovered after generation by a trained GAN.
arXiv Detail & Related papers (2021-11-24T15:58:10Z)
- Low-Rank Subspaces in GANs [101.48350547067628]
This work introduces low-rank subspaces that enable more precise control of GAN generation.
LowRankGAN is able to find the low-dimensional representation of the attribute manifold.
Experiments on state-of-the-art GAN models (including StyleGAN2 and BigGAN) trained on various datasets demonstrate the effectiveness of our LowRankGAN.
arXiv Detail & Related papers (2021-06-08T16:16:32Z)
- Towards Discovery and Attribution of Open-world GAN Generated Images [18.10496076534083]
We present an iterative algorithm for discovering images generated from previously unseen GANs.
Our algorithm consists of multiple components including network training, out-of-distribution detection, clustering, merge and refine steps.
Our experiments demonstrate that our approach can effectively discover new GANs and can be used in an open-world setup.
arXiv Detail & Related papers (2021-05-10T18:00:13Z)
- Generative Hierarchical Features from Synthesizing Images [65.66756821069124]
We show that learning to synthesize images can bring remarkable hierarchical visual features that are generalizable across a wide range of applications.
The visual feature produced by our encoder, termed Generative Hierarchical Feature (GH-Feat), has strong transferability to both generative and discriminative tasks.
arXiv Detail & Related papers (2020-07-20T18:04:14Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information and is not responsible for any consequences of its use.