Towards Robust GAN-generated Image Detection: a Multi-view Completion
Representation
- URL: http://arxiv.org/abs/2306.01364v1
- Date: Fri, 2 Jun 2023 08:38:02 GMT
- Title: Towards Robust GAN-generated Image Detection: a Multi-view Completion
Representation
- Authors: Chi Liu, Tianqing Zhu, Sheng Shen, Wanlei Zhou
- Abstract summary: GAN-generated image detection has become the first line of defense against the malicious uses of machine-synthesized image manipulations such as deepfakes.
We propose a robust detection framework based on a novel multi-view image completion representation.
We evaluate the generalization ability of our framework across six popular GANs at different resolutions and its robustness against a broad range of perturbation attacks.
- Score: 27.483031588071942
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: GAN-generated image detection has become the first line of defense against
the malicious uses of machine-synthesized image manipulations such as
deepfakes. Although some existing detectors work well in detecting clean, known
GAN samples, their success is largely attributable to overfitting unstable
features such as frequency artifacts, which will cause failures when facing
unknown GANs or perturbation attacks. To overcome the issue, we propose a
robust detection framework based on a novel multi-view image completion
representation. The framework first learns various view-to-image tasks to model
the diverse distributions of genuine images. Frequency-irrelevant features can
be derived from the distributional discrepancies characterized by the
completion models; these features are stable, generalized, and robust for detecting
unknown fake patterns. Then, a multi-view classification is devised with
elaborated intra- and inter-view learning strategies to enhance view-specific
feature representation and cross-view feature aggregation, respectively. We
evaluated the generalization ability of our framework across six popular GANs
at different resolutions and its robustness against a broad range of
perturbation attacks. The results confirm our method's improved effectiveness,
generalization, and robustness over various baselines.
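Below is a minimal sketch, in PyTorch, of the idea the abstract describes: completion models are trained on degraded "views" of genuine images, their reconstruction discrepancies serve as frequency-irrelevant features, and view-specific features are aggregated for real/fake classification. The view definitions, network sizes, and fusion head are illustrative assumptions, not the authors' implementation.

```python
# Sketch of multi-view completion-based detection (assumed architecture details).
import torch
import torch.nn as nn

class CompletionNet(nn.Module):
    """Small encoder-decoder that reconstructs an image from one degraded view."""
    def __init__(self, in_ch=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x)

def make_views(img):
    """Hypothetical view-to-image tasks: a grayscale view and a center-masked view."""
    gray = img.mean(dim=1, keepdim=True).repeat(1, 3, 1, 1)
    masked = img.clone()
    h, w = img.shape[-2:]
    masked[..., h // 4: 3 * h // 4, w // 4: 3 * w // 4] = 0.0
    return {"gray": gray, "masked": masked}

class MultiViewDetector(nn.Module):
    """Turns per-view completion residuals into a single real/fake score."""
    def __init__(self, view_names):
        super().__init__()
        self.completers = nn.ModuleDict({v: CompletionNet() for v in view_names})
        self.view_heads = nn.ModuleDict({v: nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten()) for v in view_names})
        self.fuse = nn.Linear(16 * len(view_names), 1)  # cross-view aggregation

    def forward(self, img):
        feats = []
        for name, view in make_views(img).items():
            recon = self.completers[name](view)
            residual = (recon - img).abs()            # completion discrepancy
            feats.append(self.view_heads[name](residual))  # view-specific feature
        return self.fuse(torch.cat(feats, dim=1))     # logit: real vs. GAN-generated

if __name__ == "__main__":
    det = MultiViewDetector(["gray", "masked"])
    logit = det(torch.rand(2, 3, 64, 64))
    print(logit.shape)  # torch.Size([2, 1])
```

In this reading, the completion networks are trained only on genuine images, so a GAN-generated input yields atypical residuals across views; the fusion layer stands in for the paper's inter-view aggregation strategy.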
Related papers
- MFCLIP: Multi-modal Fine-grained CLIP for Generalizable Diffusion Face Forgery Detection [64.29452783056253]
The rapid development of photo-realistic face generation methods has raised significant concerns in society and academia.
Although existing approaches mainly capture face forgery patterns using the image modality, other modalities, such as fine-grained noise and text, are not fully explored.
We propose a novel multi-modal fine-grained CLIP (MFCLIP) model, which mines comprehensive and fine-grained forgery traces across image-noise modalities.
arXiv Detail & Related papers (2024-09-15T13:08:59Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where the discrepancy between authentic and manipulated images is increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - DiAD: A Diffusion-based Framework for Multi-class Anomaly Detection [55.48770333927732]
We propose a Diffusion-based Anomaly Detection (DiAD) framework for multi-class anomaly detection.
It consists of a pixel-space autoencoder, a latent-space Semantic-Guided (SG) network connected to Stable Diffusion's denoising network, and a feature-space pre-trained feature extractor.
Experiments on MVTec-AD and VisA datasets demonstrate the effectiveness of our approach.
arXiv Detail & Related papers (2023-12-11T18:38:28Z) - DiG-IN: Diffusion Guidance for Investigating Networks -- Uncovering Classifier Differences, Neuron Visualisations, and Visual Counterfactual Explanations [35.458709912618176]
Deep learning has led to huge progress in complex image classification tasks like ImageNet, but also to unexpected failure modes, e.g. via spurious features.
For safety-critical tasks the black-box nature of their decisions is problematic, and explanations or at least methods which make decisions plausible are needed urgently.
We address these problems by generating images that optimize a classifier-derived objective using a framework for guided image generation.
arXiv Detail & Related papers (2023-11-29T17:35:29Z) - A Dual Attentive Generative Adversarial Network for Remote Sensing Image
Change Detection [6.906936669510404]
We propose a dual attentive generative adversarial network for achieving very high-resolution remote sensing image change detection tasks.
The DAGAN framework outperforms advanced methods on the LEVIR dataset, achieving 85.01% mean IoU and a 91.48% mean F1 score.
arXiv Detail & Related papers (2023-10-03T08:26:27Z) - Hierarchical Forgery Classifier On Multi-modality Face Forgery Clues [61.37306431455152]
We propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD).
The HFC-MFFD learns a robust patch-based hybrid representation to enhance forgery authentication in multiple-modality scenarios.
A specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance.
arXiv Detail & Related papers (2022-12-30T10:54:29Z) - ViewFool: Evaluating the Robustness of Visual Recognition to Adversarial
Viewpoints [42.64942578228025]
We propose a novel method called ViewFool to find adversarial viewpoints that mislead visual recognition models.
By encoding real-world objects as neural radiance fields (NeRF), ViewFool characterizes a distribution of diverse adversarial viewpoints.
arXiv Detail & Related papers (2022-10-08T03:06:49Z) - A Method for Evaluating Deep Generative Models of Images via Assessing
the Reproduction of High-order Spatial Context [9.00018232117916]
Generative adversarial networks (GANs) are one kind of deep generative model (DGM) that is widely employed.
In this work, we demonstrate several objective tests of images output by two popular GAN architectures.
We designed several stochastic context models (SCMs) of distinct image features that can be recovered after generation by a trained GAN.
arXiv Detail & Related papers (2021-11-24T15:58:10Z) - Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real images to the human eye.
We propose a novel fake detection method that is designed to re-synthesize testing images and extract visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z) - Camera Invariant Feature Learning for Generalized Face Anti-spoofing [95.30490139294136]
We describe a framework that eliminates the influence of inherent variance from acquisition cameras at the feature level.
Experiments show that the proposed method can achieve better performance in both intra-dataset and cross-dataset settings.
arXiv Detail & Related papers (2021-01-25T13:40:43Z)