Discovering Transferable Forensic Features for CNN-generated Images Detection
- URL: http://arxiv.org/abs/2208.11342v1
- Date: Wed, 24 Aug 2022 07:48:07 GMT
- Title: Discovering Transferable Forensic Features for CNN-generated Images Detection
- Authors: Keshigeyan Chandrasegaran, Ngoc-Trung Tran, Alexander Binder, Ngai-Man Cheung
- Abstract summary: We conduct the first analytical study to discover and understand transferable forensic features (T-FF) in universal detectors.
We propose a novel forensic feature relevance statistic (FF-RS) to quantify and discover T-FF in universal detectors.
Our investigations uncover an unexpected finding: color is a critical T-FF in universal detectors.
- Score: 100.12017277070576
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Visual counterfeits are increasingly causing an existential conundrum in
mainstream media as neural image synthesis methods rapidly evolve. Though
detection of such counterfeits has been a taxing problem in the image forensics
community, a recent class of forensic detectors -- universal detectors -- can,
surprisingly, spot counterfeit images regardless of generator
architectures, loss functions, training datasets, and resolutions. This
intriguing property suggests the possible existence of transferable forensic
features (T-FF) in universal detectors. In this work, we conduct the first
analytical study to discover and understand T-FF in universal detectors. Our
contributions are two-fold: 1) We propose a novel forensic feature relevance
statistic (FF-RS) to quantify and discover T-FF in universal detectors, and 2)
Our qualitative and quantitative investigations uncover an unexpected finding:
color is a critical T-FF in universal detectors. Code and models are available
at https://keshik6.github.io/transferable-forensic-features/
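The abstract does not spell out FF-RS, but the general recipe can be illustrated: score each feature-map channel of a trained universal detector by the relevance it contributes to the "fake" decision, then rank channels to find candidate T-FF. The sketch below is a minimal, hypothetical version that uses gradient x activation as a stand-in for the relevance propagation the authors build on; the toy detector, shapes, and names are all illustrative, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Toy stand-in for a universal detector (the paper probes a ResNet-style
# real/fake classifier); layer sizes here are illustrative only.
class ToyDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)  # logit > 0 => "fake"

    def forward(self, x):
        fmaps = self.backbone(x)            # (B, 32, H, W)
        pooled = fmaps.mean(dim=(2, 3))     # global average pooling
        return self.head(pooled), fmaps

def feature_relevance(model, images):
    """Per-channel relevance of the last conv feature maps for the 'fake'
    logit, approximated by gradient x activation (an LRP-like surrogate;
    the paper's exact FF-RS computation may differ)."""
    logits, fmaps = model(images)
    fmaps.retain_grad()                     # keep grads on a non-leaf tensor
    logits.sum().backward()
    rel = (fmaps.grad * fmaps).clamp(min=0).sum(dim=(2, 3)).mean(dim=0)
    return rel / rel.sum()                  # normalize to a distribution

model = ToyDetector().eval()
fake_batch = torch.rand(8, 3, 64, 64)       # placeholder "fake" images
ff_rs = feature_relevance(model, fake_batch)
print("candidate T-FF channels:", torch.topk(ff_rs, k=5).indices.tolist())
```

Channels whose relevance persists across images from unseen generators would be candidate T-FF; the color finding could similarly be probed by re-computing such a statistic on grayscale copies of the same inputs and watching which channels collapse.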
Related papers
- FakeInversion: Learning to Detect Images from Unseen Text-to-Image Models by Inverting Stable Diffusion [18.829659846356765]
We propose a new synthetic image detector that uses features obtained by inverting an open-source pre-trained Stable Diffusion model.
We show that these inversion features enable our detector to generalize well to unseen generators of high visual fidelity.
We introduce a new challenging evaluation protocol that uses reverse image search to mitigate stylistic and thematic biases in the detector evaluation.
arXiv Detail & Related papers (2024-06-12T19:14:58Z)
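A schematic of the inversion-feature idea, under the assumption that the detector consumes statistics of latents recovered by inverting an image through a pre-trained Stable Diffusion model; `invert_image` is a placeholder for real DDIM inversion, and all names here are illustrative rather than the authors' implementation.

```python
import torch
import torch.nn as nn

def invert_image(img: torch.Tensor) -> torch.Tensor:
    """Placeholder for DDIM inversion through a pre-trained Stable Diffusion
    model (e.g., via diffusers' DDIMInverseScheduler); here it returns a
    random latent of the right shape purely for illustration."""
    b, _, h, w = img.shape
    return torch.randn(b, 4, h // 8, w // 8)   # SD latents are 1/8 resolution

class InversionFeatureDetector(nn.Module):
    """Binary real/fake classifier over inversion-latent statistics."""
    def __init__(self):
        super().__init__()
        self.clf = nn.Sequential(nn.Linear(8, 32), nn.ReLU(), nn.Linear(32, 1))

    def forward(self, img):
        z = invert_image(img)                              # inversion features
        feats = torch.cat([z.mean(dim=(2, 3)),             # per-channel mean
                           z.std(dim=(2, 3))], dim=1)      # per-channel std
        return self.clf(feats)                             # logit > 0 => fake

detector = InversionFeatureDetector()
print(detector(torch.rand(2, 3, 256, 256)).shape)          # torch.Size([2, 1])
```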
- How Generalizable are Deepfake Image Detectors? An Empirical Study [4.42204674141385]
We present the first empirical study on the generalizability of deepfake detectors.
Our study utilizes six deepfake datasets, five deepfake image detection methods, and two model augmentation approaches.
We find that detectors learn unwanted properties specific to particular synthesis methods and struggle to extract discriminative features.
arXiv Detail & Related papers (2023-08-08T10:30:34Z)
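The study's protocol is essentially a cross-generator evaluation matrix. A minimal sketch of the evaluation half, with hypothetical per-method data loaders:

```python
import torch

def accuracy(model, loader):
    """Fraction of correct real(0)/fake(1) predictions."""
    correct = total = 0
    with torch.no_grad():
        for x, y in loader:
            pred = (model(x) > 0).long().squeeze(1)
            correct += (pred == y).sum().item()
            total += y.numel()
    return correct / total

def cross_generator_eval(model, loaders):
    """Evaluate one trained detector on held-out sets from several
    synthesis methods; generalization gaps show up as accuracy drops."""
    return {name: accuracy(model, dl) for name, dl in loaders.items()}

# Hypothetical per-method loaders; a real study trains on one method
# and tests on all the others.
lin = torch.nn.Linear(3 * 32 * 32, 1)
model = lambda x: lin(x.flatten(1))
loader = [(torch.rand(4, 3, 32, 32), torch.randint(0, 2, (4,)))]
print(cross_generator_eval(model, {"method_a": loader, "method_b": loader}))
```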
- Attention Consistency Refined Masked Frequency Forgery Representation for Generalizing Face Forgery Detection [96.539862328788]
Existing forgery detection methods generalize poorly when judging authenticity in unseen domains.
We propose ACMF, an Attention Consistency refined Masked Frequency forgery representation model for generalizing face forgery detection.
Experimental results on several public face forgery datasets demonstrate the superior performance of the proposed method compared with state-of-the-art methods.
arXiv Detail & Related papers (2023-07-21T08:58:49Z)
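The abstract gives few specifics, but a "masked frequency" representation can be roughly illustrated as masking part of the spectrum and inverting the transform; the band choice below and the omission of ACMF's attention-consistency refinement are simplifications, not the authors' design.

```python
import torch

def masked_frequency_representation(img: torch.Tensor, low_cut: float = 0.25):
    """Zero out low frequencies and return the high-frequency residual image;
    `low_cut` is the fraction of the half-spectrum radius treated as 'low'."""
    f = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1))
    h, w = img.shape[-2:]
    yy, xx = torch.meshgrid(torch.arange(h), torch.arange(w), indexing="ij")
    radius = ((yy - h / 2) ** 2 + (xx - w / 2) ** 2).sqrt()
    mask = (radius > low_cut * min(h, w) / 2).to(f.dtype)  # keep high freqs
    out = torch.fft.ifft2(torch.fft.ifftshift(f * mask, dim=(-2, -1)))
    return out.real

x = torch.rand(1, 3, 64, 64)
print(masked_frequency_representation(x).shape)  # torch.Size([1, 3, 64, 64])
```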
- SeeABLE: Soft Discrepancies and Bounded Contrastive Learning for Exposing Deepfakes [7.553507857251396]
We propose a novel deepfake detector, called SeeABLE, that formalizes the detection problem as a (one-class) out-of-distribution detection task.
SeeABLE pushes perturbed faces towards predefined prototypes using a novel regression-based bounded contrastive loss.
We show that our model convincingly outperforms competing state-of-the-art detectors, while exhibiting highly encouraging generalization capabilities.
arXiv Detail & Related papers (2022-11-21T09:38:30Z)
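A generic stand-in for the prototype-regression idea: pull the embedding of each perturbed face toward its assigned prototype while bounding the per-sample penalty. This sketches the shape of such an objective; it is not SeeABLE's exact loss.

```python
import torch
import torch.nn.functional as F

def bounded_prototype_loss(emb: torch.Tensor, proto: torch.Tensor, bound=1.0):
    """Pull each embedding toward its assigned prototype while capping the
    per-sample penalty (a generic 'bounded' regression-style objective)."""
    d = F.pairwise_distance(emb, proto)           # (B,)
    return torch.clamp(d, max=bound).mean()

emb = F.normalize(torch.randn(8, 128), dim=1)     # embeddings of perturbed faces
proto = F.normalize(torch.randn(8, 128), dim=1)   # prototype per perturbation type
print(bounded_prototype_loss(emb, proto).item())
```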
- Multimodal Graph Learning for Deepfake Detection [10.077496841634135]
Existing deepfake detectors face several challenges in achieving robustness and generalization.
We propose a novel framework, namely Multimodal Graph Learning (MGL), that leverages information from multiple modalities.
Our proposed method aims to effectively identify and utilize distinguishing features for deepfake detection.
arXiv Detail & Related papers (2022-09-12T17:17:49Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real images to the human eye.
We propose a novel fake detection approach that re-synthesizes test images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
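The re-synthesis cue reduces to a reconstruction-error score: reconstruct the test image with a model fitted to real images and treat a large residual as suspicious. The toy autoencoder below is a stand-in for the paper's re-synthesis stages.

```python
import torch
import torch.nn as nn

# Toy autoencoder standing in for a re-synthesizer fitted to real images.
resynth = nn.Sequential(
    nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
    nn.ConvTranspose2d(16, 3, 4, stride=2, padding=1), nn.Sigmoid(),
)

def resynthesis_score(img: torch.Tensor) -> torch.Tensor:
    """Per-image reconstruction error; counterfeits are expected to
    reconstruct worse than the real images the model was trained on."""
    with torch.no_grad():
        recon = resynth(img)
    return (img - recon).abs().mean(dim=(1, 2, 3))  # higher => more suspicious

print(resynthesis_score(torch.rand(4, 3, 64, 64)))
```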
- Robust and Accurate Object Detection via Adversarial Learning [111.36192453882195]
This work augments the fine-tuning stage for object detectors by exploring adversarial examples.
Our approach boosts the performance of state-of-the-art EfficientDets by +1.1 mAP on the object detection benchmark.
arXiv Detail & Related papers (2021-03-23T19:45:26Z)
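A minimal sketch of adversarial-example augmentation during fine-tuning, using a plain classifier and one-step FGSM in place of a full EfficientDet training loop; all components here are illustrative stand-ins.

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

def fgsm(x, y, eps=2 / 255):
    """One-step FGSM example used purely as a training-time augmentation."""
    x = x.clone().requires_grad_(True)
    loss_fn(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

x, y = torch.rand(8, 3, 32, 32), torch.randint(0, 10, (8,))
x_adv = fgsm(x, y)
opt.zero_grad()
loss = loss_fn(model(x), y) + loss_fn(model(x_adv), y)  # clean + adversarial
loss.backward()
opt.step()
print("joint loss:", loss.item())
```

Methods in this family often also keep separate batch-norm statistics for clean and adversarial inputs (AdvProp-style); that detail is omitted here.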
- Generalizing Face Forgery Detection with High-frequency Features [63.33397573649408]
Current CNN-based detectors tend to overfit to method-specific color textures and thus fail to generalize.
We propose to utilize high-frequency noise for face forgery detection.
The first component is a multi-scale high-frequency feature extraction module that extracts high-frequency noise at multiple scales.
The second is a residual-guided spatial attention module that guides the low-level RGB feature extractor to concentrate on forgery traces.
arXiv Detail & Related papers (2021-03-23T08:19:21Z)
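Multi-scale high-frequency extraction can be approximated by subtracting Gaussian blurs at several scales and stacking the residuals; the residual-guided spatial attention module is not reproduced, and kernel sizes and sigmas below are illustrative.

```python
import torch
import torch.nn.functional as F

def gaussian_kernel(sigma: float, ksize: int = 7) -> torch.Tensor:
    """Depthwise 2-D Gaussian kernel for 3-channel images."""
    ax = torch.arange(ksize).float() - ksize // 2
    g = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    k = torch.outer(g, g)
    return (k / k.sum()).repeat(3, 1, 1, 1)       # (3, 1, ksize, ksize)

def multi_scale_high_freq(img: torch.Tensor, sigmas=(1.0, 2.0, 4.0)):
    """High-frequency 'noise' at several scales: image minus its Gaussian
    blur, concatenated channel-wise."""
    residuals = []
    for s in sigmas:
        k = gaussian_kernel(s)
        blur = F.conv2d(img, k, padding=k.shape[-1] // 2, groups=3)
        residuals.append(img - blur)
    return torch.cat(residuals, dim=1)            # (B, 3 * len(sigmas), H, W)

print(multi_scale_high_freq(torch.rand(2, 3, 64, 64)).shape)
```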
- SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain [10.418647759223964]
We show how analysis in the Fourier domain of input images and feature maps can be used to distinguish benign test samples from adversarial images.
We propose two novel detection methods based on this Fourier-domain analysis.
arXiv Detail & Related papers (2021-03-04T12:48:28Z)
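One simple Fourier-domain feature of the kind such detectors use: pool the log-magnitude spectrum into a fixed-size vector and feed it to a linear classifier. The pooling grid and classifier are illustrative, not the paper's exact methods.

```python
import torch
import torch.nn.functional as F

def fourier_features(img: torch.Tensor) -> torch.Tensor:
    """Fixed-size summary of the log-magnitude spectrum of an image."""
    mag = torch.fft.fftshift(torch.fft.fft2(img), dim=(-2, -1)).abs()
    pooled = F.adaptive_avg_pool2d(torch.log1p(mag), 8)   # 8x8 grid per channel
    return pooled.flatten(1)

clf = torch.nn.Linear(3 * 8 * 8, 1)      # benign vs. adversarial logit
x = torch.rand(4, 3, 64, 64)
print(clf(fourier_features(x)).shape)    # torch.Size([4, 1])
```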
- Universal Adversarial Perturbations Through the Lens of Deep Steganography: Towards A Fourier Perspective [78.05383266222285]
A human-imperceptible perturbation can be generated to fool a deep neural network (DNN) for most images.
A similar phenomenon has been observed in the deep steganography task, where a decoder network can retrieve a secret image back from a slightly perturbed cover image.
We propose two new variants of universal perturbations: (1) Universal Secret Adversarial Perturbation (USAP) that simultaneously achieves attack and hiding; (2) high-pass UAP (HP-UAP) that is less visible to the human eye.
arXiv Detail & Related papers (2021-02-12T12:26:39Z)
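The basic universal-perturbation loop behind both variants: optimize a single bounded perturbation over a stream of images so that it flips the model's predictions. USAP's steganographic hiding loss and HP-UAP's high-pass constraint are omitted, and all names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10)).eval()
delta = torch.zeros(1, 3, 32, 32, requires_grad=True)  # universal perturbation
opt = torch.optim.Adam([delta], lr=1e-2)
eps = 10 / 255                                         # visibility budget

for _ in range(100):                     # stream of (here random) images
    x = torch.rand(16, 3, 32, 32)
    with torch.no_grad():
        y = model(x).argmax(dim=1)       # current predictions to flip
    loss = -F.cross_entropy(model(x + delta), y)       # push away from y
    opt.zero_grad()
    loss.backward()
    opt.step()
    with torch.no_grad():
        delta.clamp_(-eps, eps)          # keep the perturbation imperceptible

print("max |delta|:", delta.abs().max().item())
```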
This list is automatically generated from the titles and abstracts of the papers in this site.