FDFtNet: Facing Off Fake Images using Fake Detection Fine-tuning Network
- URL: http://arxiv.org/abs/2001.01265v2
- Date: Mon, 10 Aug 2020 06:08:29 GMT
- Title: FDFtNet: Facing Off Fake Images using Fake Detection Fine-tuning Network
- Authors: Hyeonseong Jeon, Youngoh Bang, Simon S. Woo
- Abstract summary: We propose a light-weight fine-tuning neural network-based architecture called FDFtNet.
Our approach aims to reuse popular pre-trained models with only a few images for fine-tuning to effectively detect fake images.
Our FDFtNet achieves an overall accuracy of 90.29% in detecting fake images generated from the GANs-based dataset.
- Score: 19.246576904646172
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Creating fake images and videos such as "Deepfake" has become much easier
these days due to the advancement in Generative Adversarial Networks (GANs).
Moreover, recent research such as few-shot learning can create highly
realistic personalized fake images from only a few source images. As a result,
the threat of Deepfakes being used for a variety of malicious intents, such as
propagating fake images and videos, has become prevalent, and detecting these
machine-generated fake images is more challenging than ever. In this
work, we propose a light-weight robust fine-tuning neural network-based
classifier architecture called Fake Detection Fine-tuning Network (FDFtNet),
which is capable of detecting many of the new fake face image generation
models, and can be easily combined with existing image classification networks
and fine-tuned on a few datasets. In contrast to many existing methods, our
approach aims to reuse popular pre-trained models with only a few images for
fine-tuning to effectively detect fake images. The core of our approach is to
introduce an image-based self-attention module called Fine-Tune Transformer
that uses only the attention module and the down-sampling layer. This module is
added to the pre-trained model and fine-tuned on a small amount of data to
search for a new feature space for detecting fake images. We experiment with our FDFtNet on
the GANs-based dataset (Progressive Growing GAN) and Deepfake-based dataset
(Deepfake and Face2Face) with a small input image resolution of 64x64 that
complicates detection. Our FDFtNet achieves an overall accuracy of 90.29% in
detecting fake images generated from the GANs-based dataset, outperforming the
state-of-the-art.
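The Fine-Tune Transformer described above combines a self-attention module with a down-sampling layer on top of frozen pre-trained features. A minimal NumPy sketch of that idea is shown below; the single-head attention, the average-pooling stand-in for down-sampling, and all shapes are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over a sequence of feature vectors.
    x: (n, d) flattened patch features; wq/wk/wv: (d, d) projections."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ k.T / np.sqrt(x.shape[1]))
    return scores @ v

def downsample(x, factor=2):
    """Average-pool the sequence dimension by `factor` -- a toy stand-in
    for the down-sampling layer paired with the attention module."""
    n, d = x.shape
    n_trim = (n // factor) * factor
    return x[:n_trim].reshape(-1, factor, d).mean(axis=1)

# Toy forward pass: 16 patch embeddings of dimension 8,
# as if extracted from a frozen pre-trained backbone.
rng = np.random.default_rng(0)
x = rng.standard_normal((16, 8))
wq, wk, wv = (rng.standard_normal((8, 8)) * 0.1 for _ in range(3))
out = downsample(self_attention(x, wq, wk, wv))
print(out.shape)  # (8, 8)
```

In the paper's setting, only small modules like this would be fine-tuned on a few images while the pre-trained backbone supplies the input features.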
Related papers
- On the Effectiveness of Dataset Alignment for Fake Image Detection [28.68129042301801]
A good detector should focus on the generative model's fingerprints while ignoring image properties such as semantic content, resolution, file format, etc.
In this work, we argue that in addition to these algorithmic choices, we also require a well-aligned dataset of real/fake images to train a robust detector.
For the family of LDMs, we propose a very simple way to achieve this: we reconstruct all the real images using the LDM's autoencoder, without any denoising operation. We then train a model to separate these real images from their reconstructions.
arXiv Detail & Related papers (2024-10-15T17:58:07Z)
- AntifakePrompt: Prompt-Tuned Vision-Language Models are Fake Image Detectors [24.78672820633581]
Deep generative models can create remarkably realistic fake images while raising concerns about misinformation and copyright infringement.
Deepfake detection techniques are developed to distinguish between real and fake images.
We propose a novel approach called AntifakePrompt, using Vision-Language Models and prompt tuning techniques.
arXiv Detail & Related papers (2023-10-26T14:23:45Z)
- Parents and Children: Distinguishing Multimodal DeepFakes from Natural Images [60.34381768479834]
Recent advancements in diffusion models have enabled the generation of realistic deepfakes from textual prompts in natural language.
We pioneer a systematic study on the detection of deepfakes generated by state-of-the-art diffusion models.
arXiv Detail & Related papers (2023-04-02T10:25:09Z)
- Towards Universal Fake Image Detectors that Generalize Across Generative Models [36.18427140427858]
We show that the existing paradigm, which consists of training a deep network for real-vs-fake classification, fails to detect fake images from newer breeds of generative models.
We propose to perform real-vs-fake classification without learning, using a feature space not explicitly trained to distinguish real from fake images.
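Classification "without learning" in a fixed feature space can be done with nearest-neighbor distances to banked real and fake features. The sketch below illustrates that idea with random toy features; the feature banks, dimensions, and thresholding are illustrative assumptions (the paper itself uses features from a large pre-trained model, not random vectors):

```python
import numpy as np

def nn_label(query, real_feats, fake_feats):
    """Label a query by its nearest neighbor in a frozen feature space:
    no classifier is trained; distances to banked real/fake features
    alone decide the label (0 = real, 1 = fake)."""
    d_real = np.linalg.norm(real_feats - query, axis=1).min()
    d_fake = np.linalg.norm(fake_feats - query, axis=1).min()
    return int(d_fake < d_real)

# Toy feature banks: reals cluster near the origin, fakes are shifted.
rng = np.random.default_rng(1)
real_feats = rng.standard_normal((50, 4))
fake_feats = rng.standard_normal((50, 4)) + 5.0

print(nn_label(fake_feats[0] + 0.01, real_feats, fake_feats))  # 1 (fake)
print(nn_label(real_feats[0] + 0.01, real_feats, fake_feats))  # 0 (real)
```

Because nothing is fit to the real/fake boundary, such a detector can generalize to generators unseen when the feature extractor was trained, which is the paper's central claim.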
arXiv Detail & Related papers (2023-02-20T18:59:04Z)
- Deep Convolutional Pooling Transformer for Deepfake Detection [54.10864860009834]
We propose a deep convolutional Transformer to incorporate decisive image features both locally and globally.
Specifically, we apply convolutional pooling and re-attention to enrich the extracted features and enhance efficacy.
The proposed solution consistently outperforms several state-of-the-art baselines on both within- and cross-dataset experiments.
arXiv Detail & Related papers (2022-09-12T15:05:41Z)
- Deepfake Network Architecture Attribution [23.375381198124014]
Existing works on fake image attribution perform multi-class classification on several Generative Adversarial Network (GAN) models.
We present the first study on Deepfake Network Architecture Attribution, which attributes fake images at the architecture level.
arXiv Detail & Related papers (2022-02-28T14:54:30Z)
- DA-FDFtNet: Dual Attention Fake Detection Fine-tuning Network to Detect Various AI-Generated Fake Images [21.030153777110026]
It has become much easier to create fake images such as "Deepfakes".
Recent research has introduced few-shot learning, which uses a small amount of training data to produce fake images and videos more effectively.
In this work, we propose the Dual Attention Fine-tuning Network (DA-FDFtNet) to detect manipulated fake face images.
arXiv Detail & Related papers (2021-12-22T16:25:24Z)
- Beyond the Spectrum: Detecting Deepfakes via Re-Synthesis [69.09526348527203]
Deep generative models have led to highly realistic media, known as deepfakes, that are often indistinguishable from real media to the human eye.
We propose a novel fake detection method that re-synthesizes testing images and extracts visual cues for detection.
We demonstrate the improved effectiveness, cross-GAN generalization, and robustness against perturbations of our approach in a variety of detection scenarios.
arXiv Detail & Related papers (2021-05-29T21:22:24Z)
- M2TR: Multi-modal Multi-scale Transformers for Deepfake Detection [74.19291916812921]
Forged images generated by Deepfake techniques pose a serious threat to the trustworthiness of digital information.
In this paper, we aim to capture the subtle manipulation artifacts at different scales for Deepfake detection.
We introduce a high-quality Deepfake dataset, SR-DF, which consists of 4,000 DeepFake videos generated by state-of-the-art face swapping and facial reenactment methods.
arXiv Detail & Related papers (2021-04-20T05:43:44Z)
- Multi-attentional Deepfake Detection [79.80308897734491]
Face forgery by deepfake is widely spread over the internet and has raised severe societal concerns.
We propose a new multi-attentional deepfake detection network. Specifically, it consists of three key components: 1) multiple spatial attention heads that make the network attend to different local parts; 2) a textural feature enhancement block that zooms in on the subtle artifacts in shallow features; 3) aggregation of the low-level textural features and high-level semantic features, guided by the attention maps.
arXiv Detail & Related papers (2021-03-03T13:56:14Z)
- What makes fake images detectable? Understanding properties that generalize [55.4211069143719]
Deep networks can still pick up on subtle artifacts in doctored images.
We seek to understand what properties of fake images make them detectable.
We show a technique to exaggerate these detectable properties.
arXiv Detail & Related papers (2020-08-24T17:50:28Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.