X-Transfer: A Transfer Learning-Based Framework for GAN-Generated Fake
Image Detection
- URL: http://arxiv.org/abs/2310.04639v2
- Date: Tue, 30 Jan 2024 05:18:06 GMT
- Title: X-Transfer: A Transfer Learning-Based Framework for GAN-Generated Fake
Image Detection
- Authors: Lei Zhang, Hao Chen, Shu Hu, Bin Zhu, Ching Sheng Lin, Xi Wu, Jinrong
Hu, Xin Wang
- Abstract summary: The misuse of GANs for generating deceptive images, such as face replacement, raises significant security concerns.
This paper introduces a novel GAN-generated image detection algorithm called X-Transfer.
It enhances transfer learning by utilizing two neural networks that employ interleaved parallel gradient transmission.
- Score: 33.31312811230408
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have remarkably advanced in diverse
domains, especially image generation and editing. However, the misuse of GANs
for generating deceptive images, such as face replacement, raises significant
security concerns, which have gained widespread attention. Therefore, it is
urgent to develop effective detection methods to distinguish between real and
fake images. Current research centers on the application of transfer learning.
Nevertheless, it encounters challenges such as forgetting knowledge from the
original dataset and inadequate performance when dealing with imbalanced
training data. To alleviate these issues, this paper introduces
a novel GAN-generated image detection algorithm called X-Transfer, which
enhances transfer learning by utilizing two neural networks that employ
interleaved parallel gradient transmission. In addition, we combine AUC loss
and cross-entropy loss to improve the model's performance. We carry out
comprehensive experiments on multiple facial image datasets. The results show
that our model outperforms the general transfer-learning approach, with the
best metric reaching 99.04%, an improvement of approximately 10%. Furthermore,
we demonstrate excellent performance on non-face datasets, validating the
model's generality and broader applicability.
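The abstract states that X-Transfer combines an AUC loss with cross-entropy loss but does not spell out the formulation. The sketch below shows one common way to pair the two: binary cross-entropy plus a pairwise squared-hinge AUC surrogate. The surrogate form, the margin, and the weighting factor lam are illustrative assumptions, not details from the paper, and the two-network interleaved gradient scheme is omitted.

```python
# Hedged sketch of a combined cross-entropy + AUC-surrogate loss.
# The pairwise squared-hinge surrogate, margin, and `lam` weighting are
# assumptions for illustration; the paper does not specify the exact form.
import torch
import torch.nn.functional as F


def combined_ce_auc_loss(logits, labels, margin=1.0, lam=0.5):
    """logits: (N,) raw scores; labels: (N,) in {0, 1}, where 1 = fake."""
    # Standard binary cross-entropy on the raw logits.
    ce = F.binary_cross_entropy_with_logits(logits, labels.float())

    # Pairwise AUC surrogate: each positive (fake) score should exceed each
    # negative (real) score by at least `margin`; violations get a squared
    # hinge penalty, which keeps the term differentiable.
    pos, neg = logits[labels == 1], logits[labels == 0]
    if pos.numel() == 0 or neg.numel() == 0:
        return ce  # single-class batch: fall back to cross-entropy only
    diff = pos.unsqueeze(1) - neg.unsqueeze(0)   # shape (num_pos, num_neg)
    auc_surrogate = torch.clamp(margin - diff, min=0).pow(2).mean()

    return ce + lam * auc_surrogate
```

In an imbalanced setting the pairwise term directly rewards ranking fake images above real ones, which is the property AUC measures, while the cross-entropy term keeps the per-sample scores calibrated.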
Related papers
- Large-Scale Data-Free Knowledge Distillation for ImageNet via Multi-Resolution Data Generation [53.95204595640208]
Data-Free Knowledge Distillation (DFKD) is an advanced technique that enables knowledge transfer from a teacher model to a student model without relying on original training data.
Previous approaches have generated synthetic images at high resolutions without leveraging information from real images.
The proposed method, MUSE, generates images at lower resolutions while using Class Activation Maps (CAMs) to ensure that the generated images retain critical, class-specific features.
arXiv Detail & Related papers (2024-11-26T02:23:31Z)
- DA-HFNet: Progressive Fine-Grained Forgery Image Detection and Localization Based on Dual Attention [12.36906630199689]
We construct the DA-HFNet forged-image dataset using text- or image-assisted GAN and diffusion models.
Our goal is to utilize a hierarchical progressive network to capture forged artifacts at different scales for detection and localization.
arXiv Detail & Related papers (2024-06-03T16:13:33Z)
- Deep Domain Adaptation: A Sim2Real Neural Approach for Improving Eye-Tracking Systems [80.62854148838359]
Eye image segmentation is a critical step in eye tracking that has great influence over the final gaze estimate.
We use dimensionality-reduction techniques to measure the overlap between the target eye images and synthetic training data.
Our methods result in robust, improved performance when tackling the discrepancy between simulation and real-world data samples.
arXiv Detail & Related papers (2024-03-23T22:32:06Z)
- GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z)
- Robust face anti-spoofing framework with Convolutional Vision Transformer [1.7596501992526474]
This study proposes a convolutional vision transformer-based framework that achieves robust performance on various unseen domain data.
It also achieves the highest average rank in the sub-protocols of the cross-dataset setting, outperforming the other nine benchmark models for domain generalization.
arXiv Detail & Related papers (2023-07-24T00:03:09Z)
- Performance of GAN-based augmentation for deep learning COVID-19 image classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA generative adversarial network is trained on a limited set of COVID-19 chest X-ray images.
arXiv Detail & Related papers (2023-04-18T15:39:58Z)
- Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z)
- Data Instance Prior for Transfer Learning in GANs [25.062518859107946]
We propose a novel transfer learning method for GANs in the limited data domain.
We show that the proposed method effectively transfers knowledge to domains with few target images.
We also show the utility of data instance prior in large-scale unconditional image generation and image editing tasks.
arXiv Detail & Related papers (2020-12-08T07:40:30Z)
- T-GD: Transferable GAN-generated Images Detection Framework [16.725880610265378]
We present the Transferable GAN-generated Images Detection framework, T-GD.
T-GD is composed of a teacher and a student model that can iteratively teach and evaluate each other to improve the detection performance.
To train the student model, we inject noise by mixing up the source and target datasets, while constraining the weight variation to preserve the starting point (see the sketch after this list).
arXiv Detail & Related papers (2020-08-10T13:20:19Z)
- Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning method for facial expression synthesis and recognition to achieve effective facial expression recognition (FER).
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)
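The T-GD entry above describes training the student by mixing up the source and target datasets while constraining weight variation to preserve the starting point; a minimal sketch of one such training step follows. The Beta-sampled mixup ratio, the L2 penalty on drift from the initial weights, and all function and variable names are assumptions for illustration, not T-GD's exact procedure.

```python
# Hypothetical T-GD-style student update: mixup across source/target batches
# plus an L2 penalty that keeps the student near its starting weights.
import torch
import torch.nn.functional as F


def student_step(student, optimizer, init_state, src_batch, tgt_batch,
                 alpha=0.4, reg_weight=1e-2):
    (x_src, y_src), (x_tgt, y_tgt) = src_batch, tgt_batch

    # Mix source and target images (and weight their labels) with a
    # Beta-sampled ratio, injecting cross-dataset "noise" into the inputs.
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    x_mix = lam * x_src + (1.0 - lam) * x_tgt
    logits = student(x_mix).squeeze(-1)
    loss = lam * F.binary_cross_entropy_with_logits(logits, y_src.float()) + \
        (1.0 - lam) * F.binary_cross_entropy_with_logits(logits, y_tgt.float())

    # Constrain weight variation: penalize drift from the initial parameters
    # captured before fine-tuning, e.g.
    #   init_state = {n: p.detach().clone() for n, p in student.named_parameters()}
    drift = sum((p - init_state[n]).pow(2).sum()
                for n, p in student.named_parameters())
    loss = loss + reg_weight * drift

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```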
This list is automatically generated from the titles and abstracts of the papers on this site.