X-Transfer: A Transfer Learning-Based Framework for GAN-Generated Fake
Image Detection
- URL: http://arxiv.org/abs/2310.04639v2
- Date: Tue, 30 Jan 2024 05:18:06 GMT
- Authors: Lei Zhang, Hao Chen, Shu Hu, Bin Zhu, Ching Sheng Lin, Xi Wu, Jinrong
Hu, Xin Wang
- Abstract summary: The misuse of GANs for generating deceptive images, such as face replacement, raises significant security concerns.
This paper introduces a novel GAN-generated image detection algorithm called X-Transfer.
It enhances transfer learning by utilizing two neural networks that employ interleaved parallel gradient transmission.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Generative adversarial networks (GANs) have remarkably advanced in diverse
domains, especially image generation and editing. However, the misuse of GANs
for generating deceptive images, such as face replacement, raises significant
security concerns, which have gained widespread attention. Therefore, it is
urgent to develop effective detection methods to distinguish between real and
fake images. Current research centers around the application of transfer
learning. Nevertheless, it encounters challenges such as knowledge forgetting
from the original dataset and inadequate performance when dealing with
imbalanced data during training. To alleviate these issues, this paper introduces
a novel GAN-generated image detection algorithm called X-Transfer, which
enhances transfer learning by utilizing two neural networks that employ
interleaved parallel gradient transmission. In addition, we combine AUC loss
and cross-entropy loss to improve the model's performance. We carry out
comprehensive experiments on multiple facial image datasets. The results show
that our model outperforms the general transfer-learning approach: the best
metric reaches 99.04%, an improvement of approximately 10 percentage points. Furthermore,
we demonstrate excellent performance on non-face datasets, validating its
generality and broader application prospects.
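The abstract does not give the exact form of the AUC loss it combines with cross-entropy. As a minimal sketch, assuming a common pairwise squared-hinge surrogate of 1 - AUC (the paper's actual formulation and weighting `alpha` may differ), the combined objective could look like:

```python
import numpy as np

def cross_entropy(scores, labels, eps=1e-12):
    # Binary cross-entropy on sigmoid probabilities.
    p = 1.0 / (1.0 + np.exp(-scores))
    return -np.mean(labels * np.log(p + eps) + (1 - labels) * np.log(1 - p + eps))

def auc_surrogate(scores, labels, margin=1.0):
    # Pairwise squared-hinge surrogate of 1 - AUC: penalizes every
    # positive (fake) score that does not exceed a negative (real)
    # score by at least `margin`. Differentiable, unlike AUC itself.
    pos = scores[labels == 1]
    neg = scores[labels == 0]
    diffs = pos[:, None] - neg[None, :]           # all pos/neg score gaps
    return np.mean(np.maximum(0.0, margin - diffs) ** 2)

def combined_loss(scores, labels, alpha=0.5):
    # Weighted combination of the AUC surrogate and cross-entropy;
    # `alpha` is a hypothetical mixing weight, not taken from the paper.
    return alpha * auc_surrogate(scores, labels) + (1 - alpha) * cross_entropy(scores, labels)
```

The AUC term directly targets ranking quality, which is what makes it attractive for imbalanced real/fake data: unlike cross-entropy, its pairwise form is insensitive to the class ratio.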
Related papers
- Adversarial Semantic Augmentation for Training Generative Adversarial Networks under Limited Data [27.27230943686822]
We propose an adversarial semantic augmentation (ASA) technique to enlarge the training data at the semantic level instead of the image level.
Our method consistently improves synthesis quality under various data regimes.
arXiv Detail & Related papers (2025-02-02T13:50:38Z) - Semi-supervised Semantic Segmentation for Remote Sensing Images via Multi-scale Uncertainty Consistency and Cross-Teacher-Student Attention [59.19580789952102]
This paper proposes a novel semi-supervised Multi-Scale Uncertainty and Cross-Teacher-Student Attention (MUCA) model for RS image semantic segmentation tasks.
MUCA constrains the consistency among feature maps at different layers of the network by introducing a multi-scale uncertainty consistency regularization.
MUCA utilizes a Cross-Teacher-Student attention mechanism to guide the student network toward more discriminative feature representations.
arXiv Detail & Related papers (2025-01-18T11:57:20Z) - A Bias-Free Training Paradigm for More General AI-generated Image Detection [15.421102443599773]
A well-designed forensic detector should detect generator specific artifacts rather than reflect data biases.
We propose B-Free, a bias-free training paradigm, where fake images are generated from real ones.
We show significant improvements in both generalization and robustness over state-of-the-art detectors.
arXiv Detail & Related papers (2024-12-23T15:54:32Z) - Understanding and Improving Training-Free AI-Generated Image Detections with Vision Foundation Models [68.90917438865078]
Deepfake techniques for facial synthesis and editing, powered by generative models, pose serious security risks.
In this paper, we investigate how detection performance varies across model backbones, types, and datasets.
We introduce Contrastive Blur, which enhances performance on facial images, and MINDER, which addresses noise type bias, balancing performance across domains.
arXiv Detail & Related papers (2024-11-28T13:04:45Z) - GenFace: A Large-Scale Fine-Grained Face Forgery Benchmark and Cross Appearance-Edge Learning [50.7702397913573]
The rapid advancement of photorealistic generators has reached a critical juncture where authentic and manipulated images are increasingly indistinguishable.
Although a number of face forgery datasets are publicly available, the forged faces are mostly generated using GAN-based synthesis technology.
We propose a large-scale, diverse, and fine-grained high-fidelity dataset, namely GenFace, to facilitate the advancement of deepfake detection.
arXiv Detail & Related papers (2024-02-03T03:13:50Z) - Performance of GAN-based augmentation for deep learning COVID-19 image
classification [57.1795052451257]
The biggest challenge in the application of deep learning to the medical domain is the availability of training data.
Data augmentation is a typical methodology used in machine learning when confronted with a limited data set.
In this work, a StyleGAN2-ADA model of Generative Adversarial Networks is trained on the limited COVID-19 chest X-ray image set.
arXiv Detail & Related papers (2023-04-18T15:39:58Z) - Unsupervised Domain Transfer with Conditional Invertible Neural Networks [83.90291882730925]
We propose a domain transfer approach based on conditional invertible neural networks (cINNs).
Our method inherently guarantees cycle consistency through its invertible architecture, and network training can efficiently be conducted with maximum likelihood.
Our method enables the generation of realistic spectral data and outperforms the state of the art on two downstream classification tasks.
arXiv Detail & Related papers (2023-03-17T18:00:27Z) - T-GD: Transferable GAN-generated Images Detection Framework [16.725880610265378]
We present T-GD, a Transferable GAN-generated Image Detection framework.
T-GD is composed of a teacher and a student model that can iteratively teach and evaluate each other to improve the detection performance.
To train the student model, we inject noise by mixing up the source and target datasets, while constraining the weight variation to preserve the starting point.
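The T-GD summary describes two mechanisms: noise injection by mixing source and target samples, and constraining how far the student's weights drift from their starting point. A minimal sketch of both ideas (the mixing schedule, constraint form, and all parameter names here are assumptions, not taken from the paper):

```python
import numpy as np

rng = np.random.default_rng(0)

def mixup(x_src, x_tgt, alpha=0.2):
    # Inject noise by convexly mixing a source and a target sample;
    # a Beta-distributed coefficient is one common choice.
    lam = rng.beta(alpha, alpha)
    return lam * x_src + (1.0 - lam) * x_tgt

def constrained_update(w, grad, w_start, lr=0.1, radius=0.5):
    # Gradient step followed by projection onto an L2 ball around the
    # pre-trained starting point, so the student cannot drift far from
    # the knowledge it started with.
    w_new = w - lr * grad
    delta = w_new - w_start
    norm = np.linalg.norm(delta)
    if norm > radius:
        w_new = w_start + delta * (radius / norm)
    return w_new
```

The projection step is one simple way to "constrain the weight variation to preserve the starting point"; a penalty term on `||w - w_start||` added to the loss would serve the same purpose.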
arXiv Detail & Related papers (2020-08-10T13:20:19Z) - Joint Deep Learning of Facial Expression Synthesis and Recognition [97.19528464266824]
We propose a novel joint deep learning of facial expression synthesis and recognition method for effective FER.
The proposed method involves a two-stage learning procedure. Firstly, a facial expression synthesis generative adversarial network (FESGAN) is pre-trained to generate facial images with different facial expressions.
In order to alleviate the problem of data bias between the real images and the synthetic images, we propose an intra-class loss with a novel real data-guided back-propagation (RDBP) algorithm.
arXiv Detail & Related papers (2020-02-06T10:56:00Z)