Robust Attentive Deep Neural Network for Exposing GAN-generated Faces
- URL: http://arxiv.org/abs/2109.02167v1
- Date: Sun, 5 Sep 2021 21:22:39 GMT
- Title: Robust Attentive Deep Neural Network for Exposing GAN-generated Faces
- Authors: Hui Guo, Shu Hu, Xin Wang, Ming-Ching Chang, Siwei Lyu
- Abstract summary: We propose a robust, attentive, end-to-end network that can spot GAN-generated faces by analyzing their eye inconsistencies.
Our deep network addresses the imbalance learning issues by considering the AUC loss and the traditional cross-entropy loss jointly.
- Score: 40.15016121723183
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: GAN-based techniques that generate and synthesize realistic faces have caused
severe social concerns and security problems. Existing methods for detecting
GAN-generated faces can perform well on limited public datasets. However,
images from existing public datasets do not represent real-world scenarios well
enough in terms of view variations and data distributions (where real faces
largely outnumber synthetic faces). The state-of-the-art methods do not
generalize well to real-world problems and lack interpretability in their
detection results. The performance of existing GAN-face detection models degrades
significantly when facing imbalanced data distributions. To address these
shortcomings, we propose a robust, attentive, end-to-end network that can spot
GAN-generated faces by analyzing their eye inconsistencies. Specifically, our
model learns to identify inconsistent eye components by localizing and
comparing the iris artifacts between the two eyes automatically. Our deep
network addresses the imbalance learning issues by considering the AUC loss and
the traditional cross-entropy loss jointly. Comprehensive evaluations on the
FFHQ dataset under both balanced and imbalanced scenarios demonstrate the
superiority of the proposed method.
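As a rough illustration of the joint objective described above, a pairwise surrogate for AUC can be combined with cross-entropy in a weighted sum. The sketch below is a minimal pure-Python illustration under stated assumptions: the squared-hinge surrogate, the `margin`, and the weighting `lam` are common choices in AUC-maximization work, not details confirmed by the paper.

```python
import math

def bce_loss(scores, labels):
    # Standard binary cross-entropy on sigmoid-transformed logits.
    eps = 1e-12
    total = 0.0
    for s, y in zip(scores, labels):
        p = 1.0 / (1.0 + math.exp(-s))
        total += -(y * math.log(p + eps) + (1 - y) * math.log(1 - p + eps))
    return total / len(scores)

def auc_hinge_loss(scores, labels, margin=1.0):
    # Pairwise squared-hinge surrogate for AUC: penalize every
    # positive/negative pair whose score gap is below `margin`.
    # Because it is defined over pairs, it is insensitive to the
    # raw class ratio -- the motivation for using it under imbalance.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.0
    total = 0.0
    for sp in pos:
        for sn in neg:
            total += max(0.0, margin - (sp - sn)) ** 2
    return total / (len(pos) * len(neg))

def joint_loss(scores, labels, lam=0.5):
    # Weighted combination of the two terms; `lam` is an
    # illustrative hyperparameter, not the paper's setting.
    return bce_loss(scores, labels) + lam * auc_hinge_loss(scores, labels)
```

In a real training loop these terms would be computed on differentiable tensors per mini-batch; the pure-Python version here only shows the shape of the objective.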
Related papers
- Towards Fair and Robust Face Parsing for Generative AI: A Multi-Objective Approach [10.00430939898858]
We propose a multi-objective learning framework that optimizes accuracy, fairness, and robustness in face parsing.
Our results show that fairness-aware and robust segmentation improves photorealism and consistency in face generation.
Our findings demonstrate that multi-objective face parsing improves demographic consistency and robustness, leading to higher-quality GAN-based synthesis.
arXiv Detail & Related papers (2025-02-06T00:41:35Z) - Fairer Analysis and Demographically Balanced Face Generation for Fairer Face Verification [69.04239222633795]
Face recognition and verification are two computer vision tasks whose performance has advanced with the introduction of deep representations.
Ethical, legal, and technical challenges due to the sensitive nature of face data and biases in real-world training datasets hinder their development.
We introduce a new controlled generation pipeline that improves fairness.
arXiv Detail & Related papers (2024-12-04T14:30:19Z) - Analyzing the Effect of Combined Degradations on Face Recognition [0.0]
We analyze the impact of single and combined degradations using a real-world degradation pipeline extended with under/over-exposure conditions.
Results reveal that single and combined degradations show dissimilar model behavior.
This work emphasizes the importance of accounting for real-world complexity to assess the robustness of face recognition models in real-world settings.
arXiv Detail & Related papers (2024-06-04T09:29:59Z) - Generalized Face Liveness Detection via De-fake Face Generator [52.23271636362843]
Previous Face Anti-spoofing (FAS) methods face the challenge of generalizing to unseen domains.
We propose an Anomalous cue Guided FAS (AG-FAS) method, which can effectively leverage large-scale additional real faces.
Our method achieves state-of-the-art results under cross-domain evaluations with unseen scenarios and unknown presentation attacks.
arXiv Detail & Related papers (2024-01-17T06:59:32Z) - GANDiffFace: Controllable Generation of Synthetic Datasets for Face
Recognition with Realistic Variations [2.7467281625529134]
This study introduces GANDiffFace, a novel framework for the generation of synthetic datasets for face recognition.
GANDiffFace combines the power of Generative Adversarial Networks (GANs) and Diffusion models to overcome the limitations of existing synthetic datasets.
arXiv Detail & Related papers (2023-05-31T15:49:12Z) - On Recognizing Occluded Faces in the Wild [10.420394952839242]
We present the Real World Occluded Faces dataset.
This dataset contains faces with both the upper face occluded (due to sunglasses) and the lower face occluded (due to masks).
It is observed that the performance drop is far less when the models are tested on synthetically generated occluded faces.
arXiv Detail & Related papers (2021-09-08T14:20:10Z) - Heterogeneous Face Frontalization via Domain Agnostic Learning [74.86585699909459]
We propose a domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations.
DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discriminations for better synthesis.
arXiv Detail & Related papers (2021-07-17T20:41:41Z) - 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial
Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z) - InterFaceGAN: Interpreting the Disentangled Face Representation Learned
by GANs [73.27299786083424]
We propose a framework called InterFaceGAN to interpret the disentangled face representation learned by state-of-the-art GAN models.
We first find that GANs learn various semantics in some linear subspaces of the latent space.
We then conduct a detailed study on the correlation between different semantics and manage to better disentangle them via subspace projection.
arXiv Detail & Related papers (2020-05-18T18:01:22Z)
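The subspace-projection idea mentioned in the InterFaceGAN summary can be sketched as removing, from one semantic direction, its component along another. The function name and toy vectors below are hypothetical illustrations of the general linear-algebra step, not InterFaceGAN's actual code.

```python
def project_out(n1, n2):
    # Given two (assumed unit-norm) latent directions, subtract from n1
    # its component along n2. Moving along the result changes the first
    # semantic while leaving the second approximately fixed -- the
    # conditional-manipulation step via subspace projection.
    dot = sum(a * b for a, b in zip(n1, n2))
    return [a - dot * b for a, b in zip(n1, n2)]
```

The returned direction is orthogonal to `n2` by construction, which is what lets the two semantics be edited more independently.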
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented here and is not responsible for any consequences arising from its use.