Deep Composite Face Image Attacks: Generation, Vulnerability and
Detection
- URL: http://arxiv.org/abs/2211.11039v3
- Date: Mon, 20 Mar 2023 20:06:42 GMT
- Title: Deep Composite Face Image Attacks: Generation, Vulnerability and
Detection
- Authors: Jag Mohan Singh, Raghavendra Ramachandra
- Abstract summary: Face manipulation attacks have drawn the attention of biometric researchers because of the vulnerability of Face Recognition Systems (FRS) to such attacks.
This paper proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based on facial attributes using Generative Adversarial Networks (GANs).
- Score: 3.6833521970861685
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face manipulation attacks have drawn the attention of biometric researchers
because of the vulnerability of Face Recognition Systems (FRS) to such attacks. This paper
proposes a novel scheme to generate Composite Face Image Attacks (CFIA) based
on facial attributes using Generative Adversarial Networks (GANs). Given the
face images corresponding to two unique data subjects, the proposed CFIA method
will independently generate the segmented facial attributes, then blend them
using transparent masks to generate the CFIA samples. We generate $526$ unique
CFIA combinations of facial attributes for each pair of contributory data
subjects. Extensive experiments are carried out on our newly generated CFIA
dataset consisting of 1000 unique identities with 2000 bona fide samples and
526000 CFIA samples, thus resulting in an overall 528000 face image samples.
We present a sequence of experiments to benchmark the attack potential of
CFIA samples using four different automatic FRS. We introduce a new metric
named Generalized Morphing Attack Potential (G-MAP) to benchmark the
vulnerability of generated attacks on FRS effectively. Additional experiments
are performed on the representative subset of the CFIA dataset to benchmark
both perceptual quality and human observer response. Finally, the CFIA
detection performance is benchmarked using three different single image based
face Morphing Attack Detection (MAD) algorithms. The source code of the
proposed method together with CFIA dataset will be made publicly available:
\url{https://github.com/jagmohaniiit/LatentCompositionCode}
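The attribute-level blending described in the abstract can be illustrated with a short sketch. This is a hypothetical, simplified stand-in and not the authors' released implementation: it assumes per-attribute images already generated independently for the two subjects, plus soft (transparent) masks, and combines them with per-pixel alpha compositing. It also checks the dataset arithmetic reported in the abstract.

```python
import numpy as np

def blend_attributes(attr_images, attr_masks):
    """Blend per-attribute face images (H, W, 3) using soft masks (H, W).

    attr_images: list of float arrays in [0, 1], one per facial attribute,
                 each generated independently (e.g. by a GAN).
    attr_masks:  list of float arrays in [0, 1]; attr_masks[i] gives the
                 transparency weight of attribute i at each pixel.
    """
    h, w, _ = attr_images[0].shape
    out = np.zeros((h, w, 3))
    total = np.zeros((h, w, 1))
    for img, mask in zip(attr_images, attr_masks):
        m = mask[..., None]          # broadcast mask over RGB channels
        out += m * img
        total += m
    # Normalise where masks overlap so weights sum to 1 per pixel.
    return out / np.clip(total, 1e-8, None)

# Dataset arithmetic from the abstract: 526 attribute combinations per
# pair of contributory subjects, 1000 unique identities, 2000 bona fide.
cfia_samples = 526 * 1000          # 526000 CFIA samples
bona_fide = 2000
assert cfia_samples + bona_fide == 528000   # overall sample count
```

`blend_attributes` is only a generic alpha-compositing step; the paper's actual pipeline segments attributes per subject before composing them, and the released code at the linked repository is the authoritative reference.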
Related papers
- SynMorph: Generating Synthetic Face Morphing Dataset with Mated Samples [13.21801650767302]
We propose a new method to generate a synthetic face morphing dataset with 2450 identities and more than 100k morphs.
The proposed synthetic face morphing dataset is unique for its high-quality samples, different types of morphing algorithms, and the generalization for both single and differential morphing attack detection algorithms.
arXiv Detail & Related papers (2024-09-09T13:29:53Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Joint Physical-Digital Facial Attack Detection Via Simulating Spoofing Clues [17.132170955620047]
We propose an innovative approach to jointly detect physical and digital attacks within a single model.
Our approach mainly contains two types of data augmentation, which we call Simulated Physical Spoofing Clues augmentation (SPSC) and Simulated Digital Spoofing Clues augmentation (SDSC).
Our method won first place in "Unified Physical-Digital Face Attack Detection" of the 5th Face Anti-spoofing Challenge@CVPR2024.
arXiv Detail & Related papers (2024-04-12T13:01:22Z)
- Breaking Free: How to Hack Safety Guardrails in Black-Box Diffusion Models! [52.0855711767075]
EvoSeed is an evolutionary strategy-based algorithmic framework for generating photo-realistic natural adversarial samples.
We employ CMA-ES to optimize the search for an initial seed vector, which, when processed by the Conditional Diffusion Model, results in a natural adversarial sample misclassified by the model.
Experiments show that generated adversarial images are of high image quality, raising concerns about generating harmful content bypassing safety classifiers.
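The seed-search loop summarized above can be sketched abstractly. This is a toy illustration only: a simple (1+1) evolution strategy stands in for CMA-ES, and a stand-in quadratic objective replaces the diffusion model plus classifier scoring; every name below is hypothetical.

```python
import random

def one_plus_one_es(fitness, dim, iters=200, sigma=0.3, seed=0):
    """Minimal (1+1) evolution strategy: mutate the current seed vector
    and keep the mutant only if it improves fitness (a simplified
    stand-in for CMA-ES, which also adapts the covariance matrix)."""
    rng = random.Random(seed)
    x = [rng.gauss(0.0, 1.0) for _ in range(dim)]
    best = fitness(x)
    for _ in range(iters):
        cand = [v + rng.gauss(0.0, sigma) for v in x]
        f = fitness(cand)
        if f > best:
            x, best = cand, f
    return x, best

# Stand-in fitness: in the actual method this would decode the seed with
# the Conditional Diffusion Model and score how strongly the target
# classifier misclassifies the image. Here we just maximise a toy function.
def toy_fitness(v):
    return -sum((vi - 1.0) ** 2 for vi in v)

seed_vec, score = one_plus_one_es(toy_fitness, dim=4)
```

The returned `seed_vec` plays the role of the optimized initial seed; in the real pipeline the fitness evaluation, not the search loop, dominates the cost.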
arXiv Detail & Related papers (2024-02-07T09:39:29Z)
- Presentation Attack detection using Wavelet Transform and Deep Residual Neural Net [5.425986555749844]
Biometric systems can be deceived by imposters in several ways.
Biometric images, especially the iris and face, are vulnerable to different presentation attacks.
This research applies deep learning approaches to mitigate presentation attacks in a biometric access control system.
arXiv Detail & Related papers (2023-11-23T20:21:49Z)
- An Open Patch Generator based Fingerprint Presentation Attack Detection using Generative Adversarial Network [3.5558308387389626]
Presentation Attack (PA) or spoofing is one of the threats caused by presenting a spoof of a genuine fingerprint to the sensor of Automatic Fingerprint Recognition Systems (AFRS).
This paper proposes a CNN-based technique that uses a Generative Adversarial Network (GAN) to augment the dataset with spoof samples generated from the proposed Open Patch Generator (OPG).
Overall accuracies of 96.20%, 94.97%, and 92.90% have been achieved on the LivDet 2015, 2017, and 2019 databases, respectively, under the LivDet protocol scenarios.
arXiv Detail & Related papers (2023-06-06T10:52:06Z)
- Attribute-Guided Encryption with Facial Texture Masking [64.77548539959501]
We propose Attribute Guided Encryption with Facial Texture Masking to protect users from unauthorized facial recognition systems.
Our proposed method produces more natural-looking encrypted images than state-of-the-art methods.
arXiv Detail & Related papers (2023-05-22T23:50:43Z)
- Wild Face Anti-Spoofing Challenge 2023: Benchmark and Results [73.98594459933008]
Face anti-spoofing (FAS) is an essential mechanism for safeguarding the integrity of automated face recognition systems.
The limited generalization of existing FAS methods can be attributed to the scarcity and lack of diversity in publicly available FAS datasets.
We introduce the Wild Face Anti-Spoofing dataset, a large-scale, diverse FAS dataset collected in unconstrained settings.
arXiv Detail & Related papers (2023-04-12T10:29:42Z)
- Surveillance Face Anti-spoofing [81.50018853811895]
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks.
We propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality.
A large number of experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
arXiv Detail & Related papers (2023-01-03T07:09:57Z)
- Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings including blur, noise, low resolution, JPEG compression artifacts, and the combination of them (full degradation).
arXiv Detail & Related papers (2022-06-08T06:34:24Z)
- MIPGAN -- Generating Strong and High Quality Morphing Attacks Using Identity Prior Driven GAN [22.220940043294334]
We present a new approach for generating strong attacks using an Identity Prior Driven Generative Adversarial Network.
The proposed MIPGAN is derived from the StyleGAN with a newly formulated loss function exploiting perceptual quality and identity factor.
We demonstrate the proposed approach's applicability to generate strong morphing attacks by evaluating the vulnerability of both commercial and deep learning-based Face Recognition Systems to them.
arXiv Detail & Related papers (2020-09-03T15:08:38Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.