Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
- URL: http://arxiv.org/abs/2205.14851v3
- Date: Tue, 2 May 2023 03:03:29 GMT
- Title: Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models
- Authors: Songlin Yang, Wei Wang, Chenye Xu, Ziwen He, Bo Peng, Jing Dong
- Abstract summary: Face anti-spoofing aims to discriminate the spoofing face images (e.g., printed photos) from live ones.
Previous works applied adversarial attack methods to evaluate face anti-spoofing performance.
We propose a novel framework to expose the fine-grained adversarial vulnerability of the face anti-spoofing models.
- Score: 13.057451851710924
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing aims to discriminate spoofing face images (e.g.,
printed photos) from live ones. However, adversarial examples greatly challenge
its credibility, since adding a small amount of perturbation noise can easily
change the predictions. Previous works applied adversarial attack methods to
evaluate face anti-spoofing performance without any fine-grained analysis of
which model architecture or auxiliary feature is vulnerable to the adversary.
To address this problem, we propose a novel framework to expose the
fine-grained adversarial vulnerability of face anti-spoofing models, which
consists of a multitask module and a semantic feature augmentation (SFA)
module. The multitask module obtains different semantic features for further
evaluation, but attacking only these semantic features fails to reflect the
discrimination-related vulnerability. We therefore design the SFA module to
introduce a data distribution prior that provides more discrimination-related
gradient directions for generating adversarial examples. Comprehensive
experiments show that the SFA module increases the attack success rate by
nearly 40% on average. We conduct this fine-grained adversarial analysis on
different annotations, geometric maps, and backbone networks (e.g., ResNet).
These fine-grained adversarial examples can be used to select robust backbone
networks and auxiliary features. They can also be used for adversarial
training, which makes it practical to further improve the accuracy and
robustness of face anti-spoofing models.
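The abstract does not come with code, but the kind of attack it describes can be pictured concretely. Below is a minimal, hypothetical PyTorch-style sketch of a PGD-type attack in that spirit; the `fine_grained_attack` function, the assumption that the model returns live/spoof logits, auxiliary semantic logits, and intermediate features, and the use of class-mean features as a stand-in for the data-distribution prior are illustrative assumptions, not the authors' released implementation.

```python
# Rough, hypothetical sketch of a fine-grained PGD-style attack in the spirit
# of the abstract above; NOT the authors' implementation. We assume a multitask
# model returning (live_logits, semantic_logits, features), and approximate the
# "data distribution prior" by pushing features away from the mean feature of
# the sample's own class so the gradient stays discrimination-related.
import torch
import torch.nn.functional as F

def fine_grained_attack(model, x, y_live, y_semantic, class_feat_mean,
                        eps=8/255, alpha=2/255, steps=10, prior_weight=0.5):
    """x: input batch in [0, 1]; class_feat_mean: mean feature of the true class."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        live_logits, sem_logits, feats = model(x_adv)

        # Attack both the live/spoof head and an auxiliary semantic head.
        loss = F.cross_entropy(live_logits, y_live) + F.cross_entropy(sem_logits, y_semantic)

        # Prior term (assumption): move features away from their own class mean.
        loss = loss + prior_weight * F.mse_loss(feats, class_feat_mean.expand_as(feats))

        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()           # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)  # project into L_inf ball
        x_adv = x_adv.clamp(0, 1)
    return x_adv
```

A caller would supply a multitask face anti-spoofing model and precomputed class-mean features; the loop then ascends the combined loss while projecting into an L_inf ball around the original image.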
Related papers
- Protecting Feed-Forward Networks from Adversarial Attacks Using Predictive Coding [0.20718016474717196]
An adversarial example is a modified input image designed to cause a Machine Learning (ML) model to make a mistake.
This study presents a practical and effective solution -- using predictive coding networks (PCnets) as an auxiliary step for adversarial defence.
arXiv Detail & Related papers (2024-10-31T21:38:05Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- MirrorCheck: Efficient Adversarial Defense for Vision-Language Models [55.73581212134293]
We propose a novel, yet elegantly simple approach for detecting adversarial samples in Vision-Language Models.
Our method leverages Text-to-Image (T2I) models to generate images based on captions produced by target VLMs.
Empirical evaluations conducted on different datasets validate the efficacy of our approach.
arXiv Detail & Related papers (2024-06-13T15:55:04Z)
- Adversarial Attacks on Both Face Recognition and Face Anti-spoofing Models [47.72177312801278]
Adversarial attacks on Face Recognition (FR) systems have proven highly effective in compromising pure FR models.
We propose a novel setting of adversarially attacking both FR and Face Anti-Spoofing (FAS) models simultaneously.
We introduce a new attack method, namely Style-aligned Distribution Biasing (SDB), to improve the capacity of black-box attacks on both FR and FAS models.
arXiv Detail & Related papers (2024-05-27T08:30:29Z)
- A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing [13.725319422213623]
Face anti-spoofing (FAS) is indispensable for a face recognition system.
We propose Geometry-Aware Interaction Network (GAIN) to distinguish between normal and abnormal movements of live and spoof presentations.
Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations.
arXiv Detail & Related papers (2023-06-25T18:59:52Z)
- Improving the Transferability of Adversarial Attacks on Face Recognition with Beneficial Perturbation Feature Augmentation [26.032639566914114]
Face recognition (FR) models can be easily fooled by adversarial examples, which are crafted by adding imperceptible perturbations on benign face images.
In this paper, we improve the transferability of adversarial face examples to expose more blind spots of existing FR models.
We propose a novel attack method called Beneficial Perturbation Feature Augmentation Attack (BPFA).
arXiv Detail & Related papers (2022-10-28T13:25:59Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose a Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Towards Defending against Adversarial Examples via Attack-Invariant Features [147.85346057241605]
Deep neural networks (DNNs) are vulnerable to adversarial noise.
Adversarial robustness can be improved by exploiting adversarial examples.
Models trained on seen types of adversarial examples generally cannot generalize well to unseen types of adversarial examples.
arXiv Detail & Related papers (2021-06-09T12:49:54Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models (see the sketch after this list).
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
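The DFANet entry above describes inserting dropout into convolutional layers so that each forward pass behaves like a slightly different surrogate model. A minimal sketch of that general idea, assuming a torchvision ResNet-50 surrogate and a single FGSM step (the helpers `add_conv_dropout` and `fgsm_step` are hypothetical names, not the paper's code), might look like:

```python
# Hypothetical sketch of dropout-based surrogate diversification in the spirit
# of the DFANet entry above; not the authors' implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models import resnet50

def add_conv_dropout(module, p=0.1):
    """Recursively wrap every Conv2d in a (Conv2d, Dropout2d) pair."""
    for name, child in module.named_children():
        if isinstance(child, nn.Conv2d):
            setattr(module, name, nn.Sequential(child, nn.Dropout2d(p)))
        else:
            add_conv_dropout(child, p)
    return module

surrogate = add_conv_dropout(resnet50(weights="IMAGENET1K_V1"))
surrogate.eval()
for m in surrogate.modules():
    if isinstance(m, nn.Dropout2d):
        m.train()  # keep only the injected dropout stochastic at attack time

def fgsm_step(x, y, eps=4/255):
    """One FGSM step against the (randomized) surrogate."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(surrogate(x), y)
    grad = torch.autograd.grad(loss, x)[0]
    return (x + eps * grad.sign()).clamp(0, 1).detach()
```

Because the dropout masks are re-sampled on every forward pass, repeated attack iterations effectively average gradients over an ensemble of perturbed surrogates, which is the intuition behind the improved transferability reported in that line of work.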