Concept Discovery in Deep Neural Networks for Explainable Face Anti-Spoofing
- URL: http://arxiv.org/abs/2412.17541v3
- Date: Sun, 05 Jan 2025 04:42:03 GMT
- Title: Concept Discovery in Deep Neural Networks for Explainable Face Anti-Spoofing
- Authors: Haoyuan Zhang, Xiangyu Zhu, Li Gao, Guoying Zhao, Zhen Lei
- Abstract summary: Recent face anti-spoofing models can only tell people "this face is fake" while lacking an explanation for "why it is fake".
We propose SPED (SPoofing Evidence Discovery), an X-FAS method which can discover spoof concepts and provide reliable explanations on the basis of discovered concepts.
Experimental results demonstrate SPED's ability to generate reliable explanations.
- Score: 49.698180144825194
- License:
- Abstract: With the rapidly growing use of face recognition in people's daily lives, face anti-spoofing becomes increasingly important for preventing malicious attacks. Recent face anti-spoofing models can reach high classification accuracy on multiple datasets, but these models can only tell people "this face is fake" while lacking an explanation for "why it is fake". Such a system undermines trustworthiness and causes user confusion, as it denies requests without providing any explanation. In this paper, we incorporate XAI into face anti-spoofing and propose a new problem termed X-FAS (eXplainable Face Anti-Spoofing), which empowers face anti-spoofing models to provide an explanation. We propose SPED (SPoofing Evidence Discovery), an X-FAS method which can discover spoof concepts and provide reliable explanations on the basis of the discovered concepts. To evaluate the quality of X-FAS methods, we propose an X-FAS benchmark with spoofing evidence annotated by experts. We analyze SPED's explanations on a face anti-spoofing dataset and compare SPED quantitatively and qualitatively with previous XAI methods on the proposed X-FAS benchmark. Experimental results demonstrate SPED's ability to generate reliable explanations.
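The abstract does not detail SPED's implementation, so the snippet below is only a generic sketch of the underlying idea of concept-based spoofing evidence: project a model's spatial features onto a bank of discovered spoof-concept vectors and report the strongest matches as the explanation. All names (backbone, concept_bank, concept_names) are hypothetical, and this is not the authors' code.

```python
# Hypothetical sketch of concept-based spoof explanation (NOT the authors' SPED code).
# Assumes a CNN backbone and a bank of pre-discovered "spoof concept" vectors,
# e.g. obtained by clustering spoof-region features; all names are illustrative.
import torch
import torch.nn.functional as F

def explain_spoof(backbone, concept_bank, concept_names, image, top_k=3):
    """Score each spoof concept against the image's spatial features.

    backbone      : module mapping (1,3,H,W) -> (1,C,h,w) feature maps
    concept_bank  : (K,C) tensor of unit-norm concept vectors
    concept_names : list of K human-readable labels, e.g. "moire pattern"
    """
    feats = backbone(image.unsqueeze(0))        # (1, C, h, w)
    feats = feats.flatten(2).squeeze(0).T       # (h*w, C) per-location features
    feats = F.normalize(feats, dim=1)
    sims = feats @ concept_bank.T               # (h*w, K) cosine similarities
    per_concept = sims.max(dim=0).values        # strongest response per concept
    scores, idx = per_concept.topk(top_k)
    return [(concept_names[i], float(s)) for i, s in zip(idx.tolist(), scores)]
```

The returned (concept, score) pairs would serve as the human-readable evidence for a "fake" decision; how SPED actually discovers and scores its concepts is described in the paper itself.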
Related papers
- Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models [13.057451851710924]
Face anti-spoofing aims to discriminate spoofing face images (e.g., printed photos) from live ones.
Previous works applied adversarial attacks to evaluate face anti-spoofing performance.
We propose a novel framework to expose the fine-grained adversarial vulnerability of the face anti-spoofing models.
arXiv Detail & Related papers (2022-05-30T04:56:33Z)
- Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method builds a substitute model pursuing face reconstruction and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
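As a rough illustration of this query-free, transfer-based setting (not the paper's exact method), the sketch below crafts a PGD-style perturbation against an accessible substitute reconstruction model and simply forwards the perturbed face to the black-box model; all names are hypothetical.

```python
# Generic transfer-attack sketch (illustrative only, not the paper's exact method):
# craft a perturbation on an accessible substitute model, then feed the perturbed
# image to the inaccessible black-box model. All names are hypothetical.
import torch

def craft_on_substitute(substitute, image, epsilon=8 / 255, steps=10, alpha=2 / 255):
    """PGD-style perturbation that maximizes the substitute's reconstruction loss."""
    adv = image.clone().detach()
    for _ in range(steps):
        adv.requires_grad_(True)
        recon = substitute(adv)                   # substitute pursues face reconstruction
        loss = torch.nn.functional.mse_loss(recon, image)
        grad, = torch.autograd.grad(loss, adv)
        adv = adv.detach() + alpha * grad.sign()  # ascend the reconstruction loss
        adv = image + (adv - image).clamp(-epsilon, epsilon)
        adv = adv.clamp(0, 1)
    return adv.detach()

# Usage sketch:
#   adv = craft_on_substitute(substitute_model, face)
#   output = blackbox_deepfake(adv)   # no queries were needed to craft `adv`
```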
arXiv Detail & Related papers (2022-04-26T14:36:06Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose the Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Aurora Guard: Reliable Face Anti-Spoofing via Mobile Lighting System [103.5604680001633]
Anti-spoofing against high-resolution rendering replay of paper photos or digital videos remains an open problem.
We propose a simple yet effective face anti-spoofing system, termed Aurora Guard (AG).
arXiv Detail & Related papers (2021-02-01T09:17:18Z)
- On Disentangling Spoof Trace for Generic Face Anti-Spoofing [24.75975874643976]
The key to face anti-spoofing lies in subtle image patterns, termed "spoof traces".
This work designs a novel adversarial learning framework to disentangle the spoof traces from input faces.
arXiv Detail & Related papers (2020-07-17T23:14:16Z)
- Look Locally Infer Globally: A Generalizable Face Anti-Spoofing Approach [53.86588268914105]
State-of-the-art spoof detection methods tend to overfit to the spoof types seen during training and fail to generalize to unknown spoof types.
We propose a Self-Supervised Regional Fully Convolutional Network (SSR-FCN) that is trained to learn local discriminative cues from a face image in a self-supervised manner.
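A minimal sketch of this "look locally, infer globally" idea (hypothetical names and architecture, not the actual SSR-FCN code): a fully convolutional head scores each local region independently, and the image-level decision aggregates the regional scores.

```python
# Illustrative sketch of regional spoof scoring (not SSR-FCN's implementation).
import torch
import torch.nn as nn

class RegionalSpoofScorer(nn.Module):
    def __init__(self, backbone, feat_channels=256):
        super().__init__()
        self.backbone = backbone                                 # (B,3,H,W) -> (B,C,h,w)
        self.head = nn.Conv2d(feat_channels, 1, kernel_size=1)   # per-region spoof logit

    def forward(self, x):
        score_map = self.head(self.backbone(x))          # (B,1,h,w) local spoof evidence
        global_score = score_map.flatten(1).mean(dim=1)  # aggregate local cues globally
        return global_score, score_map                   # score_map also localizes the evidence
```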
arXiv Detail & Related papers (2020-06-04T13:11:17Z)
- Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth-supervised learning has proven to be one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)