Uncertainty-Aware Physically-Guided Proxy Tasks for Unseen Domain Face
Anti-spoofing
- URL: http://arxiv.org/abs/2011.14054v1
- Date: Sat, 28 Nov 2020 03:22:26 GMT
- Title: Uncertainty-Aware Physically-Guided Proxy Tasks for Unseen Domain Face
Anti-spoofing
- Authors: Junru Wu, Xiang Yu, Buyu Liu, Zhangyang Wang, Manmohan Chandraker
- Abstract summary: Face anti-spoofing (FAS) seeks to discriminate genuine faces from fake ones arising from any type of spoofing attack.
We propose to leverage physical cues to attain better generalization on unseen domains.
- Score: 128.32381246318954
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing (FAS) seeks to discriminate genuine faces from fake ones
arising from any type of spoofing attack. Due to the wide varieties of attacks,
it is implausible to obtain training data that spans all attack types. We
propose to leverage physical cues to attain better generalization on unseen
domains. As a specific demonstration, we use physically guided proxy cues such
as depth, reflection, and material to complement our main anti-spoofing (a.k.a.
liveness detection) task, with the intuition that genuine faces across domains
have consistent face-like geometry, minimal reflection, and skin material. We
introduce a novel uncertainty-aware attention scheme that independently learns
to weigh the relative contributions of the main and proxy tasks, preventing the
over-confident issue with traditional attention modules. Further, we propose
attribute-assisted hard negative mining to disentangle liveness-irrelevant
features with liveness features during learning. We evaluate extensively on
public benchmarks with intra-dataset and inter-dataset protocols. Our method
achieves superior performance, especially in unseen-domain generalization
for FAS.
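The abstract's uncertainty-aware weighting of the main liveness task against the depth, reflection, and material proxy tasks is not spelled out here. As a minimal illustrative sketch, the following shows the well-known homoscedastic-uncertainty multi-task weighting (learned log-variances) that such schemes resemble; the function name, loss values, and weights are hypothetical and not taken from the paper:

```python
import math

def uncertainty_weighted_loss(task_losses, log_vars):
    """Combine per-task losses with learned uncertainty terms.

    Each task i contributes exp(-s_i) * L_i + s_i, where s_i = log(sigma_i^2)
    is a learnable scalar. A task the model is uncertain about (large s_i)
    is down-weighted, so no single over-confident task dominates training.
    """
    return sum(math.exp(-s) * loss + s
               for loss, s in zip(task_losses, log_vars))

# Hypothetical example: main liveness loss plus three proxy-task losses
# (depth, reflection, material), with the proxies slightly down-weighted.
losses = [0.9, 0.4, 0.6, 0.5]
log_vars = [0.0, 0.5, 0.5, 0.5]   # s = 0 gives weight 1; s = 0.5 gives ~0.61
total = uncertainty_weighted_loss(losses, log_vars)
```

Because the `s_i` are trained jointly with the network, the relative task weights are learned rather than hand-tuned, which is the spirit of the attention scheme the abstract describes.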
Related papers
- UniForensics: Face Forgery Detection via General Facial Representation [60.5421627990707]
High-level semantic features are less susceptible to perturbations and not limited to forgery-specific artifacts, thus having stronger generalization.
We introduce UniForensics, a novel deepfake detection framework that leverages a transformer-based video network, with a meta-functional face classification for enriched facial representation.
arXiv Detail & Related papers (2024-07-26T20:51:54Z)
- Imperceptible Face Forgery Attack via Adversarial Semantic Mask [59.23247545399068]
We propose an Adversarial Semantic Mask Attack framework (ASMA) which can generate adversarial examples with good transferability and invisibility.
Specifically, we propose a novel adversarial semantic mask generative model, which can constrain generated perturbations in local semantic regions for good stealthiness.
arXiv Detail & Related papers (2024-06-16T10:38:11Z)
- Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z)
- A Closer Look at Geometric Temporal Dynamics for Face Anti-Spoofing [13.725319422213623]
Face anti-spoofing (FAS) is indispensable for a face recognition system.
We propose Geometry-Aware Interaction Network (GAIN) to distinguish between normal and abnormal movements of live and spoof presentations.
Our approach achieves state-of-the-art performance in the standard intra- and cross-dataset evaluations.
arXiv Detail & Related papers (2023-06-25T18:59:52Z)
- Detecting Adversarial Faces Using Only Real Face Self-Perturbations [36.26178169550577]
Adversarial attacks aim to disturb the functionality of a target system by adding specific noise to the input samples.
Existing defense techniques achieve high accuracy in detecting some specific adversarial faces (adv-faces).
New attack methods, especially GAN-based attacks with completely different noise patterns, circumvent them and reach a higher attack success rate.
arXiv Detail & Related papers (2023-04-22T09:55:48Z)
- Learning Facial Liveness Representation for Domain Generalized Face Anti-spoofing [25.07432145233952]
Face anti-spoofing (FAS) aims at distinguishing face spoof attacks from the authentic ones.
It is not practical to assume that the type of spoof attacks would be known in advance.
We propose a deep learning model for addressing the aforementioned domain-generalized face anti-spoofing task.
arXiv Detail & Related papers (2022-08-16T16:13:24Z)
- Exposing Fine-Grained Adversarial Vulnerability of Face Anti-Spoofing Models [13.057451851710924]
Face anti-spoofing aims to discriminate the spoofing face images (e.g., printed photos) from live ones.
Previous works conducted adversarial attack methods to evaluate the face anti-spoofing performance.
We propose a novel framework to expose the fine-grained adversarial vulnerability of the face anti-spoofing models.
arXiv Detail & Related papers (2022-05-30T04:56:33Z)
- Dual Contrastive Learning for General Face Forgery Detection [64.41970626226221]
We propose a novel face forgery detection framework, named Dual Contrastive Learning (DCL), which constructs positive and negative paired data.
To explore the essential discrepancies, Intra-Instance Contrastive Learning (Intra-ICL) is introduced to focus on the local content inconsistencies prevalent in the forged faces.
arXiv Detail & Related papers (2021-12-27T05:44:40Z)
- Robust Physical-World Attacks on Face Recognition [52.403564953848544]
Face recognition has been greatly facilitated by the development of deep neural networks (DNNs).
Recent studies have shown that DNNs are very vulnerable to adversarial examples, raising serious concerns about the security of real-world face recognition.
We study sticker-based physical attacks on face recognition for better understanding its adversarial robustness.
arXiv Detail & Related papers (2021-09-20T06:49:52Z)
- Adversarial Robustness with Non-uniform Perturbations [3.804240190982695]
Prior work mainly focuses on crafting adversarial examples with small, uniform, norm-bounded perturbations across features to maintain the requirement of imperceptibility.
Our approach can be adapted to other domains where non-uniform perturbations more accurately represent realistic adversarial examples.
arXiv Detail & Related papers (2021-02-24T00:54:43Z)
This list is automatically generated from the titles and abstracts of the papers in this site.