Face Anti-Spoofing by Learning Polarization Cues in a Real-World
Scenario
- URL: http://arxiv.org/abs/2003.08024v3
- Date: Thu, 16 Jun 2022 12:53:28 GMT
- Title: Face Anti-Spoofing by Learning Polarization Cues in a Real-World
Scenario
- Authors: Yu Tian, Kunbo Zhang, Leyuan Wang, Zhenan Sun
- Abstract summary: Face anti-spoofing is the key to preventing security breaches in biometric recognition applications.
Deep learning methods using RGB and infrared images demand a large amount of training data for new attacks.
We present a face anti-spoofing method for a real-world scenario that automatically learns the physical characteristics in polarization images of a real face.
- Score: 50.36920272392624
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing is the key to preventing security breaches in biometric
recognition applications. Existing software-based and hardware-based face
liveness detection methods are effective in constrained environments or
designated datasets only. Deep learning method using RGB and infrared images
demands a large amount of training data for new attacks. In this paper, we
present a face anti-spoofing method in a real-world scenario by automatic
learning the physical characteristics in polarization images of a real face
compared to a deceptive attack. A computational framework is developed to
extract and classify the unique face features using convolutional neural
networks and SVM together. Our real-time polarized face anti-spoofing (PAAS)
detection method uses an on-chip integrated polarization imaging sensor with
optimized processing algorithms. Extensive experiments demonstrate the
advantages of the PAAS technique to counter diverse face spoofing attacks
(print, replay, mask) in uncontrolled indoor and outdoor conditions by learning
polarized face images of 33 people. A four-directional polarized face image
dataset is released to inspire future applications within biometric
anti-spoofing field.
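As background for the four-directional polarized face images the paper releases, the degree of linear polarization (DoLP) is conventionally computed from intensity images captured at 0°, 45°, 90°, and 135° via the Stokes parameters. The sketch below is illustrative (function and variable names are our own, not from the paper), assuming the four images are aligned arrays of equal shape:

```python
import numpy as np

def stokes_dolp(i0, i45, i90, i135, eps=1e-8):
    """Degree of linear polarization from four polarization-angle
    intensity images (0, 45, 90, 135 degrees).

    s0: total intensity; s1, s2: linear polarization components.
    eps avoids division by zero in unlit pixels.
    """
    i0, i45 = np.asarray(i0, float), np.asarray(i45, float)
    i90, i135 = np.asarray(i90, float), np.asarray(i135, float)
    s0 = 0.5 * (i0 + i45 + i90 + i135)  # total intensity
    s1 = i0 - i90                        # horizontal vs. vertical
    s2 = i45 - i135                      # +45 vs. -45 diagonal
    return np.sqrt(s1 ** 2 + s2 ** 2) / (s0 + eps)

# Unpolarized light: equal intensity at all angles -> DoLP near 0.
flat = np.ones((4, 4))
print(stokes_dolp(flat, flat, flat, flat).max())

# Fully polarized at 0 degrees: all energy in the 0-degree channel.
print(stokes_dolp(np.ones((4, 4)), 0.5 * flat,
                  np.zeros((4, 4)), 0.5 * flat).mean())
```

A DoLP map like this is the kind of physical cue that differs between skin and print/replay/mask materials; the paper's actual pipeline then feeds such polarization features to a CNN and SVM for classification.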
Related papers
- CLIPC8: Face liveness detection algorithm based on image-text pairs and
contrastive learning [3.90443799528247]
We propose a face liveness detection method based on image-text pairs and contrastive learning.
The proposed method is capable of effectively detecting specific liveness attack behaviors in certain scenarios.
It is also effective in detecting traditional liveness attack methods, such as printing photo attacks and screen remake attacks.
arXiv Detail & Related papers (2023-11-29T12:21:42Z) - Exploring Decision-based Black-box Attacks on Face Forgery Detection [53.181920529225906]
Face forgery generation technologies generate vivid faces, which have raised public concerns about security and privacy.
Although face forgery detection has successfully distinguished fake faces, recent studies have demonstrated that face forgery detectors are very vulnerable to adversarial examples.
arXiv Detail & Related papers (2023-10-18T14:49:54Z) - Building an Invisible Shield for Your Portrait against Deepfakes [34.65356811439098]
We propose a novel framework - Integrity Encryptor - that protects portraits through a proactive strategy.
Our methodology involves covertly encoding messages that are closely associated with key facial attributes into authentic images.
The modified facial attributes serve as a means of detecting manipulated images through a comparison of the decoded messages.
arXiv Detail & Related papers (2023-05-22T10:01:28Z) - Information-containing Adversarial Perturbation for Combating Facial
Manipulation Systems [19.259372985094235]
Malicious applications of deep learning systems pose a serious threat to individuals' privacy and reputation.
We propose a novel two-tier protection method named Information-containing Adversarial Perturbation (IAP)
We use an encoder to map a facial image and its identity message to a cross-model adversarial example which can disrupt multiple facial manipulation systems.
arXiv Detail & Related papers (2023-03-21T06:48:14Z) - Reliable Face Morphing Attack Detection in On-The-Fly Border Control
Scenario with Variation in Image Resolution and Capture Distance [3.6833521970861685]
Face morphing attacks have high potential to deceive automatic FRS and human observers.
We present a novel Differential-MAD (D-MAD) algorithm based on the spherical distances and hierarchical fusion of deep features.
Experiments are carried out on the newly generated face morphing dataset (SCFace-Morph) based on the publicly available SCFace dataset.
arXiv Detail & Related papers (2022-09-30T13:58:43Z) - Restricted Black-box Adversarial Attack Against DeepFake Face Swapping [70.82017781235535]
We introduce a practical adversarial attack that does not require any queries to the facial image forgery model.
Our method is built on a substitute model trained for face reconstruction, and then transfers adversarial examples from the substitute model directly to inaccessible black-box DeepFake models.
arXiv Detail & Related papers (2022-04-26T14:36:06Z) - End2End Occluded Face Recognition by Masking Corrupted Features [82.27588990277192]
State-of-the-art general face recognition models do not generalize well to occluded face images.
This paper presents a novel face recognition method that is robust to occlusions based on a single end-to-end deep neural network.
Our approach, named FROM (Face Recognition with Occlusion Masks), learns to discover the corrupted features from the deep convolutional neural networks, and clean them by the dynamically learned masks.
arXiv Detail & Related papers (2021-08-21T09:08:41Z) - Aurora Guard: Reliable Face Anti-Spoofing via Mobile Lighting System [103.5604680001633]
Anti-spoofing against high-resolution rendering replay of paper photos or digital videos remains an open problem.
We propose a simple yet effective face anti-spoofing system, termed Aurora Guard (AG)
arXiv Detail & Related papers (2021-02-01T09:17:18Z) - Deep Spatial Gradient and Temporal Depth Learning for Face Anti-spoofing [61.82466976737915]
Depth supervised learning has been proven as one of the most effective methods for face anti-spoofing.
We propose a new approach to detect presentation attacks from multiple frames based on two insights.
The proposed approach achieves state-of-the-art results on five benchmark datasets.
arXiv Detail & Related papers (2020-03-18T06:11:20Z)
This list is automatically generated from the titles and abstracts of the papers in this site.