Fingerprint Presentation Attack Detection by Channel-wise Feature
Denoising
- URL: http://arxiv.org/abs/2111.07620v1
- Date: Mon, 15 Nov 2021 09:13:21 GMT
- Title: Fingerprint Presentation Attack Detection by Channel-wise Feature
Denoising
- Authors: Feng Liu, Zhe Kong, Haozhe Liu, Wentian Zhang, Linlin Shen
- Abstract summary: Fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks.
Current Fingerprint Presentation Attack Detection methods often have poor robustness under new attack materials or sensor settings.
This paper proposes a novel Channel-wise Feature Denoising fingerprint PAD (CFD-PAD) method.
- Score: 18.933809452711163
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Due to the diversity of attack materials, automatic fingerprint
recognition systems (AFRSs) are vulnerable to malicious attacks. Effective
Fingerprint Presentation Attack Detection (PAD) methods are therefore essential
for the safety and reliability of AFRSs. However, current PAD methods often
show poor robustness under new attack materials or sensor settings. This paper
thus proposes a novel Channel-wise Feature Denoising fingerprint PAD (CFD-PAD)
method that handles the redundant "noise" information ignored in previous
works. The proposed method learns important features of fingerprint images by
weighting the importance of each channel and separating discriminative channels
from "noise" channels. The propagation of the "noise" channels is then
suppressed in the feature map to reduce interference. Specifically, a
PA-Adaption loss is designed to constrain the feature distribution so that the
features of live fingerprints become more compact and those of spoof
fingerprints more dispersed. Experimental results on LivDet 2017 show that the
proposed CFD-PAD achieves a 2.53% Average Classification Error (ACE) and a
93.83% True Detection Rate at a False Detection Rate of 1.0% (TDR@FDR=1%),
significantly outperforming the best single-model-based method in terms of ACE
(2.53% vs. 4.56%) and TDR@FDR=1% (93.83% vs. 73.32%), which demonstrates the
effectiveness of the proposed method. Compared with the state-of-the-art
multiple-model-based method, our result is comparable overall, and TDR@FDR=1%
still improves from 91.19% to 93.83%. Moreover, our model is simpler, lighter,
and more efficient, with a 74.76% reduction in time consumption compared with
the state-of-the-art multiple-model-based method. Code will be publicly
available.
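The abstract describes two mechanisms: channel-wise weighting that suppresses "noise" channels in the feature map, and a PA-Adaption loss that makes live features compact and spoof features dispersed. Below is a minimal PyTorch-style sketch of these two ideas; the squeeze-and-excitation-style channel scoring, the fixed keep ratio, the margin-based loss, and the label encoding are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ChannelDenoise(nn.Module):
    """Weight the importance of each channel of a feature map, keep the
    most discriminative channels, and suppress the remaining "noise"
    channels. The global-pool + FC scoring and the fixed keep ratio are
    assumptions for illustration."""
    def __init__(self, channels: int, keep_ratio: float = 0.5):
        super().__init__()
        self.fc = nn.Linear(channels, channels)
        self.keep = max(1, int(channels * keep_ratio))

    def forward(self, feat: torch.Tensor) -> torch.Tensor:
        # feat: (B, C, H, W) -> per-channel importance scores in (0, 1)
        score = torch.sigmoid(self.fc(feat.mean(dim=(2, 3))))
        # Hard mask: 1 for the top-k discriminative channels, 0 for "noise"
        idx = score.topk(self.keep, dim=1).indices
        mask = torch.zeros_like(score).scatter_(1, idx, 1.0)
        # Suppress noise channels and re-weight the kept ones
        return feat * (score * mask).unsqueeze(-1).unsqueeze(-1)


def pa_adaption_like_loss(emb, label, live_center, margin=1.0):
    """Illustrative stand-in for the PA-Adaption loss: pull live
    embeddings toward a live-class center and push spoof embeddings at
    least `margin` away from it, so live features aggregate and spoof
    features disperse. The exact formulation in the paper may differ."""
    d = (emb - live_center).pow(2).sum(dim=1)   # squared distance to live center
    live = label == 1                           # assumed encoding: 1 = live, 0 = spoof
    loss_live = d[live].mean() if live.any() else emb.new_tensor(0.0)
    loss_spoof = F.relu(margin - d[~live]).mean() if (~live).any() else emb.new_tensor(0.0)
    return loss_live + loss_spoof


if __name__ == "__main__":
    feat = torch.randn(4, 64, 28, 28)           # toy backbone feature map
    denoised = ChannelDenoise(64)(feat)
    emb = denoised.mean(dim=(2, 3))             # toy embedding
    loss = pa_adaption_like_loss(emb, torch.tensor([1, 0, 1, 0]),
                                 live_center=emb.new_zeros(64))
    print(denoised.shape, loss.item())
```

In practice the denoising module would sit after a backbone stage and the loss would be combined with a standard classification loss; the placement and weighting are design choices the abstract does not specify.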
Related papers
- A Multi-Modal Approach for Face Anti-Spoofing in Non-Calibrated Systems using Disparity Maps [0.6144680854063939]
Face recognition technologies are vulnerable to face spoofing attacks.
Stereo-depth cameras can detect such attacks effectively, but their high cost limits their widespread adoption.
We propose a method to overcome this challenge by leveraging facial attributes to derive disparity information.
arXiv Detail & Related papers (2024-10-31T15:29:51Z) - Lazy Layers to Make Fine-Tuned Diffusion Models More Traceable [70.77600345240867]
A novel arbitrary-in-arbitrary-out (AIAO) strategy makes watermarks resilient to fine-tuning-based removal.
Unlike existing methods that design a backdoor in the input/output space of diffusion models, we propose embedding the backdoor into the feature space of sampled subpaths.
Our empirical studies on the MS-COCO, AFHQ, LSUN, CUB-200, and DreamBooth datasets confirm the robustness of AIAO.
arXiv Detail & Related papers (2024-05-01T12:03:39Z) - ODDR: Outlier Detection & Dimension Reduction Based Defense Against Adversarial Patches [4.4100683691177816]
Adversarial attacks present a significant challenge to the dependable deployment of machine learning models.
We propose Outlier Detection and Dimension Reduction (ODDR), a comprehensive defense strategy to counteract patch-based adversarial attacks.
Our approach is based on the observation that input features corresponding to adversarial patches can be identified as outliers.
arXiv Detail & Related papers (2023-11-20T11:08:06Z) - DiffAttack: Evasion Attacks Against Diffusion-Based Adversarial
Purification [63.65630243675792]
Diffusion-based purification defenses leverage diffusion models to remove crafted perturbations of adversarial examples.
Recent studies show that even advanced attacks cannot break such defenses effectively.
We propose a unified framework DiffAttack to perform effective and efficient attacks against diffusion-based purification defenses.
arXiv Detail & Related papers (2023-10-27T15:17:50Z) - A Universal Anti-Spoofing Approach for Contactless Fingerprint Biometric
Systems [0.0]
We propose a universal presentation attack detection method for contactless fingerprints.
We generate synthetic contactless fingerprints with StyleGAN from live finger photos and integrate them to train a semi-supervised ResNet-18 model.
A novel joint loss function combining the Arcface and Center losses is introduced, with a regularization term to balance the two (a generic sketch follows the related-papers list).
arXiv Detail & Related papers (2023-10-23T15:46:47Z) - Diffusion Denoising Process for Perceptron Bias in Out-of-distribution
Detection [67.49587673594276]
We introduce a new perceptron bias assumption that suggests discriminator models are more sensitive to certain features of the input, leading to the overconfidence problem.
We demonstrate that the diffusion denoising process (DDP) of DMs serves as a novel form of asymmetric interpolation, which is well-suited to enhance the input and mitigate the overconfidence problem.
Our experiments on CIFAR10, CIFAR100, and ImageNet show that our method outperforms SOTA approaches.
arXiv Detail & Related papers (2022-11-21T08:45:08Z) - Guided Diffusion Model for Adversarial Purification [103.4596751105955]
Adversarial attacks disturb deep neural networks (DNNs) in various algorithms and frameworks.
We propose a novel purification approach, referred to as guided diffusion model for purification (GDMP)
On our comprehensive experiments across various datasets, the proposed GDMP is shown to reduce the perturbations raised by adversarial attacks to a shallow range.
arXiv Detail & Related papers (2022-05-30T10:11:15Z) - Diffusion Models for Adversarial Purification [69.1882221038846]
Adversarial purification refers to a class of defense methods that remove adversarial perturbations using a generative model.
We propose DiffPure that uses diffusion models for adversarial purification.
Our method achieves the state-of-the-art results, outperforming current adversarial training and adversarial purification methods.
arXiv Detail & Related papers (2022-05-16T06:03:00Z) - Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z) - Cross Modal Focal Loss for RGBD Face Anti-Spoofing [4.36572039512405]
We present a new framework for presentation attack detection (PAD) that uses RGB and depth channels together with a novel loss function.
The new architecture uses complementary information from the two modalities while reducing the impact of overfitting.
arXiv Detail & Related papers (2021-03-01T12:22:44Z)
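The universal anti-spoofing entry above mentions a joint Arcface + Center loss balanced by a regularization weight. A small, generic sketch of such a combination is shown below; the class count, margin, scale, and the balancing weight lambda_center are illustrative assumptions, not values taken from that paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class ArcFaceCenterLoss(nn.Module):
    """Generic ArcFace + Center loss combination: ArcFace adds an angular
    margin to the target logit before softmax cross-entropy, Center loss
    pulls embeddings toward their class centers, and lambda_center
    balances the two terms."""
    def __init__(self, num_classes, emb_dim, s=30.0, m=0.5, lambda_center=0.01):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, emb_dim))
        self.centers = nn.Parameter(torch.randn(num_classes, emb_dim))
        self.s, self.m, self.lambda_center = s, m, lambda_center

    def forward(self, emb, label):
        # ArcFace term: cosine logits with an additive angular margin on the target class
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        theta = torch.acos(cos)
        target = F.one_hot(label, cos.size(1)).bool()
        logits = self.s * torch.where(target, torch.cos(theta + self.m), cos)
        arc_loss = F.cross_entropy(logits, label)
        # Center term: squared distance of each embedding to its class center
        center_loss = (emb - self.centers[label]).pow(2).sum(dim=1).mean()
        return arc_loss + self.lambda_center * center_loss
```

A single scalar weight is the simplest way to balance the two losses; the referenced paper may use a different balancing scheme.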
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.