Cross Modal Focal Loss for RGBD Face Anti-Spoofing
- URL: http://arxiv.org/abs/2103.00948v1
- Date: Mon, 1 Mar 2021 12:22:44 GMT
- Title: Cross Modal Focal Loss for RGBD Face Anti-Spoofing
- Authors: Anjith George and Sebastien Marcel
- Abstract summary: We present a new framework for presentation attack detection (PAD) that uses RGB and depth channels together with a novel loss function.
The new architecture uses complementary information from the two modalities while reducing the impact of overfitting.
- Score: 4.36572039512405
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Automatic methods for detecting presentation attacks are essential to ensure
the reliable use of facial recognition technology. Most of the methods
available in the literature for presentation attack detection (PAD) fail to
generalize to unseen attacks. In recent years, multi-channel methods have
been proposed to improve the robustness of PAD systems. Often, only a limited
amount of data is available for additional channels, which limits the
effectiveness of these methods. In this work, we present a new framework for
PAD that uses RGB and depth channels together with a novel loss function. The
new architecture uses complementary information from the two modalities while
reducing the impact of overfitting. Essentially, a cross-modal focal loss
function is proposed to modulate the loss contribution of each channel as a
function of the confidence of individual channels. Extensive evaluations on two
publicly available datasets demonstrate the effectiveness of the proposed
approach.
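As a hedged illustration of the idea in the abstract, the sketch below implements one plausible reading of a cross-modal focal loss: each channel's cross-entropy term is modulated by the confidence of the other channel, so a branch is penalized less when its sibling already classifies the sample confidently. The function names, the `(1 - p) ** gamma` modulation, and the two-probability interface are assumptions for illustration, not the paper's exact formulation.

```python
import numpy as np

def focal_weight(p_true, gamma=2.0):
    """Focal-style modulation: down-weight terms where confidence is high."""
    return (1.0 - p_true) ** gamma

def cross_modal_focal_loss(p_rgb, p_depth, gamma=2.0):
    """Illustrative two-channel loss for an RGB-D PAD system.

    p_rgb, p_depth: each branch's predicted probability for the true class.
    Each branch's cross-entropy is weighted by the OTHER branch's focal
    weight, so a branch contributes little when its sibling is already
    confident -- one way to "modulate the loss contribution of each channel
    as a function of the confidence of individual channels".
    """
    ce_rgb = -np.log(p_rgb)
    ce_depth = -np.log(p_depth)
    w_rgb = focal_weight(p_depth, gamma)    # RGB loss scaled by depth confidence
    w_depth = focal_weight(p_rgb, gamma)    # depth loss scaled by RGB confidence
    return w_rgb * ce_rgb + w_depth * ce_depth
```

With this weighting, a sample that the depth branch already handles confidently contributes almost nothing to the RGB branch's loss, which is one way to limit overfitting on the weaker channel.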
Related papers
- Cross-Modality Perturbation Synergy Attack for Person Re-identification [66.48494594909123]
The main challenge in cross-modality ReID lies in effectively dealing with visual differences between different modalities.
Existing attack methods have primarily focused on the characteristics of the visible image modality.
This study proposes a universal perturbation attack specifically designed for cross-modality ReID.
arXiv Detail & Related papers (2024-01-18T15:56:23Z)
- Small Object Detection via Coarse-to-fine Proposal Generation and Imitation Learning [52.06176253457522]
We propose a two-stage framework tailored for small object detection based on the Coarse-to-fine pipeline and Feature Imitation learning.
CFINet achieves state-of-the-art performance on the large-scale small object detection benchmarks, SODA-D and SODA-A.
arXiv Detail & Related papers (2023-08-18T13:13:09Z)
- Learning from Multi-Perception Features for Real-World Image Super-resolution [87.71135803794519]
We propose a novel SR method called MPF-Net that leverages multiple perceptual features of input images.
Our method incorporates a Multi-Perception Feature Extraction (MPFE) module to extract diverse perceptual information.
We also introduce a contrastive regularization term (CR) that improves the model's learning capability.
arXiv Detail & Related papers (2023-05-26T07:35:49Z)
- Efficient Deep Unfolding for SISO-OFDM Channel Estimation [0.0]
It is possible to perform SISO-OFDM channel estimation using sparse recovery techniques.
In this paper, an unfolded neural network is used to lighten this constraint.
Its unsupervised online learning allows it to learn the system's imperfections and thereby enhance estimation performance.
arXiv Detail & Related papers (2022-10-11T11:29:54Z)
- Disentangled Representation Learning for RF Fingerprint Extraction under Unknown Channel Statistics [77.13542705329328]
We propose a framework of disentangled representation learning (DRL) that first learns to factor the input signals into a device-relevant component and a device-irrelevant component via adversarial learning.
The implicit data augmentation in the proposed framework imposes a regularization on the RFF extractor to avoid the possible overfitting of device-irrelevant channel statistics.
Experiments validate that the proposed approach, referred to as DR-RFF, outperforms conventional methods in terms of generalizability to unknown complicated propagation environments.
arXiv Detail & Related papers (2022-08-04T15:46:48Z)
- Fingerprint Presentation Attack Detection by Channel-wise Feature Denoising [18.933809452711163]
Automatic fingerprint recognition systems (AFRSs) are vulnerable to malicious attacks.
Current Fingerprint Presentation Attack Detection methods often have poor robustness under new attack materials or sensor settings.
This paper proposes a novel Channel-wise Feature Denoising fingerprint PAD (CFD-PAD) method.
arXiv Detail & Related papers (2021-11-15T09:13:21Z)
- Learnable Multi-level Frequency Decomposition and Hierarchical Attention Mechanism for Generalized Face Presentation Attack Detection [7.324459578044212]
Face presentation attack detection (PAD) is attracting a lot of attention and playing a key role in securing face recognition systems.
We propose a dual-stream convolutional neural network (CNN) framework to deal with unseen scenarios.
We validate the design of our proposed PAD solution in a step-wise ablation study.
arXiv Detail & Related papers (2021-09-16T13:06:43Z)
- Learning to Perform Downlink Channel Estimation in Massive MIMO Systems [72.76968022465469]
We study downlink (DL) channel estimation in a Massive multiple-input multiple-output (MIMO) system.
A common approach is to use the mean value as the estimate, motivated by channel hardening.
We propose two novel estimation methods.
arXiv Detail & Related papers (2021-09-06T13:42:32Z)
- Towards Transferable Adversarial Attack against Deep Face Recognition [58.07786010689529]
Deep convolutional neural networks (DCNNs) have been found to be vulnerable to adversarial examples.
Transferable adversarial examples can severely hinder the robustness of DCNNs.
We propose DFANet, a dropout-based method used in convolutional layers, which can increase the diversity of surrogate models.
We generate a new set of adversarial face pairs that can successfully attack four commercial APIs without any queries.
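As a toy illustration of the dropout-in-convolution idea behind DFANet, the sketch below shows how resampling a dropout mask on every forward pass makes a single surrogate behave like an implicit ensemble of perturbed models, which is the stated route to surrogate diversity. The one-layer "conv", the shapes, and the dropout rate are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def surrogate_features(x, w, p=0.3):
    """One toy surrogate layer with dropout applied to its feature map.

    Each call samples a fresh inverted-dropout mask, so repeated forward
    passes through the same weights yield different feature responses --
    an implicit ensemble of slightly different surrogate models.
    """
    feat = np.maximum(x @ w, 0.0)                     # linear "conv" + ReLU
    mask = (rng.random(feat.shape) >= p) / (1.0 - p)  # inverted dropout mask
    return feat * mask

x = rng.standard_normal((8, 32))   # toy batch of flattened inputs
w = rng.standard_normal((32, 16))  # toy layer weights
```

Averaging adversarial gradients over many such randomized passes is the kind of mechanism that can improve transferability across models, since the perturbation stops overfitting one fixed feature extractor.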
arXiv Detail & Related papers (2020-04-13T06:44:33Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
The site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.