Robust Face Anti-Spoofing with Dual Probabilistic Modeling
- URL: http://arxiv.org/abs/2204.12685v1
- Date: Wed, 27 Apr 2022 03:44:18 GMT
- Title: Robust Face Anti-Spoofing with Dual Probabilistic Modeling
- Authors: Yuanhan Zhang, Yichao Wu, Zhenfei Yin, Jing Shao, Ziwei Liu
- Abstract summary: We propose a unified framework called Dual Probabilistic Modeling (DPM), with two dedicated modules, DPM-LQ (Label Quality aware learning) and DPM-DQ (Data Quality aware learning).
DPM-LQ is able to produce robust feature representations without overfitting to the distribution of noisy semantic labels.
DPM-DQ can eliminate data noise from 'False Reject' and 'False Accept' during inference by correcting the prediction confidence of noisy data based on its quality distribution.
- Score: 49.14353429234298
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: The field of face anti-spoofing (FAS) has witnessed great progress with the
surge of deep learning. Due to its data-driven nature, existing FAS methods are
sensitive to noise in the dataset, which hinders the learning process.
However, very few works consider noise modeling in FAS. In this work, we
attempt to fill this gap by automatically addressing the noise problem from
both label and data perspectives in a probabilistic manner. Specifically, we
propose a unified framework called Dual Probabilistic Modeling (DPM), with two
dedicated modules, DPM-LQ (Label Quality aware learning) and DPM-DQ (Data
Quality aware learning). Both modules are designed based on the assumption that
data and label should form coherent probabilistic distributions. DPM-LQ is able
to produce robust feature representations without overfitting to the
distribution of noisy semantic labels. DPM-DQ can eliminate data noise from
'False Reject' and 'False Accept' during inference by correcting the prediction
confidence of noisy data based on its quality distribution. Both modules can be
incorporated into existing deep networks seamlessly and efficiently.
Furthermore, we propose the generalized DPM to address the noise problem in
practical usage without the need for semantic annotations. Extensive experiments
demonstrate that this probabilistic modeling can 1) significantly improve accuracy, and 2) make the model robust to noise in real-world datasets.
Without bells and whistles, our proposed DPM achieves state-of-the-art
performance on multiple standard FAS benchmarks.
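As a rough illustration of the two modules, the sketch below treats DPM-LQ as training against a softened label distribution whose sharpness tracks the estimated label quality, and DPM-DQ as shrinking a prediction's logit toward the decision boundary when a per-sample variance signals low data quality. This is a minimal sketch under assumed interfaces (the function names, the variance-based quality proxy, and the label-smoothing form are illustrative assumptions, not the paper's released implementation).

```python
import numpy as np

# Hypothetical sketch of the two DPM modules. The exact quality and
# label distributions here are assumptions, not the paper's formulation.

def soft_target(label: int, label_quality: float, num_classes: int = 2) -> np.ndarray:
    """DPM-LQ-style target: treat a possibly noisy hard label as a
    distribution whose sharpness follows the estimated label quality,
    so training never fully commits to a wrong annotation."""
    target = np.full(num_classes, (1.0 - label_quality) / num_classes)
    target[label] += label_quality
    return target  # a valid probability distribution over classes

def corrected_confidence(logit: float, variance: float) -> float:
    """DPM-DQ-style inference correction: shrink the spoof/live logit
    of a low-quality (high-variance) sample toward the decision
    boundary, softening 'False Reject' / 'False Accept' errors."""
    quality = 1.0 / (1.0 + variance)          # in (0, 1]; 1 = clean input
    return 1.0 / (1.0 + np.exp(-quality * logit))

print(soft_target(label=1, label_quality=0.9))         # [0.05 0.95]
print(corrected_confidence(logit=4.0, variance=0.05))  # ~0.978 (clean sample)
print(corrected_confidence(logit=4.0, variance=3.0))   # ~0.731 (noisy sample)
```

Here label_quality and variance stand in for quantities the paper derives from its probabilistic model; the point is that both corrections are cheap per-sample operations that bolt onto an existing network, consistent with the abstract's claim of seamless integration.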
Related papers
- Meta-DiffuB: A Contextualized Sequence-to-Sequence Text Diffusion Model with Meta-Exploration [53.63593099509471]
We propose a scheduler-exploiter S2S-Diffusion paradigm designed to overcome the limitations of existing S2S-Diffusion models.
We employ Meta-Exploration to train an additional scheduler model dedicated to scheduling contextualized noise for each sentence.
Our exploiter model, an S2S-Diffusion model, leverages the noise scheduled by our scheduler model for updating and generation.
arXiv Detail & Related papers (2024-10-17T04:06:02Z)
- Towards a Theoretical Understanding of Memorization in Diffusion Models [76.85077961718875]
Diffusion probabilistic models (DPMs) are being employed as mainstream models for Generative Artificial Intelligence (GenAI).
We provide a theoretical understanding of memorization in both conditional and unconditional DPMs under the assumption of model convergence.
We propose a novel data extraction method named Surrogate condItional Data Extraction (SIDE) that leverages a time-dependent classifier trained on the generated data as a surrogate condition to extract training data from unconditional DPMs.
arXiv Detail & Related papers (2024-10-03T13:17:06Z)
- Prototype based Masked Audio Model for Self-Supervised Learning of Sound Event Detection [22.892382672888488]
Semi-supervised algorithms rely on labeled data to learn from unlabeled data.
We introduce the Prototype based Masked Audio Model (PMAM) algorithm for self-supervised representation learning in SED.
arXiv Detail & Related papers (2024-09-26T09:07:20Z)
- DiffImpute: Tabular Data Imputation With Denoising Diffusion Probabilistic Model [9.908561639396273]
We propose DiffImpute, a novel Denoising Diffusion Probabilistic Model (DDPM) for tabular data imputation.
It produces credible imputations for missing entries without undermining the authenticity of the existing data.
It can be applied to various settings of Missing Completely At Random (MCAR) and Missing At Random (MAR).
arXiv Detail & Related papers (2024-03-20T08:45:31Z)
- Contractive Diffusion Probabilistic Models [5.217870815854702]
Diffusion probabilistic models (DPMs) have emerged as a promising technique in generative modeling.
We propose a new criterion, the contraction property of backward sampling, for the design of DPMs, leading to a novel class of contractive DPMs (CDPMs).
We show that a CDPM can leverage the weights of pretrained DPMs via a simple transformation and does not need retraining.
arXiv Detail & Related papers (2024-01-23T21:51:51Z)
- ReSup: Reliable Label Noise Suppression for Facial Expression Recognition [20.74719263734951]
We propose a more reliable label-noise suppression method called ReSup.
To achieve optimal distribution modeling, ReSup models the similarity distribution of all samples.
To further enhance the reliability of our noise decision results, ReSup uses two networks to jointly achieve noise suppression.
arXiv Detail & Related papers (2023-05-29T06:02:06Z)
- On Calibrating Diffusion Probabilistic Models [78.75538484265292]
Diffusion probabilistic models (DPMs) have achieved promising results in diverse generative tasks.
We propose a simple way for calibrating an arbitrary pretrained DPM, with which the score matching loss can be reduced and the lower bounds of model likelihood can be increased.
Our calibration method is performed only once and the resulting models can be used repeatedly for sampling.
arXiv Detail & Related papers (2023-02-21T14:14:40Z)
- Confidence-based Reliable Learning under Dual Noises [46.45663546457154]
Deep neural networks (DNNs) have achieved remarkable success in a variety of computer vision tasks.
Yet, the data collected from the open world are unavoidably polluted by noise, which may significantly undermine the efficacy of the learned models.
Various attempts have been made to reliably train DNNs under data noise, but they separately account for either the noise existing in the labels or that existing in the images.
This work provides a first, unified framework for reliable learning under joint (image, label) noise.
arXiv Detail & Related papers (2023-02-10T07:50:34Z)
- Noise-Resistant Deep Metric Learning with Probabilistic Instance Filtering [59.286567680389766]
Noisy labels are commonly found in real-world data, which cause performance degradation of deep neural networks.
We propose the Probabilistic Ranking-based Instance Selection with Memory (PRISM) approach for DML.
PRISM calculates the probability of a label being clean, and filters out potentially noisy samples.
arXiv Detail & Related papers (2021-08-03T12:15:25Z)
- Tackling Instance-Dependent Label Noise via a Universal Probabilistic Model [80.91927573604438]
This paper proposes a simple yet universal probabilistic model, which explicitly relates noisy labels to their instances.
Experiments on datasets with both synthetic and real-world label noise verify that the proposed method yields significant improvements on robustness.
arXiv Detail & Related papers (2021-01-14T05:43:51Z)