Dual Reweighting Domain Generalization for Face Presentation Attack
Detection
- URL: http://arxiv.org/abs/2106.16128v1
- Date: Wed, 30 Jun 2021 15:24:34 GMT
- Title: Dual Reweighting Domain Generalization for Face Presentation Attack
Detection
- Authors: Shubao Liu, Ke-Yue Zhang, Taiping Yao, Kekai Sheng, Shouhong Ding,
Ying Tai, Jilin Li, Yuan Xie, Lizhuang Ma
- Abstract summary: Face anti-spoofing approaches based on domain generalization (DG) have drawn growing attention due to their robustness to unseen scenarios.
Previous methods treat each sample from multiple domains indiscriminately during the training process.
We propose a novel Dual Reweighting Domain Generalization framework that iteratively reweights the relative importance of samples to further improve generalization.
- Score: 40.63170532438904
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing approaches based on domain generalization (DG) have drawn
growing attention due to their robustness to unseen scenarios. Previous
methods treat every sample from multiple domains indiscriminately during
training and strive to extract a common feature space to improve
generalization. However, because the data distribution is complex and biased,
treating all samples equally degrades the generalization ability. To address
this issue, we propose a novel Dual Reweighting Domain Generalization (DRDG)
framework that iteratively reweights the relative importance of samples to
further improve generalization. Concretely, a Sample Reweighting Module is
first proposed to identify samples with relatively large domain bias and
reduce their impact on the overall optimization. Afterwards, a Feature
Reweighting Module is introduced to focus on these samples and extract more
domain-irrelevant features via a self-distilling mechanism. Combined with the
domain discriminator, the iteration of the two modules promotes the extraction
of generalized features. Extensive experiments and visualizations demonstrate
the effectiveness and interpretability of our method compared with
state-of-the-art competitors.
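The framework above couples a Sample Reweighting Module, a Feature Reweighting Module with self-distillation, and a domain discriminator. The exact losses and weighting rules are defined in the paper itself; the PyTorch-style snippet below is only a minimal sketch of one training iteration under assumed interfaces, where SampleReweightingModule, FeatureReweightingModule, the toy encoder, and all loss terms are illustrative placeholders rather than the authors' released code.

```python
# Hedged sketch of one DRDG-style training iteration as described in the abstract.
# All module names, shapes, and loss terms are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DomainDiscriminator(nn.Module):
    """Predicts which source domain a feature came from (a proxy for domain bias)."""
    def __init__(self, feat_dim, num_domains):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(feat_dim, 128), nn.ReLU(),
                                 nn.Linear(128, num_domains))
    def forward(self, feat):
        return self.net(feat)

class SampleReweightingModule(nn.Module):
    """Down-weights samples the domain discriminator classifies confidently,
    i.e. samples with relatively large domain bias."""
    def forward(self, domain_logits):
        confidence = domain_logits.softmax(dim=1).max(dim=1).values
        weights = 1.0 - confidence                 # biased samples -> smaller weight
        return weights / (weights.mean() + 1e-8)   # keep the average weight near 1

class FeatureReweightingModule(nn.Module):
    """Gates feature channels to emphasise domain-irrelevant information."""
    def __init__(self, feat_dim):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(feat_dim, feat_dim), nn.Sigmoid())
    def forward(self, feat):
        return self.gate(feat) * feat

feat_dim, num_classes, num_domains = 256, 2, 3
encoder = nn.Sequential(nn.Linear(512, feat_dim), nn.ReLU())   # toy stand-in for a CNN
classifier = nn.Linear(feat_dim, num_classes)                  # live vs. spoof head
disc = DomainDiscriminator(feat_dim, num_domains)
srm, frm = SampleReweightingModule(), FeatureReweightingModule(feat_dim)
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()) +
                       list(disc.parameters()) + list(frm.parameters()), lr=1e-4)

# One iteration on dummy data: x are (flattened) face features, y live/spoof labels,
# d source-domain labels.
x = torch.randn(32, 512)
y = torch.randint(0, num_classes, (32,))
d = torch.randint(0, num_domains, (32,))

feat = encoder(x)
domain_logits = disc(feat.detach())
sample_w = srm(domain_logits).detach()             # step 1: sample reweighting

cls_loss = (sample_w * F.cross_entropy(classifier(feat), y, reduction="none")).mean()

# Step 2: feature reweighting with a self-distillation-style consistency term,
# so the reweighted features stay predictive while shedding domain-specific cues.
refined = frm(feat)
distill_loss = F.mse_loss(classifier(refined), classifier(feat).detach())
refined_cls_loss = F.cross_entropy(classifier(refined), y)

domain_loss = F.cross_entropy(domain_logits, d)    # trains the domain discriminator

loss = cls_loss + refined_cls_loss + distill_loss + domain_loss
opt.zero_grad()
loss.backward()
opt.step()
```

In the actual framework the two modules are applied iteratively and combined with the domain discriminator to push the encoder toward generalized features; the sketch only illustrates one plausible way per-sample and per-channel weights could enter the objective.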
Related papers
- How Does Distribution Matching Help Domain Generalization: An Information-theoretic Analysis [21.685468628033206]
We formulate domain generalization from a novel probabilistic perspective.
We provide key insights into the roles of gradient and representation matching in promoting generalization.
In light of these theoretical findings, we introduce IDM to simultaneously align the inter-domain gradients and representations.
arXiv Detail & Related papers (2024-06-14T06:28:17Z)
- Randomized Adversarial Style Perturbations for Domain Generalization [49.888364462991234]
We propose a novel domain generalization technique, referred to as Randomized Adversarial Style Perturbation (RASP).
The proposed algorithm perturbs the style of a feature in an adversarial direction towards a randomly selected class, training the model not to be misled by the unexpected styles observed in unseen target domains (a minimal sketch of this style-perturbation step is given after this list).
We evaluate the proposed algorithm via extensive experiments on various benchmarks and show that our approach improves domain generalization performance, especially on large-scale benchmarks.
arXiv Detail & Related papers (2023-04-04T17:07:06Z)
- Improving Generalization with Domain Convex Game [32.07275105040802]
Domain generalization (DG) aims to alleviate the poor generalization capability of deep neural networks by learning a model from multiple source domains.
A classical solution to DG is domain augmentation, the common belief being that diversifying the source domains is conducive to out-of-distribution generalization.
Our explorations reveal that the correlation between model generalization and the diversity of domains may not be strictly positive, which limits the effectiveness of domain augmentation.
arXiv Detail & Related papers (2023-03-23T14:27:49Z)
- Learning to Learn Domain-invariant Parameters for Domain Generalization [29.821634033299855]
Domain generalization (DG) aims to overcome this issue by capturing domain-invariant representations from source domains.
We propose two modules, Domain Decoupling and Combination (DDC) and Domain-invariance-guided Backpropagation (DIGB).
Our proposed method has achieved state-of-the-art performance with strong generalization capability.
arXiv Detail & Related papers (2022-11-04T07:19:34Z)
- Back-to-Bones: Rediscovering the Role of Backbones in Domain Generalization [1.6799377888527687]
Domain Generalization studies the capability of a deep learning model to generalize to out-of-training distributions.
Recent research has provided a reproducible benchmark for DG, pointing out the effectiveness of naive empirical risk minimization (ERM) over existing algorithms.
In this paper, we evaluate the backbones, providing a comprehensive analysis of their intrinsic generalization capabilities.
arXiv Detail & Related papers (2022-09-02T15:30:17Z)
- Causal Balancing for Domain Generalization [95.97046583437145]
We propose a balanced mini-batch sampling strategy to reduce the domain-specific spurious correlations in observed training distributions.
We provide an identifiability guarantee of the source of spuriousness and show that our proposed approach provably samples from a balanced, spurious-free distribution.
arXiv Detail & Related papers (2022-06-10T17:59:11Z)
- Learning Domain Invariant Representations for Generalizable Person Re-Identification [71.35292121563491]
Generalizable person Re-Identification (ReID) has attracted growing attention in the computer vision community.
We introduce causality into person ReID and propose a novel generalizable framework, named Domain Invariant Representations for generalizable person Re-Identification (DIR-ReID).
arXiv Detail & Related papers (2021-03-29T18:59:48Z)
- Learning Invariant Representations and Risks for Semi-supervised Domain Adaptation [109.73983088432364]
We propose the first method that aims to simultaneously learn invariant representations and risks under the setting of semi-supervised domain adaptation (Semi-DA).
We introduce the LIRR algorithm for jointly Learning Invariant Representations and Risks.
arXiv Detail & Related papers (2020-10-09T15:42:35Z)
- Single-Side Domain Generalization for Face Anti-Spoofing [91.79161815884126]
We propose an end-to-end single-side domain generalization framework to improve the generalization ability of face anti-spoofing.
Our proposed approach is effective and outperforms the state-of-the-art methods on four public databases.
arXiv Detail & Related papers (2020-04-29T09:32:54Z)
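The Randomized Adversarial Style Perturbations entry above describes perturbing the channel-wise style statistics of a feature map in an adversarial direction toward a randomly selected class. The snippet below is a minimal, hedged sketch of that idea; rasp_perturb, the single signed-gradient step, and eps are assumptions made for illustration, not the authors' implementation.

```python
# Hedged sketch of a randomized adversarial style perturbation (RASP-like) step.
# The exact formulation is in the cited paper; this is only an illustration.
import torch
import torch.nn.functional as F

def rasp_perturb(feat, head, num_classes, eps=0.1):
    """feat: (B, C, H, W) feature map; head: maps pooled C-dim features to logits."""
    mu = feat.mean(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    sigma = feat.std(dim=(2, 3), keepdim=True).detach().requires_grad_(True)
    normalized = (feat - feat.mean(dim=(2, 3), keepdim=True)) / \
                 (feat.std(dim=(2, 3), keepdim=True) + 1e-6)

    # Re-stylise the (detached) content with style statistics that require grad.
    stylized = normalized.detach() * sigma + mu
    logits = head(stylized.mean(dim=(2, 3)))

    # Pick a random target class and move the style toward it adversarially.
    target = torch.randint(0, num_classes, (feat.size(0),), device=feat.device)
    loss = F.cross_entropy(logits, target)
    grad_mu, grad_sigma = torch.autograd.grad(loss, [mu, sigma])
    new_mu = (mu - eps * grad_mu.sign()).detach()
    new_sigma = (sigma - eps * grad_sigma.sign()).clamp_min(1e-6).detach()

    # Gradients still flow through the content, so the main model can be
    # trained on the perturbed features afterwards.
    return normalized * new_sigma + new_mu

# Toy usage: a linear head over 64 channels and a random feature map.
head = torch.nn.Linear(64, 10)
features = torch.randn(8, 64, 7, 7, requires_grad=True)
perturbed = rasp_perturb(features, head, num_classes=10)
```

Training the classifier on such perturbed features is what the summary above calls learning not to be misled by unexpected styles in unseen target domains.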
This list is automatically generated from the titles and abstracts of the papers on this site.