Self-Domain Adaptation for Face Anti-Spoofing
- URL: http://arxiv.org/abs/2102.12129v1
- Date: Wed, 24 Feb 2021 08:46:39 GMT
- Title: Self-Domain Adaptation for Face Anti-Spoofing
- Authors: Jingjing Wang, Jingyi Zhang, Ying Bian, Youyi Cai, Chunmao Wang,
Shiliang Pu
- Abstract summary: We propose a self-domain adaptation framework to leverage the unlabeled test domain data at inference.
A meta-learning based adaptor learning algorithm is proposed that uses data from multiple source domains during training.
- Score: 31.441928816043536
- License: http://creativecommons.org/licenses/by-nc-nd/4.0/
- Abstract: Although current face anti-spoofing methods achieve promising results under
intra-dataset testing, they suffer from poor generalization to unseen attacks.
Most existing works adopt domain adaptation (DA) or domain generalization (DG)
techniques to address this problem. However, the target domain is often unknown
during training, which limits the applicability of DA methods. DG methods can
overcome this by learning domain-invariant features without seeing any target
data, but they fail to exploit the information in the target data. In this
paper, we propose a self-domain adaptation framework that leverages the unlabeled
test domain data at inference. Specifically, a domain adaptor is designed to
adapt the model to the test domain. To learn a better adaptor, a
meta-learning based adaptor learning algorithm is proposed that uses the data of
multiple source domains during training. At test time, the adaptor is
updated using only the test domain data according to the proposed unsupervised
adaptor loss to further improve the performance. Extensive experiments on four
public datasets validate the effectiveness of the proposed method.
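The abstract does not spell out the adaptor architecture, the meta-learning procedure, or the form of the unsupervised adaptor loss, so the following is only a minimal sketch of the general idea: a small residual adaptor inserted into a frozen, source-trained anti-spoofing network and updated at inference on unlabeled test-domain batches. The 1x1-convolution adaptor, the entropy-minimization loss, and all names below are illustrative assumptions rather than the authors' components, and the meta-learning initialization of the adaptor is omitted.

```python
# Minimal sketch (not the authors' implementation) of test-time self-domain
# adaptation: only the adaptor is updated on unlabeled test-domain data,
# while the source-trained backbone and classifier stay frozen.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Adaptor(nn.Module):
    """Residual 1x1-conv adaptor, initialized as an identity mapping (assumed design)."""
    def __init__(self, channels: int):
        super().__init__()
        self.proj = nn.Conv2d(channels, channels, kernel_size=1)
        nn.init.zeros_(self.proj.weight)  # zero residual branch -> identity at start
        nn.init.zeros_(self.proj.bias)

    def forward(self, feat):
        return feat + self.proj(feat)

def entropy_loss(logits):
    """Stand-in for the unsupervised adaptor loss: mean prediction entropy."""
    p = F.softmax(logits, dim=1)
    return -(p * torch.log(p + 1e-8)).sum(dim=1).mean()

@torch.enable_grad()
def adapt_at_test_time(backbone, adaptor, classifier, test_loader,
                       steps: int = 1, lr: float = 1e-4):
    """Update the adaptor on unlabeled test-domain batches; everything else is frozen."""
    backbone.eval(); classifier.eval()
    for p in list(backbone.parameters()) + list(classifier.parameters()):
        p.requires_grad_(False)
    opt = torch.optim.Adam(adaptor.parameters(), lr=lr)
    for _ in range(steps):
        for images in test_loader:  # unlabeled test-domain images
            logits = classifier(adaptor(backbone(images)))
            loss = entropy_loss(logits)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return adaptor
```

In the paper's setting the adaptor would first be meta-learned across multiple source domains so that this few-step update transfers well; the entropy loss above merely stands in for whatever unsupervised adaptor loss the authors actually define.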
Related papers
- Better Practices for Domain Adaptation [62.70267990659201]
Domain adaptation (DA) aims to provide frameworks for adapting models to deployment data without using labels.
The lack of a clear validation protocol for DA has led to bad practices in the literature.
We highlight challenges across all three branches of domain adaptation methodology.
arXiv Detail & Related papers (2023-09-07T17:44:18Z) - Deep Unsupervised Domain Adaptation: A Review of Recent Advances and
Perspectives [16.68091981866261]
Unsupervised domain adaptation (UDA) has been proposed to counter the performance drop on data from a target domain.
UDA has yielded promising results in natural image processing, video analysis, natural language processing, time-series data analysis, medical image analysis, etc.
arXiv Detail & Related papers (2022-08-15T20:05:07Z) - Source-Free Domain Adaptation via Distribution Estimation [106.48277721860036]
Domain Adaptation aims to transfer the knowledge learned from a labeled source domain to an unlabeled target domain whose data distribution is different.
Recently, Source-Free Domain Adaptation (SFDA) has drawn much attention; it tries to tackle the domain adaptation problem without using source data.
In this work, we propose a novel framework called SFDA-DE to address SFDA task via source Distribution Estimation.
arXiv Detail & Related papers (2022-04-24T12:22:19Z) - Connect, Not Collapse: Explaining Contrastive Learning for Unsupervised
Domain Adaptation [88.5448806952394]
We consider unsupervised domain adaptation (UDA), where labeled data from a source domain and unlabeled data from a target domain are used to learn a classifier for the target domain.
We show that contrastive pre-training, which learns features on unlabeled source and target data and then fine-tunes on labeled source data, is competitive with strong UDA methods (a minimal sketch of this two-stage recipe appears after the list).
arXiv Detail & Related papers (2022-04-01T16:56:26Z) - Instance Relation Graph Guided Source-Free Domain Adaptive Object
Detection [79.89082006155135]
Unsupervised Domain Adaptation (UDA) is an effective approach to tackle the issue of domain shift.
UDA methods try to align the source and target representations to improve the generalization on the target domain.
The Source-Free Domain Adaptation (SFDA) setting aims to alleviate these concerns by adapting a source-trained model to the target domain without requiring access to the source data.
arXiv Detail & Related papers (2022-03-29T17:50:43Z) - Stagewise Unsupervised Domain Adaptation with Adversarial Self-Training
for Road Segmentation of Remote Sensing Images [93.50240389540252]
Road segmentation from remote sensing images is a challenging task with a wide range of potential applications.
We propose a novel stagewise domain adaptation model called RoadDA to address the domain shift (DS) issue in this field.
Experiment results on two benchmarks demonstrate that RoadDA can efficiently reduce the domain gap and outperforms state-of-the-art methods.
arXiv Detail & Related papers (2021-08-28T09:29:14Z) - ConDA: Continual Unsupervised Domain Adaptation [0.0]
Domain Adaptation (DA) techniques are important for overcoming the domain shift between the source domain used for training and the target domain where testing takes place.
Current DA methods assume that the entire target domain is available during adaptation, which may not hold in practice.
This paper considers a more realistic scenario, where target data become available in smaller batches and adaptation on the entire target domain is not feasible.
arXiv Detail & Related papers (2021-03-19T23:20:41Z) - Adversarial Unsupervised Domain Adaptation Guided with Deep Clustering
for Face Presentation Attack Detection [0.8701566919381223]
Face Presentation Attack Detection (PAD) has drawn increasing attention as a way to secure face recognition systems.
We propose an end-to-end learning framework based on Domain Adaptation (DA) to improve PAD generalization capability.
arXiv Detail & Related papers (2021-02-13T05:34:40Z) - Test-time Unsupervised Domain Adaptation [3.4188171733930584]
Convolutional neural networks rarely generalise to different scanners or acquisition protocols (the target domain).
We show that models adapted to a specific target subject from the target domain outperform a domain adaptation method which has seen more data of the target domain but not this specific target subject.
arXiv Detail & Related papers (2020-10-05T11:30:36Z) - Open-Set Hypothesis Transfer with Semantic Consistency [99.83813484934177]
We introduce a method that focuses on the semantic consistency of target data under transformations.
Our model first discovers confident predictions and performs classification with pseudo-labels (a minimal sketch of this step follows the list).
As a result, unlabeled data can be classified into discriminative classes that coincide with either the source classes or unknown classes.
arXiv Detail & Related papers (2020-10-01T10:44:31Z)