NAS-FAS: Static-Dynamic Central Difference Network Search for Face
Anti-Spoofing
- URL: http://arxiv.org/abs/2011.02062v1
- Date: Tue, 3 Nov 2020 23:34:40 GMT
- Title: NAS-FAS: Static-Dynamic Central Difference Network Search for Face
Anti-Spoofing
- Authors: Zitong Yu, Jun Wan, Yunxiao Qin, Xiaobai Li, Stan Z. Li, Guoying Zhao
- Abstract summary: Face anti-spoofing (FAS) plays a vital role in securing face recognition systems.
Existing methods rely on expert-designed networks, which may lead to a sub-optimal solution for the FAS task.
Here we propose the first FAS method based on neural architecture search (NAS), called NAS-FAS, to discover well-suited task-aware networks.
- Score: 94.89405915373857
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Face anti-spoofing (FAS) plays a vital role in securing face
recognition systems. Existing methods heavily rely on expert-designed networks,
which may lead to a sub-optimal solution for the FAS task. Here we propose the
first FAS method based on neural architecture search (NAS), called NAS-FAS, to
discover well-suited task-aware networks. Unlike previous NAS works, which
mainly focus on developing efficient search strategies for generic object
classification, we pay more attention to the search spaces for the FAS task.
The challenges of utilizing NAS for FAS are twofold: networks searched on 1) a
specific acquisition condition might perform poorly under unseen conditions,
and 2) particular spoofing attacks might generalize badly to unseen attacks. To
overcome these two issues, we develop a novel search space consisting of
central difference convolution and pooling operators. Moreover, an efficient
static-dynamic representation is exploited to fully mine the FAS-aware
spatio-temporal discrepancy. In addition, we propose Domain/Type-aware
Meta-NAS, which leverages cross-domain/type knowledge for robust searching.
Finally, to evaluate NAS transferability across datasets and to unknown attack
types, we release a large-scale 3D mask dataset, namely CASIA-SURF 3DMask,
supporting the new 'cross-dataset cross-type' testing protocol. Experiments
demonstrate that the proposed NAS-FAS achieves state-of-the-art performance on
nine FAS benchmark datasets with four testing protocols.
Related papers
- How Much Is Hidden in the NAS Benchmarks? Few-Shot Adaptation of a NAS Predictor [22.87207410692821]
We borrow from the rich field of meta-learning for few-shot adaptation and study the applicability of those methods to NAS.
Our meta-learning approach not only shows superior (or matching) performance in the cross-validation experiments but also successful extrapolation to a new search space and tasks.
arXiv Detail & Related papers (2023-11-30T10:51:46Z)
- $\beta$-DARTS++: Bi-level Regularization for Proxy-robust Differentiable Architecture Search [96.99525100285084]
A regularization method, Beta-Decay, is proposed to regularize the DARTS-based NAS searching process (i.e., $\beta$-DARTS).
In-depth theoretical analyses on how it works and why it works are provided.
arXiv Detail & Related papers (2023-01-16T12:30:32Z)
- $\beta$-DARTS: Beta-Decay Regularization for Differentiable Architecture Search [85.84110365657455]
We propose a simple-but-efficient regularization method, termed as Beta-Decay, to regularize the DARTS-based NAS searching process.
Experimental results on NAS-Bench-201 show that our proposed method can help to stabilize the searching process and makes the searched network more transferable across different datasets.
arXiv Detail & Related papers (2022-03-03T11:47:14Z)
- NAS-Bench-360: Benchmarking Diverse Tasks for Neural Architecture Search [18.9676056830197]
Most existing neural architecture search (NAS) benchmarks and algorithms prioritize performance on well-studied tasks.
We present NAS-Bench-360, a benchmark suite for evaluating state-of-the-art NAS methods for convolutional neural networks (CNNs).
arXiv Detail & Related papers (2021-10-12T01:13:18Z)
- Understanding and Accelerating Neural Architecture Search with Training-Free and Theory-Grounded Metrics [117.4281417428145]
This work targets designing a principled and unified training-free framework for Neural Architecture Search (NAS).
NAS has been explosively studied to automate the discovery of top-performer neural networks, but suffers from heavy resource consumption and often incurs search bias due to truncated training or approximations.
We present a unified framework to understand and accelerate NAS, by disentangling "TEG" characteristics of searched networks.
arXiv Detail & Related papers (2021-08-26T17:52:07Z)
- TransNAS-Bench-101: Improving Transferability and Generalizability of Cross-Task Neural Architecture Search [98.22779489340869]
We propose TransNAS-Bench-101, a benchmark dataset containing network performance across seven vision tasks.
We explore two fundamentally different types of search space: cell-level search space and macro-level search space.
With 7,352 backbones evaluated on seven tasks, 51,464 trained models with detailed training information are provided.
arXiv Detail & Related papers (2021-05-25T12:15:21Z)
- Local Search is a Remarkably Strong Baseline for Neural Architecture Search [0.0]
We consider, for the first time, a simple Local Search (LS) algorithm for Neural Architecture Search (NAS).
We release two benchmark datasets, named MacroNAS-C10 and MacroNAS-C100, containing 200K saved network evaluations for two established image classification tasks.
arXiv Detail & Related papers (2020-04-20T00:08:34Z)
- Searching Central Difference Convolutional Networks for Face Anti-Spoofing [68.77468465774267]
Face anti-spoofing (FAS) plays a vital role in face recognition systems.
Most state-of-the-art FAS methods rely on stacked convolutions and expert-designed networks.
Here we propose a novel frame-level FAS method based on Central Difference Convolution (CDC).
arXiv Detail & Related papers (2020-03-09T12:48:37Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.