WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face
Recognition
- URL: http://arxiv.org/abs/2103.04098v1
- Date: Sat, 6 Mar 2021 11:12:43 GMT
- Title: WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face
Recognition
- Authors: Zheng Zhu, Guan Huang, Jiankang Deng, Yun Ye, Junjie Huang, Xinze
Chen, Jiagang Zhu, Tian Yang, Jiwen Lu, Dalong Du, Jie Zhou
- Abstract summary: We contribute a new million-scale face benchmark containing noisy 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M)
We reduce the relative failure rate by 40% on the challenging IJB-C set and rank 3rd among 430 entries on NIST-FRVT.
Even 10% of the data (WebFace4M) shows superior performance compared with public training sets.
- Score: 79.65728162193584
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: In this paper, we contribute a new million-scale face benchmark containing
noisy 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M
faces (WebFace42M) training data, as well as an elaborately designed
time-constrained evaluation protocol. First, we collect a 4M name list and
download 260M faces from the Internet. Then, a Cleaning Automatically utilizing
Self-Training (CAST) pipeline is devised to purify the tremendous WebFace260M;
the pipeline is efficient and scalable. To the best of our knowledge, the cleaned
WebFace42M is the largest public face recognition training set and we expect to
close the data gap between academia and industry. Referring to practical
scenarios, Face Recognition Under Inference Time conStraint (FRUITS) protocol
and a test set are constructed to comprehensively evaluate face matchers.
Equipped with this benchmark, we delve into million-scale face recognition
problems. A distributed framework is developed to train face recognition models
efficiently without compromising performance. Empowered by WebFace42M, we
reduce the relative failure rate by 40% on the challenging IJB-C set and rank
3rd among 430 entries on NIST-FRVT. Even 10% of the data (WebFace4M) shows
superior performance compared with public training sets. Furthermore, comprehensive
baselines are established on our rich-attribute test set under
the FRUITS-100ms/500ms/1000ms protocols, covering the MobileNet, EfficientNet,
AttentionNet, ResNet, SENet, ResNeXt and RegNet families. The benchmark website is
https://www.face-benchmark.org.
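
As a rough illustration of the FRUITS idea (evaluation under a 100ms/500ms/1000ms inference-time budget), the sketch below times a candidate embedding backbone and reports which budget it would fit. The backbone choice, 112x112 input, CPU timing, and single-image batches are assumptions made for illustration only; the official protocol times the complete matcher (detection, alignment, feature embedding and comparison) on specified hardware.

    import time
    import torch
    import torchvision.models as models

    # Hypothetical candidate backbone with a 512-d embedding head; the official
    # FRUITS protocol covers the whole matcher, not just this step.
    backbone = models.resnet50(weights=None)
    backbone.fc = torch.nn.Linear(backbone.fc.in_features, 512)
    backbone.eval()

    def mean_latency_ms(model, runs=100, size=112):
        """Average single-image CPU latency in milliseconds over `runs` passes."""
        x = torch.randn(1, 3, size, size)
        with torch.no_grad():
            for _ in range(10):                      # warm-up passes
                model(x)
            start = time.perf_counter()
            for _ in range(runs):
                model(x)
        return (time.perf_counter() - start) / runs * 1000.0

    latency = mean_latency_ms(backbone)
    for budget in (100, 500, 1000):                  # FRUITS-100ms/500ms/1000ms tiers
        verdict = "fits" if latency <= budget else "exceeds"
        print(f"mean latency {latency:.1f} ms {verdict} the {budget} ms budget")

A real FRUITS submission would also include face detection and alignment inside the measured time, so the budget constrains the choice of every pipeline component, not just the backbone.
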
Related papers
- Toward High Quality Facial Representation Learning [58.873356953627614]
We propose a self-supervised pre-training framework called Mask Contrastive Face (MCF).
We use the feature map of a pre-trained visual backbone as a supervision item and a partially pre-trained decoder for masked image modeling.
Our model achieves 0.932 NME_diag on AFLW-19 face alignment and a 93.96 F1 score on LaPa face parsing.
arXiv Detail & Related papers (2023-09-07T09:11:49Z) - SqueezerFaceNet: Reducing a Small Face Recognition CNN Even More Via
Filter Pruning [55.84746218227712]
We develop SqueezerFaceNet, a light face recognition network with fewer than 1M parameters.
We show that it can be further reduced (by up to 40%) through filter pruning without an appreciable loss in performance (a minimal pruning sketch appears after this list).
arXiv Detail & Related papers (2023-07-20T08:38:50Z) - WebFace260M: A Benchmark for Million-Scale Deep Face Recognition [89.39080252029386]
We contribute a new million-scale recognition benchmark, containing uncurated 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M)
A distributed framework is developed to train face recognition models efficiently without compromising performance.
The proposed benchmark shows enormous potential on standard, masked and unbiased face recognition scenarios.
arXiv Detail & Related papers (2022-04-21T14:56:53Z) - Face-NMS: A Core-set Selection Approach for Efficient Face Recognition [14.863570260332747]
Face recognition in the wild has achieved remarkable success.
One key engine is the increasing size of training data.
However, the massive number of faces raises constraints on training time, computing resources, and memory cost.
arXiv Detail & Related papers (2021-09-10T07:07:04Z) - Masked Face Recognition Challenge: The InsightFace Track Report [79.77020394722788]
During the COVID-19 coronavirus epidemic, almost everyone wears a facial mask, which poses a huge challenge to deep face recognition.
In this workshop, we focus on benchmarking deep face recognition methods in the presence of facial masks.
arXiv Detail & Related papers (2021-08-18T15:14:44Z) - Masked Face Recognition Challenge: The WebFace260M Track Report [81.57455766506197]
This report covers the Face Bio-metrics under COVID Workshop and Masked Face Recognition Challenge at ICCV 2021.
The WebFace260M Track aims to push the frontiers of practical masked face recognition (MFR).
In the first phase of the WebFace260M Track, 69 teams (833 solutions in total) participate in the challenge.
A second phase of the challenge runs until October 1, 2021, with an ongoing leaderboard.
arXiv Detail & Related papers (2021-08-16T15:51:51Z) - MixFaceNets: Extremely Efficient Face Recognition Networks [6.704751710867745]
We present a set of extremely efficient and high-throughput models for accurate face verification, MixFaceNets.
Evaluations on the Labeled Faces in the Wild (LFW), AgeDB, MegaFace, and IARPA Janus Benchmark (IJB-B and IJB-C) datasets show the effectiveness of our MixFaceNets.
With computational complexity between 500M and 1G FLOPs, our MixFaceNets achieved results comparable to the top-ranked models.
arXiv Detail & Related papers (2021-07-27T19:10:27Z)
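
As referenced above, SqueezerFaceNet shrinks an already-small network through filter pruning. The sketch below shows the general idea only, using L1-norm filter ranking on a generic convolution layer; the criterion, layer, and keep ratio are illustrative assumptions and do not reproduce that paper's actual pruning method.

    import torch
    import torch.nn as nn

    def prune_conv_filters(conv: nn.Conv2d, keep_ratio: float = 0.6) -> nn.Conv2d:
        """Keep the output filters with the largest L1 norms and drop the rest.

        Returns a narrower Conv2d; the next layer's input channels would need
        to be reduced to match (omitted here for brevity).
        """
        n_keep = max(1, int(conv.out_channels * keep_ratio))
        scores = conv.weight.detach().abs().sum(dim=(1, 2, 3))   # L1 norm per filter
        keep = torch.argsort(scores, descending=True)[:n_keep]

        pruned = nn.Conv2d(conv.in_channels, n_keep, conv.kernel_size,
                           stride=conv.stride, padding=conv.padding,
                           bias=conv.bias is not None)
        with torch.no_grad():
            pruned.weight.copy_(conv.weight[keep])
            if conv.bias is not None:
                pruned.bias.copy_(conv.bias[keep])
        return pruned

    # Example: keep roughly 60% of a 64-filter layer (64 -> 38 filters).
    layer = nn.Conv2d(3, 64, kernel_size=3, padding=1)
    print(prune_conv_filters(layer).out_channels)

After pruning, the network is typically fine-tuned briefly to recover accuracy, which is how reductions of this size can come with little loss in performance.
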