Face-NMS: A Core-set Selection Approach for Efficient Face Recognition
- URL: http://arxiv.org/abs/2109.04698v1
- Date: Fri, 10 Sep 2021 07:07:04 GMT
- Title: Face-NMS: A Core-set Selection Approach for Efficient Face Recognition
- Authors: Yunze Chen, Junjie Huang, Jiagang Zhu, Zheng Zhu, Tian Yang, Guan
Huang, and Dalong Du
- Abstract summary: Face recognition in the wild has achieved remarkable success.
One key engine is the increasing size of training data.
A massive number of faces raises constraints on training time, computing resources, and memory cost.
- Score: 14.863570260332747
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, face recognition in the wild has achieved remarkable success and
one key engine is the increasing size of training data. For example, the
largest face dataset, WebFace42M, contains about 2 million identities and 42
million faces. However, a massive number of faces raises constraints on
training time, computing resources, and memory cost. The current research on
this problem mainly focuses on designing an efficient Fully-connected layer
(FC) to reduce GPU memory consumption caused by a large number of identities.
In this work, we relax these constraints by resolving the redundancy in
up-to-date face datasets caused by their greedy collection process (i.e.,
the core-set selection perspective). As the first attempt to apply this
perspective to face recognition, we find that existing methods are limited in
both performance and efficiency. For superior cost-efficiency, we contribute a
novel filtering strategy dubbed Face-NMS. Face-NMS works on feature space and
simultaneously considers the local and global sparsity in generating core sets.
In practice, Face-NMS is analogous to Non-Maximum Suppression (NMS) in the
object detection community. It ranks faces by their potential contribution
to overall sparsity and, for local sparsity, filters out the superfluous face
in each high-similarity pair. On the efficiency side, Face-NMS accelerates
the whole pipeline by training the proxy model on a smaller but sufficient
proxy dataset. As a result, with Face-NMS, we
successfully scale down the WebFace42M dataset to 60% while retaining its
performance on the main benchmarks, offering a 40% resource saving and a
1.64-times acceleration. The code is publicly available at
https://github.com/HuangJunJie2017/Face-NMS.
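The NMS-style filtering described in the abstract can be sketched as follows. This is a minimal illustration reconstructed from the abstract alone, not the authors' implementation: the ranking score (negative mean similarity as a proxy for contribution to global sparsity), the similarity threshold, and the `keep_ratio` budget are all assumptions.

```python
import numpy as np

def face_nms(features, keep_ratio=0.6, sim_threshold=0.8):
    """NMS-style core-set selection sketch: rank faces by an assumed
    global-sparsity score, then greedily suppress near-duplicates
    (high cosine similarity), analogous to box suppression in
    detection NMS."""
    # L2-normalize so dot products are cosine similarities.
    feats = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = feats @ feats.T
    # Assumed ranking heuristic: faces with low mean similarity to the
    # rest of the set contribute more to overall (global) sparsity.
    scores = -sim.mean(axis=1)
    order = np.argsort(-scores)  # highest-scoring faces first
    keep = []
    suppressed = np.zeros(len(feats), dtype=bool)
    budget = int(keep_ratio * len(feats))
    for i in order:
        if suppressed[i]:
            continue
        keep.append(i)
        if len(keep) >= budget:
            break
        # Local sparsity: drop the lower-ranked member of every pair
        # whose similarity with the kept face exceeds the threshold.
        suppressed |= sim[i] > sim_threshold
    return keep
```

On the cost side, keeping 60% of the data gives at best a 1/0.6 ≈ 1.67× per-epoch speedup, consistent with the 1.64× acceleration reported above.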
Related papers
- Toward High Quality Facial Representation Learning [58.873356953627614]
We propose a self-supervised pre-training framework called Mask Contrastive Face (MCF).
We use the feature map of a pre-trained visual backbone as a supervision item and a partially pre-trained decoder for mask image modeling.
Our model achieves 0.932 NME_diag for AFLW-19 face alignment and a 93.96 F1 score for LaPa face parsing.
arXiv Detail & Related papers (2023-09-07T09:11:49Z) - Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512)
State-of-the-art methods are benchmarked on them under five settings including blur, noise, low resolution, JPEG compression artifacts, and the combination of them (full degradation)
arXiv Detail & Related papers (2022-06-08T06:34:24Z) - SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition [74.13631562652836]
We propose a novel loss function, named sigmoid-constrained hypersphere loss (SFace)
SFace imposes intra-class and inter-class constraints on a hypersphere manifold, which are controlled by two sigmoid gradient re-scale functions respectively.
It strikes a better balance between decreasing intra-class distances and preventing overfitting to label noise, yielding more robust deep face recognition models.
arXiv Detail & Related papers (2022-05-24T11:54:15Z) - FaceMAE: Privacy-Preserving Face Recognition via Masked Autoencoders [81.21440457805932]
We propose a novel framework FaceMAE, where the face privacy and recognition performance are considered simultaneously.
Randomly masked face images are used to train the reconstruction module in FaceMAE.
We also perform sufficient privacy-preserving face recognition experiments on several public face datasets.
arXiv Detail & Related papers (2022-05-23T07:19:42Z) - WebFace260M: A Benchmark for Million-Scale Deep Face Recognition [89.39080252029386]
We contribute a new million-scale recognition benchmark, containing uncurated 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M)
A distributed framework is developed to train face recognition models efficiently without sacrificing performance.
The proposed benchmark shows enormous potential on standard, masked and unbiased face recognition scenarios.
arXiv Detail & Related papers (2022-04-21T14:56:53Z) - FaceOcc: A Diverse, High-quality Face Occlusion Dataset for Human Face
Extraction [3.8502825594372703]
Occlusions often occur in face images in the wild, hindering face-related tasks such as landmark detection, 3D reconstruction, and face recognition.
This paper proposes a novel face segmentation dataset with manually labeled face occlusions sourced from CelebA-HQ and the internet.
We trained a straightforward face segmentation model but obtained SOTA performance, convincingly demonstrating the effectiveness of the proposed dataset.
arXiv Detail & Related papers (2022-01-20T19:44:18Z) - Efficient Masked Face Recognition Method during the COVID-19 Pandemic [4.13365552362244]
The coronavirus disease (COVID-19) is an unparalleled crisis leading to a huge number of casualties and security problems.
In order to reduce the spread of coronavirus, people often wear masks to protect themselves.
This makes face recognition a very difficult task since certain parts of the face are hidden.
arXiv Detail & Related papers (2021-05-07T01:32:37Z) - WebFace260M: A Benchmark Unveiling the Power of Million-Scale Deep Face
Recognition [79.65728162193584]
We contribute a new million-scale face benchmark containing noisy 4M identities/260M faces (WebFace260M) and cleaned 2M identities/42M faces (WebFace42M)
We reduce the failure rate on the challenging IJB-C set by a relative 40%, and rank 3rd among 430 entries on NIST-FRVT.
Even 10% of the data (WebFace4M) shows superior performance compared with public training sets.
arXiv Detail & Related papers (2021-03-06T11:12:43Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of its content (including all information) and is not responsible for any consequences.