SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition
- URL: http://arxiv.org/abs/2205.12010v1
- Date: Tue, 24 May 2022 11:54:15 GMT
- Title: SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition
- Authors: Yaoyao Zhong, Weihong Deng, Jiani Hu, Dongyue Zhao, Xian Li, Dongchao
Wen
- Abstract summary: We propose a novel loss function, named sigmoid-constrained hypersphere loss (SFace).
SFace imposes intra-class and inter-class constraints on a hypersphere manifold, controlled by two respective sigmoid gradient re-scale functions.
It strikes a better balance between decreasing intra-class distances and preventing overfitting to label noise, yielding more robust deep face recognition models.
- Score: 74.13631562652836
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep face recognition has achieved great success due to large-scale training
databases and rapidly developing loss functions. Existing algorithms strive to
realize an ideal objective: minimizing the intra-class distance and maximizing
the inter-class distance. However, they may neglect that there are also
low-quality training images which should not be optimized in this strict way.
Considering the imperfection of training databases, we propose that intra-class
and inter-class objectives can be optimized in a moderate way to mitigate the
overfitting problem, and we further propose a novel loss function, named
sigmoid-constrained hypersphere loss (SFace). Specifically, SFace imposes
intra-class and inter-class constraints on a hypersphere manifold, which are
controlled by two sigmoid gradient re-scale functions respectively. The sigmoid
curves precisely re-scale the intra-class and inter-class gradients so that
training samples are optimized only to some degree. Therefore, SFace strikes a
better balance between decreasing intra-class distances for clean examples and
preventing overfitting to label noise, yielding more robust deep face
recognition models. Extensive experiments with models trained on
CASIA-WebFace, VGGFace2, and MS-Celeb-1M, and evaluated on several face
recognition benchmarks such as LFW, MegaFace, and IJB-C, demonstrate the
superiority of SFace.
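The sigmoid gradient re-scaling idea described in the abstract can be illustrated with a small sketch. The curve shapes below (steepness `k`, angle thresholds `a` and `b`, scale `s`) are illustrative placeholders, not the paper's exact hyperparameters:

```python
import numpy as np

def intra_rescale(theta, k=80.0, a=0.9, s=64.0):
    # Illustrative intra-class sigmoid re-scale curve: the gradient
    # magnitude grows with the angle to the target class center and
    # fades once the sample is already close (small theta), so clean,
    # well-aligned samples stop being pulled toward the center.
    return s / (1.0 + np.exp(-k * (theta - a)))

def inter_rescale(theta, k=80.0, b=1.2, s=64.0):
    # Illustrative inter-class sigmoid re-scale curve: the gradient is
    # large when a sample sits close to a wrong class center and
    # vanishes once it is far enough away, so hard (possibly
    # mislabeled) samples are only pushed away "to some degree".
    return s / (1.0 + np.exp(k * (theta - b)))

theta = np.linspace(0.0, np.pi / 2, 50)  # angles on the hypersphere
intra = intra_rescale(theta)  # monotonically increasing in theta
inter = inter_rescale(theta)  # monotonically decreasing in theta
```

Because these curves re-scale the gradients rather than the loss value itself, a full implementation would apply them as detached scale factors on the intra-class and inter-class cosine terms during backpropagation.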
Related papers
- UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face Recognition [35.66000285310775]
We propose a unified threshold integrated sample-to-sample based loss (USS loss).
USS loss features an explicit unified threshold for distinguishing positive from negative pairs.
We also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship.
arXiv Detail & Related papers (2023-11-04T23:00:40Z)
- SubFace: Learning with Softmax Approximation for Face Recognition [3.262192371833866]
SubFace is a softmax approximation method that employs subspace features to improve face recognition performance.
Comprehensive experiments on benchmark datasets demonstrate that our method significantly improves the performance of the vanilla CNN baseline.
arXiv Detail & Related papers (2022-08-24T12:31:08Z)
- Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets, called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings: blur, noise, low resolution, JPEG compression artifacts, and their combination (full degradation).
arXiv Detail & Related papers (2022-06-08T06:34:24Z)
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
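The quality-adaptive margin idea above can be sketched as follows. Using the feature norm as an image-quality proxy follows the AdaFace abstract, but the batch-normalization details and the constants `m` and `h` here are illustrative assumptions:

```python
import numpy as np

def quality_adaptive_margins(feat_norms, m=0.4, h=0.333, eps=1e-3):
    # Use the feature norm as a quality proxy: normalize norms within
    # the batch, divide by a concentration factor h, and clip to
    # [-1, 1]. High-quality images then receive a stronger angular
    # margin, while low-quality images are de-emphasized.
    norm_hat = (feat_norms - feat_norms.mean()) / (feat_norms.std() + eps)
    norm_hat = np.clip(norm_hat / h, -1.0, 1.0)
    g_angle = -m * norm_hat       # angular (multiplicative) margin term
    g_add = m * norm_hat + m      # additive margin term, in [0, 2m]
    return g_angle, g_add

norms = np.array([5.0, 15.0, 25.0, 35.0])  # low -> high quality
g_angle, g_add = quality_adaptive_margins(norms)
```

In this sketch, low-norm (low-quality) samples get the smallest additive margin, so the loss does not force hard, degraded images into the same tight angular region as clean ones.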
arXiv Detail & Related papers (2022-04-03T01:23:41Z)
- KappaFace: Adaptive Additive Angular Margin Loss for Deep Face Recognition [22.553018305072925]
We introduce a novel adaptive strategy, called KappaFace, to modulate the relative importance of classes based on their difficulty and imbalance.
Experiments conducted on popular facial benchmarks demonstrate that our proposed method achieves superior performance to the state-of-the-art.
arXiv Detail & Related papers (2022-01-19T03:05:24Z)
- SphereFace Revived: Unifying Hyperspherical Face Recognition [57.07058009281208]
We introduce a unified framework to understand large angular margin in hyperspherical face recognition.
Under this framework, we propose an improved variant with substantially better training stability -- SphereFace-R.
We show that SphereFace-R is consistently better than or competitive with state-of-the-art methods.
arXiv Detail & Related papers (2021-09-12T17:07:54Z)
- Multi-Agent Semi-Siamese Training for Long-tail and Shallow Face Learning [54.13876727413492]
In many real-world face recognition scenarios, the training dataset is shallow, meaning only two face images are available per ID.
With the non-uniform increase of samples, this issue generalizes to a broader case known as long-tail face learning.
Based on Semi-Siamese Training (SST), we introduce an advanced solution named Multi-Agent Semi-Siamese Training (MASST).
MASST includes a probe network and multiple gallery agents; the former encodes the probe features, and the latter constitutes a stack of
arXiv Detail & Related papers (2021-05-10T04:57:32Z)
- MultiFace: A Generic Training Mechanism for Boosting Face Recognition Performance [26.207302802393684]
We propose a simple yet efficient training mechanism called MultiFace.
It approximates the original high-dimensional features by an ensemble of low-dimensional features.
It brings good interpretability to FR models via the clustering effect.
arXiv Detail & Related papers (2021-01-25T05:18:51Z)
- Semi-Siamese Training for Shallow Face Learning [78.7386209619276]
We introduce a novel training method named Semi-Siamese Training (SST).
A pair of Semi-Siamese networks constitutes the forward-propagation structure, and the training loss is computed with an updating gallery queue.
Our method is developed without extra dependencies and can thus be flexibly integrated with existing loss functions and network architectures.
arXiv Detail & Related papers (2020-07-16T15:20:04Z)
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information provided and is not responsible for any consequences of its use.