UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face
Recognition
- URL: http://arxiv.org/abs/2311.02523v1
- Date: Sat, 4 Nov 2023 23:00:40 GMT
- Title: UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face
Recognition
- Authors: Qiufu Li, Xi Jia, Jiancan Zhou, Linlin Shen and Jinming Duan
- Abstract summary: We propose a unified threshold integrated sample-to-sample based loss (USS loss).
USS loss features an explicit unified threshold for distinguishing positive from negative pairs.
We also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Sample-to-class-based face recognition models cannot fully explore the
cross-sample relationships among large numbers of facial images, while
sample-to-sample-based models require sophisticated pairing processes for
training. Furthermore, neither method satisfies the requirements of real-world
face verification applications, which expect a unified threshold separating
positive from negative facial pairs. In this paper, we propose a unified
threshold integrated sample-to-sample based loss (USS loss), which features an
explicit unified threshold for distinguishing positive from negative pairs.
Inspired by our USS loss, we also derive the sample-to-sample based softmax and
BCE losses, and discuss their relationship. Extensive evaluation on multiple
benchmark datasets, including MFR, IJB-C, LFW, CFP-FP, AgeDB, and MegaFace,
demonstrates that the proposed USS loss is highly efficient and can work
seamlessly with sample-to-class-based losses. The embedded loss (USS and
sample-to-class Softmax loss) overcomes the pitfalls of previous approaches,
and the trained facial model, UniTSFace, exhibits exceptional performance,
outperforming state-of-the-art methods such as CosFace, ArcFace, VPL,
AnchorFace, and UNPG. Our code is available.
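To make the unified-threshold idea concrete, the following is a minimal PyTorch sketch of a BCE-style sample-to-sample loss in which one learnable scalar serves as the shared threshold for every positive and negative pair. The function name uss_style_loss, the scale parameter, and the softplus formulation are illustrative assumptions, not the exact USS formulation from the paper.

```python
import torch
import torch.nn.functional as F

def uss_style_loss(embeddings, labels, threshold, scale=32.0):
    # Illustrative sketch only: a BCE-style sample-to-sample loss in which a
    # single learnable scalar `threshold` separates positive from negative
    # pairs. The exact USS loss is defined in the paper.
    z = F.normalize(embeddings, dim=1)            # project onto unit hypersphere
    sim = z @ z.t()                               # pairwise cosine similarities
    same = labels.unsqueeze(0) == labels.unsqueeze(1)
    eye = torch.eye(len(labels), dtype=torch.bool, device=sim.device)
    pos = sim[same & ~eye]                        # positive-pair similarities
    neg = sim[~same]                              # negative-pair similarities
    # Positives are pushed above the threshold and negatives below the *same*
    # unified threshold; softplus(-x) == -log(sigmoid(x)) gives BCE-style terms.
    loss_pos = F.softplus(-scale * (pos - threshold)).mean()
    loss_neg = F.softplus(scale * (neg - threshold)).mean()
    return loss_pos + loss_neg

# Usage: the threshold is a learnable parameter shared by all pairs.
embeddings = torch.randn(8, 128, requires_grad=True)
labels = torch.tensor([0, 0, 1, 1, 2, 2, 3, 3])
threshold = torch.nn.Parameter(torch.tensor(0.3))
loss = uss_style_loss(embeddings, labels, threshold)
loss.backward()
```

Because the threshold is optimized jointly with the embeddings, the same scalar can later serve as the single decision boundary that real-world verification systems expect, which is the property the abstract emphasizes.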
Related papers
- Fine Structure-Aware Sampling: A New Sampling Training Scheme for
Pixel-Aligned Implicit Models in Single-View Human Reconstruction [105.46091601932524]
We introduce Fine Structure-Aware Sampling (FSS) to train pixel-aligned implicit models for single-view human reconstruction.
FSS proactively adapts to the thickness and complexity of surfaces.
It also proposes a mesh thickness loss signal for pixel-aligned implicit models.
arXiv Detail & Related papers (2024-02-29T14:26:46Z) - Deep Boosting Multi-Modal Ensemble Face Recognition with Sample-Level
Weighting [11.39204323420108]
Deep convolutional neural networks have achieved remarkable success in face recognition.
The current training benchmarks exhibit an imbalanced quality distribution.
This poses issues for generalization on hard samples since they are underrepresented during training.
Inspired by the well-known AdaBoost, we propose a sample-level weighting approach to incorporate the importance of different samples into the FR loss (an illustrative sketch of this weighting idea appears after this list).
arXiv Detail & Related papers (2023-08-18T01:44:54Z) - Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets called EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings: blur, noise, low resolution, JPEG compression artifacts, and their combination (full degradation).
arXiv Detail & Related papers (2022-06-08T06:34:24Z) - SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition [74.13631562652836]
We propose a novel loss function, named sigmoid-constrained hypersphere loss (SFace).
SFace imposes intra-class and inter-class constraints on a hypersphere manifold, controlled by two sigmoid gradient re-scaling functions respectively.
It strikes a better balance between decreasing intra-class distances and preventing overfitting to label noise, yielding more robust deep face recognition models.
arXiv Detail & Related papers (2022-05-24T11:54:15Z) - AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
arXiv Detail & Related papers (2022-04-03T01:23:41Z) - MixFace: Improving Face Verification Focusing on Fine-grained Conditions [2.078506623954885]
We propose a novel loss function, MixFace, that combines classification and metric losses.
The superiority of MixFace in terms of effectiveness and robustness is demonstrated experimentally on various benchmark datasets.
arXiv Detail & Related papers (2021-11-02T16:34:54Z) - Unsupervised Learning Facial Parameter Regressor for Action Unit
Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z) - BioMetricNet: deep unconstrained face verification through learning of
metrics regularized onto Gaussian distributions [25.00475462213752]
We present BioMetricNet, a novel framework for deep unconstrained face verification.
The proposed approach does not impose any specific metric on facial features.
It shapes the decision space by learning a latent representation in which matching and non-matching pairs are mapped onto clearly separated and well-behaved target distributions.
arXiv Detail & Related papers (2020-08-13T17:22:46Z)
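For the AdaBoost-inspired sample-level weighting entry above, the following is an illustrative PyTorch sketch of the general idea: samples the model finds hard (low probability on the true class) receive larger weights in the face recognition loss. The focal-style weight (1 - p)^gamma and the gamma parameter are assumptions made for illustration; the paper's actual weighting scheme differs in its details.

```python
import torch
import torch.nn.functional as F

def sample_weighted_fr_loss(logits, labels, gamma=2.0):
    # Illustrative sketch of AdaBoost-inspired sample-level weighting:
    # per-sample cross-entropy is re-weighted so that hard, underrepresented
    # samples contribute more to the loss. Not the paper's exact scheme.
    ce = F.cross_entropy(logits, labels, reduction="none")  # per-sample CE
    with torch.no_grad():
        p_true = F.softmax(logits, dim=1).gather(1, labels.unsqueeze(1)).squeeze(1)
        weights = (1.0 - p_true) ** gamma    # low true-class prob => hard sample
        weights = weights / weights.mean()   # keep the overall loss scale stable
    return (weights * ce).mean()

# Usage with hypothetical logits from a face recognition classification head
logits = torch.randn(16, 1000, requires_grad=True)  # batch of 16, 1000 identities
labels = torch.randint(0, 1000, (16,))
loss = sample_weighted_fr_loss(logits, labels)
loss.backward()
```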