MixFace: Improving Face Verification Focusing on Fine-grained Conditions
- URL: http://arxiv.org/abs/2111.01717v1
- Date: Tue, 2 Nov 2021 16:34:54 GMT
- Title: MixFace: Improving Face Verification Focusing on Fine-grained Conditions
- Authors: Junuk Jung, Sungbin Son, Joochan Park, Yongjun Park, Seonhoon Lee,
Heung-Seon Oh
- Abstract summary: We propose a novel loss function, MixFace, that combines classification and metric losses.
The superiority of MixFace in terms of effectiveness and robustness is demonstrated experimentally on various benchmark datasets.
- Score: 2.078506623954885
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The performance of face recognition has become saturated for public benchmark
datasets such as LFW, CFP-FP, and AgeDB, owing to the rapid advances in CNNs.
However, the effects of faces with various fine-grained conditions on face recognition (FR) models
have not been investigated because of the absence of such datasets. This paper
analyzes their effects in terms of different conditions and loss functions
using K-FACE, a recently introduced FR dataset with fine-grained conditions. We
propose a novel loss function, MixFace, that combines classification and metric
losses. The superiority of MixFace in terms of effectiveness and robustness is
demonstrated experimentally on various benchmark datasets.
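The abstract states only that MixFace combines a classification loss with a metric loss, without giving the formulation here. As a hedged illustration (the function names, contrastive margin, and `alpha` weighting are assumptions, not the paper's actual formula), such a combination might be sketched as:

```python
import math

def softmax_cross_entropy(logits, label):
    """Classification loss: negative log-probability of the true class."""
    m = max(logits)
    log_sum = m + math.log(sum(math.exp(z - m) for z in logits))
    return log_sum - logits[label]

def contrastive_metric_loss(emb_a, emb_b, same_identity, margin=0.5):
    """Metric loss: pull same-identity embeddings together, push
    different-identity embeddings at least `margin` apart."""
    d = math.sqrt(sum((a - b) ** 2 for a, b in zip(emb_a, emb_b)))
    if same_identity:
        return d ** 2
    return max(0.0, margin - d) ** 2

def mixed_loss(logits, label, emb_a, emb_b, same_identity, alpha=0.5):
    """Hypothetical weighted sum of the two objectives; `alpha` is an
    assumed balancing hyperparameter, not taken from the paper."""
    cls = softmax_cross_entropy(logits, label)
    met = contrastive_metric_loss(emb_a, emb_b, same_identity)
    return alpha * cls + (1.0 - alpha) * met
```

The general design choice such losses share is that the classification term shapes class-discriminative logits while the metric term directly constrains embedding distances.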
Related papers
- UniTSFace: Unified Threshold Integrated Sample-to-Sample Loss for Face
Recognition [35.66000285310775]
We propose a unified threshold integrated sample-to-sample based loss (USS loss).
USS loss features an explicit unified threshold for distinguishing positive from negative pairs.
We also derive the sample-to-sample based softmax and BCE losses, and discuss their relationship.
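The summary's key idea is a single explicit threshold separating positive-pair from negative-pair similarities. A minimal sketch of that idea (a smooth hinge via softplus; the exact USS formulation is not given here, so this is illustrative only):

```python
import math

def softplus(x):
    """Numerically stable log(1 + exp(x))."""
    return max(x, 0.0) + math.log1p(math.exp(-abs(x)))

def unified_threshold_loss(pos_sims, neg_sims, threshold):
    """Hypothetical sample-to-sample loss with one shared threshold:
    penalize positive-pair similarities below `threshold` and
    negative-pair similarities above it."""
    pos_term = sum(softplus(threshold - s) for s in pos_sims)
    neg_term = sum(softplus(s - threshold) for s in neg_sims)
    return (pos_term + neg_term) / (len(pos_sims) + len(neg_sims))
```

Under this sketch, a batch whose positive pairs sit above the threshold and negative pairs below it incurs a lower loss than one where the two groups are swapped.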
arXiv Detail & Related papers (2023-11-04T23:00:40Z)
- DifFIQA: Face Image Quality Assessment Using Denoising Diffusion
Probabilistic Models [1.217503190366097]
Face image quality assessment (FIQA) techniques aim to mitigate these performance degradations.
We present a powerful new FIQA approach, named DifFIQA, which relies on denoising diffusion probabilistic models (DDPM).
Because the diffusion-based perturbations are computationally expensive, we also distill the knowledge encoded in DifFIQA into a regression-based quality predictor, called DifFIQA(R).
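The distillation step described above amounts to fitting a cheap regressor to reproduce the expensive teacher's quality scores. A toy sketch under assumed names (a linear student fitted by gradient descent; the actual DifFIQA(R) student is a learned network, not this):

```python
def distill_quality_predictor(feats, teacher_scores, lr=0.1, steps=200):
    """Fit a linear 'student' w*x + b to mimic teacher quality scores
    by gradient descent on the mean squared error (illustrative only)."""
    w, b = 0.0, 0.0
    n = len(feats)
    for _ in range(steps):
        grad_w = sum((w * x + b - y) * x for x, y in zip(feats, teacher_scores)) / n
        grad_b = sum((w * x + b - y) for x, y in zip(feats, teacher_scores)) / n
        w -= lr * grad_w
        b -= lr * grad_b
    return w, b
```

At inference time only the student runs, which is the point of the distillation: the diffusion-based teacher is queried once per training sample, never at test time.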
arXiv Detail & Related papers (2023-05-09T21:03:13Z)
- Personalized Federated Learning under Mixture of Distributions [98.25444470990107]
We propose a novel approach to Personalized Federated Learning (PFL), which utilizes Gaussian mixture models (GMM) to fit the input data distributions across diverse clients.
FedGMM possesses an additional advantage of adapting to new clients with minimal overhead, and it also enables uncertainty quantification.
Empirical evaluations on synthetic and benchmark datasets demonstrate the superior performance of our method in both PFL classification and novel sample detection.
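The GMM fitting mentioned above rests on the standard E-step: computing each sample's posterior responsibility over mixture components. A self-contained 1-D sketch (illustrative of the mechanism, not FedGMM's actual code):

```python
import math

def gaussian_pdf(x, mu, sigma):
    """Density of a 1-D Gaussian at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def responsibilities(x, weights, mus, sigmas):
    """E-step of a Gaussian mixture: posterior probability that sample x
    was drawn from each component. A new client's data could be
    soft-assigned to the global components this way."""
    joint = [w * gaussian_pdf(x, m, s) for w, m, s in zip(weights, mus, sigmas)]
    total = sum(joint)
    return [j / total for j in joint]
```

Because the assignment is soft and cheap to compute, adapting to a new client needs no retraining of the component parameters, which is consistent with the low-overhead adaptation the summary claims.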
arXiv Detail & Related papers (2023-05-01T20:04:46Z)
- Chasing Fairness Under Distribution Shift: A Model Weight Perturbation
Approach [72.19525160912943]
We first theoretically demonstrate the inherent connection between distribution shift, data perturbation, and model weight perturbation.
We then analyze the sufficient conditions to guarantee fairness for the target dataset.
Motivated by these sufficient conditions, we propose robust fairness regularization (RFR).
arXiv Detail & Related papers (2023-03-06T17:19:23Z)
- Blind Face Restoration: Benchmark Datasets and a Baseline Model [63.053331687284064]
Blind Face Restoration (BFR) aims to construct a high-quality (HQ) face image from its corresponding low-quality (LQ) input.
We first synthesize two blind face restoration benchmark datasets, EDFace-Celeb-1M (BFR128) and EDFace-Celeb-150K (BFR512).
State-of-the-art methods are benchmarked on them under five settings: blur, noise, low resolution, JPEG compression artifacts, and their combination (full degradation).
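Benchmarks like these are built by degrading clean images. A toy sketch of such a pipeline using only two of the listed degradations, noise and low resolution (the names and parameters are assumptions; the real benchmark also applies blur and JPEG artifacts):

```python
import random

def degrade(image, noise_sigma=0.05, downscale=2, seed=0):
    """Toy degradation pipeline on a 2-D list of pixel intensities:
    additive Gaussian noise followed by naive subsampling."""
    rng = random.Random(seed)  # fixed seed for reproducibility
    noisy = [[px + rng.gauss(0.0, noise_sigma) for px in row] for row in image]
    # keep every `downscale`-th row and column to simulate low resolution
    return [row[::downscale] for row in noisy[::downscale]]
```

A restoration model would then be trained on (degraded, clean) pairs produced this way.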
arXiv Detail & Related papers (2022-06-08T06:34:24Z)
- AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
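AdaFace's stated idea is a margin that adapts to image quality, using the feature norm as a quality proxy. As a hedged illustration only (the clipping bounds and linear scaling below are assumptions; the published AdaFace formula is different):

```python
def adaptive_margin(feature_norm, base_margin=0.4, norm_lo=10.0, norm_hi=40.0):
    """Hypothetical quality-adaptive margin: scale the margin linearly
    with the feature norm, clipped and normalized to [0, 1], so that
    low-quality (low-norm) samples receive a smaller margin."""
    q = (feature_norm - norm_lo) / (norm_hi - norm_lo)
    q = min(1.0, max(0.0, q))  # clip the quality proxy to [0, 1]
    return base_margin * q
```

The motivation for this shape of adaptiveness is that hard margins on unidentifiable low-quality faces encourage overfitting to noise, so their contribution is softened.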
arXiv Detail & Related papers (2022-04-03T01:23:41Z)
- Escaping Data Scarcity for High-Resolution Heterogeneous Face
Hallucination [68.78903256687697]
In Heterogeneous Face Recognition (HFR), the objective is to match faces across two different domains such as visible and thermal.
Recent methods attempting to fill the gap via synthesis have achieved promising results, but their performance is still limited by the scarcity of paired training data.
In this paper, we propose a new face hallucination paradigm for HFR, which not only enables data-efficient synthesis but also allows to scale up model training without breaking any privacy policy.
arXiv Detail & Related papers (2022-03-30T20:44:33Z)
- On Recognizing Occluded Faces in the Wild [10.420394952839242]
We present the Real World Occluded Faces dataset.
This dataset contains faces with both the upper face occluded, due to sunglasses, and the lower face occluded, due to masks.
It is observed that the performance drop is far less when the models are tested on synthetically generated occluded faces.
arXiv Detail & Related papers (2021-09-08T14:20:10Z)
- Semantic Neighborhood-Aware Deep Facial Expression Recognition [14.219890078312536]
A novel method is proposed to formulate semantic perturbation and select unreliable samples during training.
Experiments show the effectiveness of the proposed method and state-of-the-art results are reported.
arXiv Detail & Related papers (2020-04-27T11:48:17Z)
- Suppressing Uncertainties for Large-Scale Facial Expression Recognition [81.51495681011404]
This paper proposes a simple yet efficient Self-Cure Network (SCN) which suppresses the uncertainties efficiently and prevents deep networks from over-fitting uncertain facial images.
Results on public benchmarks demonstrate that our SCN outperforms current state-of-the-art methods with 88.14% on RAF-DB, 60.23% on AffectNet, and 89.35% on FERPlus.
arXiv Detail & Related papers (2020-02-24T17:24:36Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information it provides and is not responsible for any consequences of its use.