X2-Softmax: Margin Adaptive Loss Function for Face Recognition
- URL: http://arxiv.org/abs/2312.05281v2
- Date: Tue, 19 Dec 2023 12:20:22 GMT
- Title: X2-Softmax: Margin Adaptive Loss Function for Face Recognition
- Authors: Jiamu Xu, Xiaoxiang Liu, Xinyuan Zhang, Yain-Whar Si, Xiaofan Li,
Zheng Shi, Ke Wang, Xueyuan Gong
- Abstract summary: We propose a new angular margin loss named X2-Softmax.
X2-Softmax loss has adaptive angular margins that increase as the angle between different classes grows.
We have trained the neural network with X2-Softmax loss on the MS1Mv3 dataset.
- Score: 6.497884034818003
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Learning the discriminative features of different faces is an important task
in face recognition. By extracting face features with neural networks, the
similarity between different face images can be measured easily, which makes
face recognition possible. To enhance the neural network's face feature
separability, incorporating an angular margin during training is common
practice. State-of-the-art loss functions CosFace and ArcFace apply fixed
margins between weights of classes to enhance the inter-class separation of
face features. Since the distribution of samples in the training set is
imbalanced, similarities between different identities are unequal. Therefore,
using an inappropriately fixed angular margin may make the model difficult to
converge or leave the face features insufficiently discriminative. It is more
intuitive for the margins to be angle-adaptive, increasing as the angles between
classes grow. In this paper, we propose a new angular margin loss named
X2-Softmax. The X2-Softmax loss has adaptive angular margins, providing a margin
that increases as the angle between different classes grows. This angle-adaptive
margin keeps the model flexible and effectively improves face recognition
performance. We
have trained the neural network with X2-Softmax loss on the MS1Mv3 dataset and
tested it on several evaluation benchmarks to demonstrate the effectiveness and
superiority of our loss function.
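For intuition, a minimal sketch of an angle-adaptive margin head follows. It is not the paper's exact formulation: the target-class logit is replaced by a downward-opening quadratic ("x²"-shaped) function of the angle, in contrast to the fixed-margin logits of CosFace and ArcFace; the quadratic form, the coefficients a, h, k, and the class name are illustrative assumptions only.

```python
# Illustrative sketch only, not the authors' implementation: a classification
# head in which the usual fixed-margin target logit
#   CosFace: s * (cos(theta_y) - m),  ArcFace: s * cos(theta_y + m)
# is replaced by a quadratic function of the angle, so the penalty on the
# target class grows as the angle grows. The quadratic form and the
# coefficients a, h, k are assumptions for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class QuadraticAngularMarginHead(nn.Module):
    def __init__(self, embedding_dim, num_classes, scale=64.0, a=-0.5, h=0.3, k=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.empty(num_classes, embedding_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale, self.a, self.h, self.k = scale, a, h, k

    def forward(self, embeddings, labels):
        # Cosine of the angle between L2-normalized features and class weights.
        cos_theta = F.linear(F.normalize(embeddings), F.normalize(self.weight))
        theta = torch.acos(cos_theta.clamp(-1.0 + 1e-7, 1.0 - 1e-7))

        # Quadratic target logit: monotonically decreasing on [0, pi] and lower
        # than cos(theta), so the target class is penalized relative to the
        # plain softmax, with the gap widening as the angle increases.
        target_logit = self.a * (theta + self.h) ** 2 + self.k

        one_hot = F.one_hot(labels, num_classes=self.weight.size(0)).bool()
        logits = torch.where(one_hot, target_logit, cos_theta)
        return F.cross_entropy(self.scale * logits, labels)
```

At training time such a head would replace the usual fully connected softmax layer on top of the face embedding network.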
Related papers
- SymFace: Additional Facial Symmetry Loss for Deep Face Recognition [1.5612101323427952]
This research examines the natural phenomenon of facial symmetry in the face verification problem.
We show that the two output embedding vectors of split faces must project close to each other in the output embedding space.
Inspired by this concept, we penalize the network based on the disparity between the embeddings of the symmetrical pair of split faces.
arXiv Detail & Related papers (2024-09-18T09:06:55Z) - InterFace:Adjustable Angular Margin Inter-class Loss for Deep Face
Recognition [7.158500469489626]
We propose a novel loss function, InterFace, to improve the discriminative power of the model.
Our InterFace has advanced the state-of-the-art face recognition performance on five out of thirteen mainstream benchmarks.
arXiv Detail & Related papers (2022-10-05T04:38:29Z) - SubFace: Learning with Softmax Approximation for Face Recognition [3.262192371833866]
SubFace is a softmax approximation method that employs subspace features to improve face recognition performance.
Comprehensive experiments conducted on benchmark datasets demonstrate that our method can significantly improve the performance of a vanilla CNN baseline.
arXiv Detail & Related papers (2022-08-24T12:31:08Z) - SFace: Sigmoid-Constrained Hypersphere Loss for Robust Face Recognition [74.13631562652836]
We propose a novel loss function named the sigmoid-constrained hypersphere loss (SFace).
SFace imposes intra-class and inter-class constraints on a hypersphere manifold, which are controlled by two sigmoid gradient re-scale functions respectively.
It strikes a better balance between decreasing intra-class distances and preventing overfitting to label noise, contributing to more robust deep face recognition models.
arXiv Detail & Related papers (2022-05-24T11:54:15Z) - Meta Balanced Network for Fair Face Recognition [51.813457201437195]
We systematically and scientifically study bias from both data and algorithm aspects.
We propose a novel meta-learning algorithm, called Meta Balanced Network (MBN), which learns adaptive margins in large margin loss.
Extensive experiments show that MBN successfully mitigates bias and learns more balanced performance for people with different skin tones in face recognition.
arXiv Detail & Related papers (2022-05-13T10:25:44Z) - AdaFace: Quality Adaptive Margin for Face Recognition [56.99208144386127]
We introduce another aspect of adaptiveness in the loss function, namely the image quality.
We propose a new loss function that emphasizes samples of different difficulties based on their image quality.
Our method, AdaFace, improves the face recognition performance over the state-of-the-art (SoTA) on four datasets.
arXiv Detail & Related papers (2022-04-03T01:23:41Z) - ElasticFace: Elastic Margin Loss for Deep Face Recognition [6.865656740940772]
Learning discriminative face features plays a major role in building high-performing face recognition models.
Recent state-of-the-art face recognition solutions propose incorporating a fixed penalty margin into the classification loss function, the softmax loss.
We propose elastic margin loss (ElasticFace) that allows flexibility in the push for class separability.
arXiv Detail & Related papers (2021-09-20T10:31:50Z) - Frequency-aware Discriminative Feature Learning Supervised by
Single-Center Loss for Face Forgery Detection [89.43987367139724]
Face forgery detection is raising ever-increasing interest in computer vision.
Recent works have achieved sound results, but some problems remain that cannot be ignored.
A novel frequency-aware discriminative feature learning framework is proposed in this paper.
arXiv Detail & Related papers (2021-03-16T14:17:17Z) - Partial FC: Training 10 Million Identities on a Single Machine [23.7030637489807]
We analyze the optimization goal of softmax-based loss functions and the difficulty of training massive identities.
Experiments demonstrate no loss of accuracy when training with only 10% of classes randomly sampled for softmax-based loss functions.
We also implement a very efficient distributed sampling algorithm, taking into account both model accuracy and training efficiency (a minimal sketch of this class-sampling idea appears after this list of related papers).
arXiv Detail & Related papers (2020-10-11T11:15:26Z) - Loss Function Search for Face Recognition [75.79325080027908]
We develop a reward-guided search method to automatically obtain the best candidate.
Experimental results on a variety of face recognition benchmarks have demonstrated the effectiveness of our method.
arXiv Detail & Related papers (2020-07-10T03:40:10Z) - Multi-Margin based Decorrelation Learning for Heterogeneous Face
Recognition [90.26023388850771]
This paper presents a deep neural network approach to extracting decorrelated representations in a hyperspherical space for cross-domain face images.
The proposed framework can be divided into two components: heterogeneous representation network and decorrelation representation learning.
Experimental results on two challenging heterogeneous face databases show that our approach achieves superior performance on both verification and recognition tasks.
arXiv Detail & Related papers (2020-05-25T07:01:12Z)