DBLFace: Domain-Based Labels for NIR-VIS Heterogeneous Face Recognition
- URL: http://arxiv.org/abs/2010.03771v1
- Date: Thu, 8 Oct 2020 05:22:47 GMT
- Title: DBLFace: Domain-Based Labels for NIR-VIS Heterogeneous Face Recognition
- Authors: Ha Le and Ioannis A. Kakadiaris
- Abstract summary: Domain-Based Label Face (DBLFace) is a learning approach based on the assumption that a subject is not represented by a single label but by a set of labels.
In particular, a set of two labels per subject, one for the NIR images and one for the VIS images, is used for training a NIR-VIS face recognition model.
DBLFace significantly improves the rank-1 identification rate by 6.7% on the EDGE20 dataset and achieves state-of-the-art performance on the CASIA NIR-VIS 2.0 dataset.
- Score: 5.076419064097733
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Deep learning-based domain-invariant feature learning methods are advancing
in near-infrared and visible (NIR-VIS) heterogeneous face recognition. However,
these methods are prone to overfitting due to the large intra-class variation
and the lack of NIR images for training. In this paper, we introduce
Domain-Based Label Face (DBLFace), a learning approach based on the assumption
that a subject is not represented by a single label but by a set of labels.
Each label represents images of a specific domain. In particular, a set of two
labels per subject, one for the NIR images and one for the VIS images, is used
for training a NIR-VIS face recognition model. The classification of images
into different domains reduces the intra-class variation and lessens the
negative impact of data imbalance in training. To train a network with sets of
labels, we introduce a domain-based angular margin loss and a maximum angular
loss to maintain the inter-class discrepancy and to enforce the close
relationship of labels in a set. Quantitative experiments confirm that DBLFace
significantly improves the rank-1 identification rate by 6.7% on the EDGE20
dataset and achieves state-of-the-art performance on the CASIA NIR-VIS 2.0
dataset.
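To make the label scheme and the two losses concrete, the following is a minimal sketch of how domain-based labels could be plugged into an ArcFace-style classification head. It is not the authors' released code: the class name DomainBasedMarginHead, the scale and margin values, and the specific form used here for the maximum angular loss (a hinge on the angle between a subject's NIR and VIS weight vectors) are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch (assumed, not the authors' code): each subject owns two
# classes, one for its NIR images and one for its VIS images. The head adds
# an ArcFace-style angular margin on the target (subject, domain) class and
# a hinge-style "maximum angular" term that keeps a subject's two class
# weights within a fixed angle of each other.
import torch
import torch.nn as nn
import torch.nn.functional as F


class DomainBasedMarginHead(nn.Module):   # hypothetical name
    def __init__(self, feat_dim=512, num_subjects=1000,
                 scale=64.0, margin=0.5, max_angle=0.3):
        super().__init__()
        # Row 2*s holds subject s's NIR weight, row 2*s + 1 its VIS weight.
        self.weight = nn.Parameter(torch.empty(2 * num_subjects, feat_dim))
        nn.init.xavier_uniform_(self.weight)
        self.scale, self.margin, self.max_angle = scale, margin, max_angle

    def forward(self, feats, subject_id, is_nir):
        # Map (subject, domain) pairs to domain-based class labels.
        labels = 2 * subject_id + (1 - is_nir.long())
        cos = F.linear(F.normalize(feats), F.normalize(self.weight))
        theta = torch.acos(cos.clamp(-1 + 1e-7, 1 - 1e-7))
        # Angular margin applied only to the target (subject, domain) class.
        target = F.one_hot(labels, cos.size(1)).bool()
        logits = torch.where(target, torch.cos(theta + self.margin), cos)
        cls_loss = F.cross_entropy(self.scale * logits, labels)
        # Keep each subject's NIR and VIS weights within max_angle radians.
        w = F.normalize(self.weight).view(-1, 2, self.weight.size(1))
        pair_cos = (w[:, 0] * w[:, 1]).sum(-1).clamp(-1 + 1e-7, 1 - 1e-7)
        pair_loss = F.relu(torch.acos(pair_cos) - self.max_angle).mean()
        return cls_loss + pair_loss


# Usage with embeddings from any face backbone (values are placeholders).
head = DomainBasedMarginHead(feat_dim=512, num_subjects=100)
feats = torch.randn(8, 512)               # backbone embeddings
subject_id = torch.randint(0, 100, (8,))  # subject indices
is_nir = torch.randint(0, 2, (8,))        # 1 for NIR images, 0 for VIS
loss = head(feats, subject_id, is_nir)
loss.backward()
```
The key point illustrated is that each subject owns two classifier weights, so each domain-specific class sees far less intra-class variation, while the pairing term keeps the two weights close; the exact form and weighting of the two losses in the paper may differ from this sketch.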
Related papers
- Rethinking the Domain Gap in Near-infrared Face Recognition [65.7871950460781]
Heterogeneous face recognition (HFR) involves the intricate task of matching face images across the visual domains of visible (VIS) and near-infrared (NIR).
Much of the existing literature on HFR identifies the domain gap as a primary challenge and directs efforts towards bridging it at either the input or feature level.
We observe that large neural networks, unlike their smaller counterparts, when pre-trained on large scale homogeneous VIS data, demonstrate exceptional zero-shot performance in HFR.
arXiv Detail & Related papers (2023-12-01T14:43:28Z)
- Fine-grained Recognition with Learnable Semantic Data Augmentation [68.48892326854494]
Fine-grained image recognition is a longstanding computer vision challenge.
We propose diversifying the training data at the feature-level to alleviate the discriminative region loss problem.
Our method significantly improves the generalization performance on several popular classification networks.
arXiv Detail & Related papers (2023-09-01T11:15:50Z)
- Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degenerate when training data are different from testing data.
We propose a novel adversarial information network (AIN) to address it.
arXiv Detail & Related papers (2023-05-23T02:14:11Z)
- Physically-Based Face Rendering for NIR-VIS Face Recognition [165.54414962403555]
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps.
We propose a novel method for paired NIR-VIS facial image generation.
To facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss.
arXiv Detail & Related papers (2022-11-11T18:48:16Z)
- NIR-to-VIS Face Recognition via Embedding Relations and Coordinates of the Pairwise Features [5.044100238869375]
We propose a 'Relation Module' which can simply be added on to any face recognition model.
The local features extracted from face image contain information of each component of the face.
With the proposed module, we achieve improvements of 14.81% in rank-1 accuracy and 15.47% in verification rate at 0.1% FAR.
arXiv Detail & Related papers (2022-08-04T02:53:44Z)
- Learning Discriminative Representations for Multi-Label Image Recognition [13.13795708478267]
We propose a unified deep network to learn discriminative features for the multi-label task.
By regularizing the whole network with the proposed loss, the performance of the well-known ResNet-101 is improved significantly.
arXiv Detail & Related papers (2021-07-23T12:10:46Z)
- SSKD: Self-Supervised Knowledge Distillation for Cross Domain Adaptive Person Re-Identification [25.96221714337815]
Domain adaptive person re-identification (re-ID) is a challenging task due to the large discrepancy between the source domain and the target domain.
Existing methods mainly attempt to generate pseudo labels for unlabeled target images by clustering algorithms.
We propose a Self-Supervised Knowledge Distillation (SSKD) technique containing two modules: identity learning and soft label learning.
arXiv Detail & Related papers (2020-09-13T10:12:02Z)
- Instance-Aware Graph Convolutional Network for Multi-Label Classification [55.131166957803345]
Graph convolutional neural network (GCN) has effectively boosted the multi-label image recognition task.
We propose an instance-aware graph convolutional neural network (IA-GCN) framework for multi-label classification.
arXiv Detail & Related papers (2020-08-19T12:49:28Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)