Relational Deep Feature Learning for Heterogeneous Face Recognition
- URL: http://arxiv.org/abs/2003.00697v3
- Date: Tue, 14 Jul 2020 11:06:22 GMT
- Title: Relational Deep Feature Learning for Heterogeneous Face Recognition
- Authors: MyeongAh Cho, Taeoh Kim, Ig-Jae Kim, Kyungjae Lee, and Sangyoun Lee
- Abstract summary: We propose a graph-structured module called Relational Graph Module (RGM) that extracts global relational information in addition to general facial features.
The proposed method outperforms other state-of-the-art methods on five Heterogeneous Face Recognition (HFR) databases.
- Score: 17.494718795454055
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Face Recognition (HFR) is a task that matches faces across two
different domains such as visible light (VIS), near-infrared (NIR), or the
sketch domain. Due to the lack of HFR databases, HFR methods usually exploit
features pre-trained on a large-scale visual database that contains general
facial information. However, these pre-trained features cause performance
degradation due to the texture discrepancy with the visual domain. With this
motivation, we propose a graph-structured module called Relational Graph Module
(RGM) that extracts global relational information in addition to general facial
features. Because each identity's relational information between intra-facial
parts is similar in any modality, modeling the relationships between features
can help cross-domain matching. Through the RGM, relation propagation
diminishes texture dependency without losing its advantages from the
pre-trained features. Furthermore, the RGM captures global facial geometrics
from locally correlated convolutional features to identify long-range
relationships. In addition, we propose a Node Attention Unit (NAU) that
performs node-wise recalibration to concentrate on the more informative nodes
arising from relation-based propagation. Furthermore, we suggest a novel
conditional-margin loss function (C-softmax) for the efficient projection
learning of the embedding vector in HFR. The proposed method outperforms other
state-of-the-art methods on five HFR databases. Furthermore, we demonstrate
performance improvement on three backbones because our module can be plugged
into any pre-trained face recognition backbone to overcome the limitations of a
small HFR database.
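The pipeline described in the abstract can be illustrated with a toy sketch. This is not the authors' implementation: it assumes each face is represented as a set of local feature vectors ("nodes"), models relations as softmax-normalized dot-product affinities, and stands in for the Node Attention Unit with a simple sigmoid gate per node. All function names here are hypothetical.

```python
# Toy sketch (NOT the paper's code) of relation-based propagation
# followed by node-wise recalibration, assuming:
#   - a face is a list of N local feature vectors of dimension D,
#   - pairwise relations are softmax-normalized dot products,
#   - node attention is a scalar sigmoid gate from each node's mean activation.
import math

def softmax(xs):
    m = max(xs)                      # subtract max for numerical stability
    es = [math.exp(x - m) for x in xs]
    s = sum(es)
    return [e / s for e in es]

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def relational_propagation(nodes):
    """Mix every node with all others, weighted by pairwise affinity."""
    out = []
    for ni in nodes:
        weights = softmax([dot(ni, nj) for nj in nodes])
        mixed = [sum(w * nj[d] for w, nj in zip(weights, nodes))
                 for d in range(len(ni))]
        out.append(mixed)
    return out

def node_attention(nodes):
    """Scalar gate per node (a crude stand-in for the Node Attention Unit)."""
    gates = [1.0 / (1.0 + math.exp(-sum(n) / len(n))) for n in nodes]
    return [[g * x for x in n] for g, n in zip(gates, nodes)]

# Example: 3 nodes of dimension 4
feats = [[0.1, 0.5, -0.2, 0.3],
         [0.4, -0.1, 0.2, 0.0],
         [-0.3, 0.2, 0.1, 0.5]]
refined = node_attention(relational_propagation(feats))
print(len(refined), len(refined[0]))  # node count and dimension are preserved
```

The key property this sketch illustrates is that the relational mixing depends only on the geometry among nodes, not on their absolute texture values, which is the intuition behind why such a module can reduce texture dependency across domains.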
Related papers
- UGMAE: A Unified Framework for Graph Masked Autoencoders [67.75493040186859]
We propose UGMAE, a unified framework for graph masked autoencoders.
We first develop an adaptive feature mask generator to account for the unique significance of nodes.
We then design a ranking-based structure reconstruction objective joint with feature reconstruction to capture holistic graph information.
arXiv Detail & Related papers (2024-02-12T19:39:26Z)
- Rethinking the Domain Gap in Near-infrared Face Recognition [65.7871950460781]
Heterogeneous face recognition (HFR) involves the intricate task of matching face images across the visual domains of visible (VIS) and near-infrared (NIR).
Much of the existing literature on HFR identifies the domain gap as a primary challenge and directs efforts towards bridging it at either the input or feature level.
We observe that large neural networks, unlike their smaller counterparts, when pre-trained on large scale homogeneous VIS data, demonstrate exceptional zero-shot performance in HFR.
arXiv Detail & Related papers (2023-12-01T14:43:28Z)
- Trading-off Mutual Information on Feature Aggregation for Face Recognition [12.803514943105657]
We propose a technique to aggregate the outputs of two state-of-the-art (SOTA) deep Face Recognition (FR) models.
In our approach, we leverage the transformer attention mechanism to exploit the relationship between different parts of two feature maps.
To evaluate the effectiveness of our proposed method, we conducted experiments on popular benchmarks and compared our results with state-of-the-art algorithms.
arXiv Detail & Related papers (2023-09-22T18:48:38Z)
- NIR-to-VIS Face Recognition via Embedding Relations and Coordinates of the Pairwise Features [5.044100238869375]
We propose a 'Relation Module' that can simply be added on to any face recognition model.
The local features extracted from a face image contain information about each component of the face.
With the proposed module, we achieve improvements of 14.81% in rank-1 accuracy and 15.47% in verification rate at 0.1% FAR.
arXiv Detail & Related papers (2022-08-04T02:53:44Z)
- G$^2$DA: Geometry-Guided Dual-Alignment Learning for RGB-Infrared Person Re-Identification [3.909938091041451]
RGB-IR person re-identification aims to retrieve person-of-interest between heterogeneous modalities.
This paper presents a Geometry-Guided Dual-Alignment learning framework (G$^2$DA) to tackle sample-level modality difference.
arXiv Detail & Related papers (2021-06-15T03:14:31Z)
- A-FMI: Learning Attributions from Deep Networks via Feature Map Importance [58.708607977437794]
Gradient-based attribution methods can aid in the understanding of convolutional neural networks (CNNs).
The redundancy of attribution features and the gradient saturation problem are challenges that attribution methods still face.
We propose a new concept, feature map importance (FMI), to refine the contribution of each feature map, and a novel attribution method via FMI, to address the gradient saturation problem.
arXiv Detail & Related papers (2021-04-12T14:54:44Z)
- Cross-Domain Facial Expression Recognition: A Unified Evaluation Benchmark and Adversarial Graph Learning [85.6386289476598]
We develop a novel adversarial graph representation adaptation (AGRA) framework for cross-domain holistic-local feature co-adaptation.
We conduct extensive and fair evaluations on several popular benchmarks and show that the proposed AGRA framework outperforms previous state-of-the-art methods.
arXiv Detail & Related papers (2020-08-03T15:00:31Z)
- High-Order Information Matters: Learning Relation and Topology for Occluded Person Re-Identification [84.43394420267794]
We propose a novel framework by learning high-order relation and topology information for discriminative features and robust alignment.
Our framework significantly outperforms the state-of-the-art by 6.5% mAP on the Occluded-Duke dataset.
arXiv Detail & Related papers (2020-03-18T12:18:35Z)
- Graph Representation Learning via Graphical Mutual Information Maximization [86.32278001019854]
We propose a novel concept, Graphical Mutual Information (GMI), to measure the correlation between input graphs and high-level hidden representations.
We develop an unsupervised learning model trained by maximizing GMI between the input and output of a graph neural encoder.
arXiv Detail & Related papers (2020-02-04T08:33:49Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.