DVG-Face: Dual Variational Generation for Heterogeneous Face Recognition
- URL: http://arxiv.org/abs/2009.09399v2
- Date: Sat, 16 Jan 2021 10:39:50 GMT
- Title: DVG-Face: Dual Variational Generation for Heterogeneous Face Recognition
- Authors: Chaoyou Fu, Xiang Wu, Yibo Hu, Huaibo Huang, Ran He
- Abstract summary: We formulate HFR as a dual generation problem, and tackle it via a novel Dual Variational Generation (DVG-Face) framework.
We integrate abundant identity information of large-scale visible data into the joint distribution.
Massive new diverse paired heterogeneous images with the same identity can be generated from noise.
- Score: 85.94331736287765
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Heterogeneous Face Recognition (HFR) refers to matching cross-domain faces
and plays a crucial role in public security. Nevertheless, HFR is confronted
with challenges from large domain discrepancy and insufficient heterogeneous
data. In this paper, we formulate HFR as a dual generation problem, and tackle
it via a novel Dual Variational Generation (DVG-Face) framework. Specifically,
a dual variational generator is elaborately designed to learn the joint
distribution of paired heterogeneous images. However, the small-scale paired
heterogeneous training data may limit the identity diversity of sampling. In
order to break through the limitation, we propose to integrate abundant
identity information of large-scale visible data into the joint distribution.
Furthermore, a pairwise identity preserving loss is imposed on the generated
paired heterogeneous images to ensure their identity consistency. As a
consequence, massive new diverse paired heterogeneous images with the same
identity can be generated from noise. The identity consistency and identity
diversity properties allow us to employ these generated images to train the HFR
network via a contrastive learning mechanism, yielding both domain-invariant
and discriminative embedding features. Concretely, the generated paired
heterogeneous images are regarded as positive pairs, and the images obtained
from different samplings are considered as negative pairs. Our method achieves
superior performance over state-of-the-art methods on seven challenging
databases belonging to five HFR tasks, including NIR-VIS, Sketch-Photo,
Profile-Frontal Photo, Thermal-VIS, and ID-Camera. The related code will be
released at https://github.com/BradyFU.
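The training recipe in the abstract can be illustrated with a short, hedged PyTorch sketch: sample noise, generate a paired heterogeneous batch with the dual generator, and train the HFR network contrastively, treating each generated pair as a positive and images from different samplings as negatives. `dual_generator`, `hfr_net`, and all function names below are hypothetical placeholders rather than the released DVG-Face code; InfoNCE is used as one concrete instance of the contrastive mechanism the abstract mentions.
```python
import torch
import torch.nn.functional as F

def pairwise_identity_loss(id_feat_nir, id_feat_vis):
    """Illustrative pairwise identity-preserving term: pull the identity
    features of a generated NIR/VIS pair together, since both images are
    supposed to depict the same (new) identity."""
    return 1.0 - F.cosine_similarity(id_feat_nir, id_feat_vis, dim=1).mean()

def contrastive_hfr_loss(emb_nir, emb_vis, temperature=0.1):
    """InfoNCE-style objective: the i-th generated NIR/VIS pair is a positive;
    images produced from different noise samplings serve as negatives."""
    emb_nir = F.normalize(emb_nir, dim=1)             # (B, D) unit-norm embeddings
    emb_vis = F.normalize(emb_vis, dim=1)
    logits = emb_nir @ emb_vis.t() / temperature      # (B, B) cross-domain similarities
    targets = torch.arange(emb_nir.size(0), device=logits.device)
    # Symmetric cross-entropy keeps matched pairs on the diagonal.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

def hfr_train_step(dual_generator, hfr_net, optimizer, batch_size=32, noise_dim=256):
    """One HFR training step driven entirely by generated data."""
    z = torch.randn(batch_size, noise_dim)            # one noise sample per synthetic identity
    with torch.no_grad():                             # generator assumed pre-trained and frozen
        fake_nir, fake_vis = dual_generator(z)        # paired images sharing an identity per row
    loss = contrastive_hfr_loss(hfr_net(fake_nir), hfr_net(fake_vis))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```
In this reading, the pairwise identity-preserving term belongs to the generator training stage (the abstract imposes it on the generated pairs), which is why the HFR step above only consumes the frozen generator's outputs.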
Related papers
- ID$^3$: Identity-Preserving-yet-Diversified Diffusion Models for Synthetic Face Recognition [60.15830516741776]
Synthetic face recognition (SFR) aims to generate datasets that mimic the distribution of real face data.
We introduce a diffusion-fueled SFR model termed ID$^3$.
ID$^3$ employs an ID-preserving loss to generate diverse yet identity-consistent facial appearances.
arXiv Detail & Related papers (2024-09-26T06:46:40Z)
- Exploring Invariant Representation for Visible-Infrared Person Re-Identification [77.06940947765406]
Cross-spectral person re-identification, which aims to associate identities with pedestrians across different spectra, faces the main challenge of modality discrepancy.
In this paper, we address the problem at both the image level and the feature level in an end-to-end hybrid learning framework named robust feature mining network (RFM).
Experimental results on two standard cross-spectral person re-identification datasets, RegDB and SYSU-MM01, demonstrate state-of-the-art performance.
arXiv Detail & Related papers (2023-02-02T05:24:50Z)
- Hierarchical Forgery Classifier On Multi-modality Face Forgery Clues [61.37306431455152]
We propose a novel Hierarchical Forgery Classifier for Multi-modality Face Forgery Detection (HFC-MFFD).
The HFC-MFFD learns a robust patch-based hybrid representation to enhance forgery authentication in multiple-modality scenarios.
A specific hierarchical face forgery classifier is proposed to alleviate the class imbalance problem and further boost detection performance.
arXiv Detail & Related papers (2022-12-30T10:54:29Z)
- T-Person-GAN: Text-to-Person Image Generation with Identity-Consistency and Manifold Mix-Up [16.165889084870116]
We present an end-to-end approach for generating high-resolution person images conditioned on text only.
We develop an effective generative model to produce person images with two novel mechanisms.
arXiv Detail & Related papers (2022-08-18T07:41:02Z)
- Heterogeneous Face Recognition via Face Synthesis with Identity-Attribute Disentanglement [33.42679052386639]
Heterogeneous Face Recognition (HFR) aims to match faces across different domains.
We propose a new HFR method named Face Synthesis with Identity-Attribute Disentanglement (FSIAD).
FSIAD decouples face images into identity-related representations and identity-unrelated representations (called attributes).
arXiv Detail & Related papers (2022-06-10T03:01:33Z)
- Dual Spoof Disentanglement Generation for Face Anti-spoofing with Depth Uncertainty Learning [54.15303628138665]
Face anti-spoofing (FAS) plays a vital role in preventing face recognition systems from presentation attacks.
Existing face anti-spoofing datasets lack diversity due to insufficient identities and insignificant variance.
We propose a Dual Spoof Disentanglement Generation framework to tackle this challenge by "anti-spoofing via generation".
arXiv Detail & Related papers (2021-12-01T15:36:59Z)
- Multi-Margin based Decorrelation Learning for Heterogeneous Face Recognition [90.26023388850771]
This paper presents a deep neural network approach to extract decorrelation representations in a hyperspherical space for cross-domain face images.
The proposed framework can be divided into two components: heterogeneous representation network and decorrelation representation learning.
Experimental results on two challenging heterogeneous face databases show that our approach achieves superior performance on both verification and recognition tasks.
arXiv Detail & Related papers (2020-05-25T07:01:12Z)
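For readers unfamiliar with the term, a generic decorrelation penalty on hyperspherical (L2-normalized) embeddings can look like the sketch below. This is an assumption-laden illustration of the general idea only, not the multi-margin formulation of the paper above, whose details are not given in this summary.
```python
import torch
import torch.nn.functional as F

def decorrelation_penalty(embeddings):
    """Generic decorrelation regularizer on hyperspherical embeddings:
    penalize off-diagonal entries of the feature covariance so individual
    dimensions carry less redundant, domain-specific information."""
    z = F.normalize(embeddings, dim=1)            # project onto the unit hypersphere
    z = z - z.mean(dim=0, keepdim=True)           # center each dimension
    cov = (z.t() @ z) / (z.size(0) - 1)           # (D, D) covariance estimate
    off_diag = cov - torch.diag(torch.diagonal(cov))
    return (off_diag ** 2).sum() / z.size(1)
```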
This list is automatically generated from the titles and abstracts of the papers in this site.