Cross-Domain Identification for Thermal-to-Visible Face Recognition
- URL: http://arxiv.org/abs/2008.08473v1
- Date: Wed, 19 Aug 2020 14:24:04 GMT
- Title: Cross-Domain Identification for Thermal-to-Visible Face Recognition
- Authors: Cedric Nimpa Fondje, Shuowen Hu, Nathaniel J. Short, Benjamin S.
Riggan
- Abstract summary: This paper proposes a novel domain adaptation framework that combines a new feature mapping sub-network with existing deep feature models.
New cross-domain identity and domain invariance loss functions for thermal-to-visible face recognition alleviate the requirement for precisely co-registered and synchronized imagery.
We analyze the viability of the proposed framework for more challenging tasks, such as non-frontal thermal-to-visible face recognition.
- Score: 6.224425156703344
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Recent advances in domain adaptation, especially those applied to
heterogeneous facial recognition, typically rely upon restrictive Euclidean
loss functions (e.g., $L_2$ norm) which perform best when images from two
different domains (e.g., visible and thermal) are co-registered and temporally
synchronized. This paper proposes a novel domain adaptation framework that
combines a new feature mapping sub-network with existing deep feature models,
which are based on modified network architectures (e.g., VGG16 or Resnet50).
This framework is optimized by introducing new cross-domain identity and domain
invariance loss functions for thermal-to-visible face recognition, which
alleviates the requirement for precisely co-registered and synchronized
imagery. We provide extensive analysis of both features and loss functions
used, and compare the proposed domain adaptation framework with
state-of-the-art feature based domain adaptation models on a difficult dataset
containing facial imagery collected at varying ranges, poses, and expressions.
Moreover, we analyze the viability of the proposed framework for more
challenging tasks, such as non-frontal thermal-to-visible face recognition.
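The contrast the abstract draws between a restrictive Euclidean loss and the proposed cross-domain losses can be illustrated with a small numpy sketch. This is not the paper's implementation; the function names, the cosine-similarity formulation, the margin value, and the mean-matching invariance term are all illustrative assumptions:

```python
import numpy as np

def l2_loss(f_vis, f_thm):
    """Euclidean (L2) loss: penalizes per-dimension differences, so it
    implicitly assumes the two feature vectors come from co-registered,
    synchronized imagery."""
    return float(np.mean((f_vis - f_thm) ** 2))

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def cross_domain_identity_loss(f_vis, f_thm, same_identity, margin=0.5):
    """Illustrative identity loss: pull same-identity features together
    across domains; push different identities below a similarity margin.
    The margin of 0.5 is an arbitrary choice for this sketch."""
    s = cosine(f_vis, f_thm)
    return (1.0 - s) if same_identity else max(0.0, s - margin)

def domain_invariance_loss(feats_vis, feats_thm):
    """Illustrative invariance term: match first-order feature statistics
    between the visible and thermal batches (a simple mean-matching
    stand-in for the paper's domain invariance objective)."""
    return float(np.mean((feats_vis.mean(axis=0) - feats_thm.mean(axis=0)) ** 2))
```

Unlike `l2_loss`, the cosine-based identity term compares directions of whole feature vectors rather than aligned coordinates, which is why such losses tolerate imagery that is not pixel-wise registered.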
Related papers
- Heterogeneous Face Recognition Using Domain Invariant Units [4.910937238451485]
We leverage a pretrained face recognition model as a teacher network to learn domain-invariant network layers called Domain-Invariant Units (DIU).
The proposed DIU can be trained effectively even with a limited amount of paired training data, in a contrastive distillation framework.
This proposed approach has the potential to enhance pretrained models, making them more adaptable to a wider range of variations in data.
arXiv Detail & Related papers (2024-04-22T16:58:37Z) - Bridging the Gap: Heterogeneous Face Recognition with Conditional
Adaptive Instance Modulation [7.665392786787577]
We introduce a novel Conditional Adaptive Instance Modulation (CAIM) module that can be integrated into pre-trained Face Recognition networks.
The CAIM block modulates intermediate feature maps to adapt the style of the target modality, effectively bridging the domain gap.
Our proposed method allows for end-to-end training with a minimal number of paired samples.
arXiv Detail & Related papers (2023-07-13T19:17:04Z) - Adaptive Face Recognition Using Adversarial Information Network [57.29464116557734]
Face recognition models often degrade when training data differ from testing data.
We propose a novel adversarial information network (AIN) to address this issue.
arXiv Detail & Related papers (2023-05-23T02:14:11Z) - SelFSR: Self-Conditioned Face Super-Resolution in the Wild via Flow
Field Degradation Network [12.976199676093442]
We propose a novel domain-adaptive degradation network for face super-resolution in the wild.
Our model achieves state-of-the-art performance on both CelebA and a real-world face dataset.
arXiv Detail & Related papers (2021-12-20T17:04:00Z) - A Unified Architecture of Semantic Segmentation and Hierarchical
Generative Adversarial Networks for Expression Manipulation [52.911307452212256]
We develop a unified architecture of semantic segmentation and hierarchical GANs.
A unique advantage of our framework is that on the forward pass, the semantic segmentation network conditions the generative model.
We evaluate our method on two challenging facial expression translation benchmarks, AffectNet and RaFD, and a semantic segmentation benchmark, CelebAMask-HQ.
arXiv Detail & Related papers (2021-12-08T22:06:31Z) - Spatially-Adaptive Image Restoration using Distortion-Guided Networks [51.89245800461537]
We present a learning-based solution for restoring images suffering from spatially-varying degradations.
We propose SPAIR, a network design that harnesses distortion-localization information and dynamically adjusts to difficult regions in the image.
arXiv Detail & Related papers (2021-08-19T11:02:25Z) - Heterogeneous Face Frontalization via Domain Agnostic Learning [74.86585699909459]
We propose a domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations.
DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discriminations for better synthesis.
arXiv Detail & Related papers (2021-07-17T20:41:41Z) - Simultaneous Face Hallucination and Translation for Thermal to Visible
Face Verification using Axial-GAN [74.22129648654783]
We introduce the task of thermal-to-visible face verification from low-resolution thermal images.
We propose Axial-Generative Adversarial Network (Axial-GAN) to synthesize high-resolution visible images for matching.
arXiv Detail & Related papers (2021-04-13T22:34:28Z) - A NIR-to-VIS face recognition via part adaptive and relation attention
module [4.822208985805956]
In the face recognition application scenario, we need to process facial images captured in various conditions, such as at night by near-infrared (NIR) surveillance cameras.
The illumination difference between NIR and visible-light (VIS) causes a domain gap between facial images, and the variations in pose and emotion also make facial matching more difficult.
We propose a part relation attention module that crops facial parts obtained through a semantic mask and performs relational modeling using each of these representative features.
arXiv Detail & Related papers (2021-02-01T08:13:39Z) - Phase Consistent Ecological Domain Adaptation [76.75730500201536]
We focus on the task of semantic segmentation, where annotated synthetic data are plentiful but annotating real data is laborious.
The first criterion, inspired by visual psychophysics, is that the map between the two image domains be phase-preserving.
The second criterion aims to leverage ecological statistics, or regularities in the scene which are manifest in any image of it, regardless of the characteristics of the illuminant or the imaging sensor.
arXiv Detail & Related papers (2020-04-10T06:58:03Z)
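The phase-preserving criterion from the last entry above can be illustrated with a short numpy sketch: combine one image's Fourier magnitude with another's Fourier phase, so the mapped image changes its spectrum while keeping the original phase structure. This is an illustrative stand-in, not the paper's method; the function names and the 2-D FFT formulation are assumptions:

```python
import numpy as np

def phase_preserving_map(src, style):
    """Map `src` toward `style`'s spectrum while preserving `src`'s
    Fourier phase: take the magnitude from `style` and the phase
    from `src`, then invert the transform."""
    F_src = np.fft.fft2(src)
    F_sty = np.fft.fft2(style)
    mixed = np.abs(F_sty) * np.exp(1j * np.angle(F_src))
    # For real inputs the mixed spectrum is Hermitian-symmetric,
    # so the inverse transform is real up to numerical error.
    return np.real(np.fft.ifft2(mixed))

def phase_error(a, b):
    """Mean absolute wrapped difference between the Fourier phases
    of two images."""
    d = np.angle(np.fft.fft2(a)) - np.angle(np.fft.fft2(b))
    return float(np.mean(np.abs(np.angle(np.exp(1j * d)))))
```

Because only the magnitude is swapped, the output keeps the phase of `src` (which carries most of the spatial structure) while adopting the spectral statistics of `style`, which is the intuition behind requiring the domain map to be phase-preserving.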
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.