Learning Domain and Pose Invariance for Thermal-to-Visible Face
Recognition
- URL: http://arxiv.org/abs/2211.09350v1
- Date: Thu, 17 Nov 2022 05:24:02 GMT
- Title: Learning Domain and Pose Invariance for Thermal-to-Visible Face
Recognition
- Authors: Cedric Nimpa Fondje and Shuowen Hu and Benjamin S. Riggan
- Abstract summary: We propose a novel Domain and Pose Invariant Framework (DPIF) that simultaneously learns domain and pose invariant representations.
Our proposed framework is composed of modified networks for extracting the most correlated intermediate representations from off-pose thermal and frontal visible face imagery.
Although DPIF focuses on learning to match off-pose thermal to frontal visible faces, we also show that DPIF enhances performance when matching frontal thermal face images to frontal visible face images.
- Score: 6.454199265634863
- License: http://creativecommons.org/licenses/by-sa/4.0/
- Abstract: Interest in thermal-to-visible face recognition has grown significantly over
the last decade due to advancements in thermal infrared cameras and analytics
beyond the visible spectrum. Despite large discrepancies between thermal and
visible spectra, existing approaches bridge domain gaps by either synthesizing
visible faces from thermal faces or by learning the cross-spectrum image
representations. These approaches typically work well with frontal facial
imagery collected at varying ranges and expressions, but exhibit significantly
reduced performance when matching thermal faces with varying poses to frontal
visible faces. We propose a novel Domain and Pose Invariant Framework (DPIF)
that simultaneously learns domain and pose invariant representations. Our proposed
framework is composed of modified networks for extracting the most correlated
intermediate representations from off-pose thermal and frontal visible face
imagery, a sub-network to jointly bridge domain and pose gaps, and a joint-loss
function comprised of cross-spectrum and pose-correction losses. We demonstrate
efficacy and advantages of the proposed method by evaluating on three
thermal-visible datasets: ARL Visible-to-Thermal Face, ARL Multimodal Face, and
Tufts Face. Although DPIF focuses on learning to match off-pose thermal to
frontal visible faces, we also show that DPIF enhances performance when
matching frontal thermal face images to frontal visible face images.
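The abstract describes a joint-loss function combining a cross-spectrum term with a pose-correction term. The paper's exact formulation is not given here; as a purely illustrative sketch (function names, embedding inputs, and weights are all hypothetical), such a weighted combination over identity embeddings might look like:

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity between two embedding vectors
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(a @ b)

def joint_loss(thermal_emb, visible_emb, corrected_emb, frontal_emb,
               w_cross=1.0, w_pose=1.0):
    # Cross-spectrum term: pull thermal and visible embeddings of the
    # same identity together.
    l_cross = cosine_distance(thermal_emb, visible_emb)
    # Pose-correction term: pull the pose-corrected representation
    # toward a frontal reference representation.
    l_pose = cosine_distance(corrected_emb, frontal_emb)
    # Weighted sum of the two terms.
    return w_cross * l_cross + w_pose * l_pose
```

In practice both terms would be computed over mini-batches of learned deep features and balanced by tuned weights; this sketch only shows the two-term structure the abstract names.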
Related papers
- When Visible-to-Thermal Facial GAN Beats Conditional Diffusion [36.33347149799959]
Telemedicine applications could benefit from thermal imagery, but conventional computers rely on RGB cameras and lack thermal sensors.
We propose the Visible-to-Thermal Facial GAN (VTF-GAN) that is specifically designed to generate high-resolution thermal faces.
Results show that VTF-GAN achieves high quality, crisp, and perceptually realistic thermal faces using a combined set of patch, temperature, perceptual, and Fourier Transform losses.
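The VTF-GAN summary names a combined set of patch, temperature, perceptual, and Fourier Transform losses. The paper's exact definitions are not reproduced here; as an illustrative sketch (the log-magnitude spectral form and the equal default weights are assumptions, not the paper's formulation), a Fourier-based term folded into a weighted sum might look like:

```python
import numpy as np

def fourier_loss(generated, target):
    # L1 distance between log-magnitude spectra of two images;
    # one common form of a "Fourier transform loss".
    fg = np.fft.fft2(generated)
    ft = np.fft.fft2(target)
    return float(np.mean(np.abs(np.log1p(np.abs(fg)) - np.log1p(np.abs(ft)))))

def combined_loss(generated, target, perceptual, patch, temperature,
                  weights=(1.0, 1.0, 1.0, 1.0)):
    # Weighted sum of the four loss terms named in the summary;
    # perceptual, patch, and temperature terms are assumed to be
    # precomputed scalars from their own sub-networks.
    w_f, w_perc, w_patch, w_temp = weights
    return (w_f * fourier_loss(generated, target)
            + w_perc * perceptual + w_patch * patch + w_temp * temperature)
```

A spectral term of this kind penalizes frequency-domain discrepancies that pixel-wise losses can miss, which is one motivation for including it alongside perceptual and patch terms.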
arXiv Detail & Related papers (2023-02-18T18:02:31Z)
- A Synthesis-Based Approach for Thermal-to-Visible Face Verification [105.63410428506536]
This paper presents an algorithm that achieves state-of-the-art performance on the ARL-VTF and TUFTS multi-spectral face datasets.
We also present MILAB-VTF(B), a challenging multi-spectral face dataset composed of paired thermal and visible videos.
arXiv Detail & Related papers (2021-08-21T17:59:56Z)
- Heterogeneous Face Frontalization via Domain Agnostic Learning [74.86585699909459]
We propose a domain agnostic learning-based generative adversarial network (DAL-GAN) which can synthesize frontal views in the visible domain from thermal faces with pose variations.
DAL-GAN consists of a generator with an auxiliary classifier and two discriminators which capture both local and global texture discriminations for better synthesis.
arXiv Detail & Related papers (2021-07-17T20:41:41Z)
- Simultaneous Face Hallucination and Translation for Thermal to Visible Face Verification using Axial-GAN [74.22129648654783]
We introduce the task of thermal-to-visible face verification from low-resolution thermal images.
We propose Axial-Generative Adversarial Network (Axial-GAN) to synthesize high-resolution visible images for matching.
arXiv Detail & Related papers (2021-04-13T22:34:28Z)
- A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset [62.193924313292875]
We present the DEVCOM Army Research Laboratory Visible-Thermal Face dataset (ARL-VTF)
With over 500,000 images from 395 subjects, the ARL-VTF dataset represents, to the best of our knowledge, the largest collection of paired visible and thermal face images to date.
This paper presents benchmark results and analysis on thermal face landmark detection and thermal-to-visible face verification by evaluating state-of-the-art models on the ARL-VTF dataset.
arXiv Detail & Related papers (2021-01-07T17:17:12Z)
- Multi-Scale Thermal to Visible Face Verification via Attribute Guided Synthesis [55.29770222566124]
We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery for cross-modal matching.
A novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes.
A pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
arXiv Detail & Related papers (2020-04-20T01:45:05Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
This list is automatically generated from the titles and abstracts of the papers in this site.