Multi-Scale Thermal to Visible Face Verification via Attribute Guided
Synthesis
- URL: http://arxiv.org/abs/2004.09502v2
- Date: Sun, 14 Feb 2021 01:50:08 GMT
- Title: Multi-Scale Thermal to Visible Face Verification via Attribute Guided
Synthesis
- Authors: Xing Di, Benjamin S. Riggan, Shuowen Hu, Nathaniel J. Short, Vishal M.
Patel
- Abstract summary: We use attributes extracted from visible images to synthesize attribute-preserved visible images from thermal imagery for cross-modal matching.
A novel multi-scale generator is proposed to synthesize the visible image from the thermal image guided by the extracted attributes.
A pre-trained VGG-Face network is leveraged to extract features from the synthesized image and the input visible image for verification.
- Score: 55.29770222566124
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Thermal-to-visible face verification is a challenging problem due to the
large domain discrepancy between the modalities. Existing approaches either
attempt to synthesize visible faces from thermal faces or learn
domain-invariant robust features from these modalities for cross-modal
matching. In this paper, we use attributes extracted from visible images to
synthesize attribute-preserved visible images from thermal imagery for
cross-modal matching. A pre-trained attribute predictor network is used to
extract the attributes from the visible image. Then, a novel multi-scale
generator is proposed to synthesize the visible image from the thermal image
guided by the extracted attributes. Finally, a pre-trained VGG-Face network is
leveraged to extract features from the synthesized image and the input visible
image for verification. Extensive experiments evaluated on three datasets (ARL
Face Database, Visible and Thermal Paired Face Database, and Tufts Face
Database) demonstrate that the proposed method achieves state-of-the-art
performance. In particular, it achieves around 2.41%, 2.85% and 1.77%
improvements in Equal Error Rate (EER) over the state-of-the-art methods on the
ARL Face Database, Visible and Thermal Paired Face Database, and Tufts Face
Database, respectively. An extended dataset (ARL Face Dataset volume III)
consisting of polarimetric thermal faces of 121 subjects is also introduced in
this paper. Furthermore, an ablation study is conducted to demonstrate the
effectiveness of different modules in the proposed method.
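The abstract describes a three-stage matching pipeline: predict attributes from the visible gallery image, synthesize an attribute-preserved visible face from the thermal probe guided by those attributes, and compare VGG-Face features of the synthesized and real visible images. The following is a minimal sketch of that flow, assuming PyTorch; every module defined here (AttributePredictor, MultiScaleGenerator, VGGFaceEmbedder) is a toy stand-in invented for illustration, not the authors' networks and not the actual pre-trained VGG-Face model.

# Sketch of the described verification pipeline (illustrative stand-ins only).
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttributePredictor(nn.Module):
    # Hypothetical stand-in: predicts a vector of facial attributes from a visible image.
    def __init__(self, num_attrs=40):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(16, num_attrs))
    def forward(self, vis):
        return torch.sigmoid(self.net(vis))

class MultiScaleGenerator(nn.Module):
    # Hypothetical stand-in for the proposed generator: synthesizes a visible
    # image from a thermal image, conditioned on the attribute vector.
    def __init__(self, num_attrs=40):
        super().__init__()
        self.net = nn.Conv2d(3 + num_attrs, 3, 3, padding=1)
    def forward(self, thermal, attrs):
        b, _, h, w = thermal.shape
        attr_maps = attrs.view(b, -1, 1, 1).expand(b, attrs.shape[1], h, w)
        return torch.tanh(self.net(torch.cat([thermal, attr_maps], dim=1)))

class VGGFaceEmbedder(nn.Module):
    # Hypothetical stand-in for the pre-trained VGG-Face feature extractor.
    def __init__(self, dim=256):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
                                 nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                 nn.Linear(32, dim))
    def forward(self, img):
        return F.normalize(self.net(img), dim=1)

def verification_score(thermal, visible, predictor, generator, embedder):
    # Higher cosine similarity -> more likely the same identity.
    with torch.no_grad():
        attrs = predictor(visible)                 # attributes from the visible image
        synthesized = generator(thermal, attrs)    # attribute-guided visible synthesis
        return F.cosine_similarity(embedder(synthesized), embedder(visible)).item()

# Usage with random tensors standing in for aligned face crops:
thermal = torch.rand(1, 3, 112, 112)
visible = torch.rand(1, 3, 112, 112)
score = verification_score(thermal, visible,
                           AttributePredictor(), MultiScaleGenerator(), VGGFaceEmbedder())
print(f"match score: {score:.3f}")

Performance is reported as Equal Error Rate (EER), the operating point where the false accept rate equals the false reject rate. A simple illustrative way to estimate EER from lists of genuine and impostor scores (not the paper's evaluation code):

import numpy as np

def equal_error_rate(genuine, impostor):
    # Sweep thresholds over all observed scores and return the point where FAR ~= FRR.
    genuine, impostor = np.asarray(genuine), np.asarray(impostor)
    eer, best_gap = 1.0, np.inf
    for t in np.unique(np.concatenate([genuine, impostor])):
        far = np.mean(impostor >= t)   # false accept rate
        frr = np.mean(genuine < t)     # false reject rate
        if abs(far - frr) < best_gap:
            best_gap, eer = abs(far - frr), (far + frr) / 2
    return eer

print(equal_error_rate([0.9, 0.8, 0.7, 0.85], [0.3, 0.4, 0.2, 0.6]))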
Related papers
- Precise Facial Landmark Detection by Reference Heatmap Transformer [52.417964103227696]
We propose a novel Reference Heatmap Transformer (RHT) for more precise facial landmark detection.
The experimental results from challenging benchmark datasets demonstrate that our proposed method outperforms the state-of-the-art methods in the literature.
arXiv Detail & Related papers (2023-03-14T12:26:48Z)
- Learning Domain and Pose Invariance for Thermal-to-Visible Face Recognition [6.454199265634863]
We propose a novel Domain and Pose Invariant Framework that simultaneously learns domain and pose invariant representations.
Our proposed framework is composed of modified networks for extracting the most correlated intermediate representations from off-pose thermal and frontal visible face imagery.
Although DPIF focuses on learning to match off-pose thermal to frontal visible faces, we also show that DPIF enhances performance when matching frontal thermal face images to frontal visible face images.
arXiv Detail & Related papers (2022-11-17T05:24:02Z)
- A Synthesis-Based Approach for Thermal-to-Visible Face Verification [105.63410428506536]
This paper presents an algorithm that achieves state-of-the-art performance on the ARL-VTF and TUFTS multi-spectral face datasets.
We also present MILAB-VTF(B), a challenging multi-spectral face dataset composed of paired thermal and visible videos.
arXiv Detail & Related papers (2021-08-21T17:59:56Z)
- Simultaneous Face Hallucination and Translation for Thermal to Visible Face Verification using Axial-GAN [74.22129648654783]
We introduce the task of thermal-to-visible face verification from low-resolution thermal images.
We propose the Axial-Generative Adversarial Network (Axial-GAN) to synthesize high-resolution visible images for matching.
arXiv Detail & Related papers (2021-04-13T22:34:28Z)
- A Large-Scale, Time-Synchronized Visible and Thermal Face Dataset [62.193924313292875]
We present the DEVCOM Army Research Laboratory Visible-Thermal Face dataset (ARL-VTF).
With over 500,000 images from 395 subjects, the ARL-VTF dataset represents, to the best of our knowledge, the largest collection of paired visible and thermal face images to date.
This paper presents benchmark results and analysis on thermal face landmark detection and thermal-to-visible face verification by evaluating state-of-the-art models on the ARL-VTF dataset.
arXiv Detail & Related papers (2021-01-07T17:17:12Z)
- Unsupervised Learning Facial Parameter Regressor for Action Unit Intensity Estimation via Differentiable Renderer [51.926868759681014]
We present a framework to predict the facial parameters based on a bone-driven face model (BDFM) under different views.
The proposed framework consists of a feature extractor, a generator, and a facial parameter regressor.
arXiv Detail & Related papers (2020-08-20T09:49:13Z)
- Unsupervised Facial Action Unit Intensity Estimation via Differentiable Optimization [45.07851622835555]
We propose an unsupervised framework GE-Net for facial AU intensity estimation from a single image.
Our framework performs differentiable optimization, which iteratively updates the facial parameters to match the input image; a minimal sketch of this kind of fitting loop follows the list.
Experimental results demonstrate that our method can achieve state-of-the-art results compared with existing methods.
arXiv Detail & Related papers (2020-04-13T12:56:28Z)
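The last related entry describes fitting facial parameters by differentiable optimization. A minimal sketch of that kind of iterative matching loop, assuming PyTorch and a toy linear layer standing in for the differentiable renderer (this is not the GE-Net implementation):

import torch

torch.manual_seed(0)
num_params, img_dim = 12, 64 * 64
renderer = torch.nn.Linear(num_params, img_dim)   # stand-in differentiable renderer
target = torch.rand(img_dim)                      # stand-in for the input face image

params = torch.zeros(num_params, requires_grad=True)
optimizer = torch.optim.Adam([params], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    rendered = renderer(params)                                 # render from current parameters
    loss = torch.nn.functional.mse_loss(rendered, target)       # photometric matching loss
    loss.backward()                                             # gradients w.r.t. facial parameters
    optimizer.step()                                            # iterative parameter update

print(f"final matching loss: {loss.item():.4f}")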