Improving 2D face recognition via fine-level facial depth generation and RGB-D complementary feature learning
- URL: http://arxiv.org/abs/2305.04426v1
- Date: Mon, 8 May 2023 02:33:59 GMT
- Title: Improving 2D face recognition via fine-level facial depth generation and RGB-D complementary feature learning
- Authors: Wenhao Hu
- Abstract summary: We propose a fine-grained facial depth generation network and an improved multimodal complementary feature learning network.
Experiments on the Lock3DFace dataset and the IIIT-D dataset show that the proposed FFDGNet and IMCFLNet improve the accuracy of RGB-D face recognition.
- License: http://creativecommons.org/licenses/by/4.0/
- Abstract: Face recognition in complex scenes faces severe challenges from
perturbations such as pose deformation, poor illumination, and partial occlusion.
Some methods use depth estimation to obtain depth maps corresponding to RGB
images and thereby improve face recognition accuracy. However, the depth maps
they generate suffer from blur, which introduces noise into subsequent RGB-D
face recognition tasks. In addition, existing RGB-D face recognition methods
cannot fully extract complementary features. In this paper, we propose a
fine-grained facial depth generation network (FFDGNet) and an improved
multimodal complementary feature learning network (IMCFLNet). Extensive
experiments on the Lock3DFace dataset and the IIIT-D dataset show that the
proposed FFDGNet and IMCFLNet improve the accuracy of RGB-D face recognition
and achieve state-of-the-art performance.
Related papers
- Confidence-Aware RGB-D Face Recognition via Virtual Depth Synthesis [48.59382455101753]
2D face recognition encounters challenges in unconstrained environments due to varying illumination, occlusion, and pose.
Recent studies focus on RGB-D face recognition to improve robustness by incorporating depth information.
In this work, we first construct a diverse depth dataset generated by 3D Morphable Models for depth model pre-training.
Then, we propose a domain-independent pre-training framework that utilizes readily available pre-trained RGB and depth models to separately perform face recognition without needing additional paired data for retraining.
arXiv Detail & Related papers (2024-03-11T09:12:24Z) - Symmetric Uncertainty-Aware Feature Transmission for Depth Super-Resolution [52.582632746409665]
We propose a novel Symmetric Uncertainty-aware Feature Transmission (SUFT) for color-guided DSR.
Our method achieves superior performance compared to state-of-the-art methods.
arXiv Detail & Related papers (2023-06-01T06:35:59Z) - Physically-Based Face Rendering for NIR-VIS Face Recognition [165.54414962403555]
Near infrared (NIR) to Visible (VIS) face matching is challenging due to the significant domain gaps.
We propose a novel method for paired NIR-VIS facial image generation.
To facilitate the identity feature learning, we propose an IDentity-based Maximum Mean Discrepancy (ID-MMD) loss.
arXiv Detail & Related papers (2022-11-11T18:48:16Z) - High-Accuracy RGB-D Face Recognition via Segmentation-Aware Face Depth Estimation and Mask-Guided Attention Network [16.50097148165777]
Deep learning approaches have achieved highly accurate face recognition by training the models with very large face image datasets.
Unlike the availability of large 2D face image datasets, there is a lack of large 3D face datasets available to the public.
This paper proposes two CNN models to improve the RGB-D face recognition task.
arXiv Detail & Related papers (2021-12-22T07:46:23Z) - Total Scale: Face-to-Body Detail Reconstruction from Sparse RGBD Sensors [52.38220261632204]
Flat facial surfaces frequently occur in the PIFu-based reconstruction results.
We propose a two-scale PIFu representation to enhance the quality of the reconstructed facial details.
Experiments demonstrate the effectiveness of our approach in vivid facial details and deforming body shapes.
arXiv Detail & Related papers (2021-12-03T18:46:49Z) - Depth as Attention for Face Representation Learning [11.885178256393893]
A novel depth-guided attention mechanism is proposed for deep multi-modal face recognition using low-cost RGB-D sensors.
Our solution achieves average (increased) accuracies of 87.3% (+5.0%), 99.1% (+0.9%), 99.7% (+0.6%) and 95.3% (+0.5%) for the four datasets respectively.
arXiv Detail & Related papers (2021-01-03T16:19:34Z) - 3D Dense Geometry-Guided Facial Expression Synthesis by Adversarial Learning [54.24887282693925]
We propose a novel framework to exploit 3D dense (depth and surface normals) information for expression manipulation.
We use an off-the-shelf state-of-the-art 3D reconstruction model to estimate the depth and create a large-scale RGB-Depth dataset.
Our experiments demonstrate that the proposed method outperforms the competitive baseline and existing arts by a large margin.
arXiv Detail & Related papers (2020-09-30T17:12:35Z) - Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method [90.26041504667451]
We show that it is possible to adopt active illumination to enhance state-of-the-art 2D face recognition approaches with 3D features.
The proposed ideas can significantly boost face recognition performance and dramatically improve the robustness to spoofing attacks.
arXiv Detail & Related papers (2020-04-03T20:17:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this list (including all information) and is not responsible for any consequences of its use.