3D Face Alignment Through Fusion of Head Pose Information and Features
- URL: http://arxiv.org/abs/2308.13327v1
- Date: Fri, 25 Aug 2023 12:01:24 GMT
- Title: 3D Face Alignment Through Fusion of Head Pose Information and Features
- Authors: Jaehyun So, Youngjoon Han
- Abstract summary: We propose a novel method that employs head pose information to improve face alignment performance.
The proposed network structure performs robust face alignment through a dual-dimensional network.
We experimentally assessed the correlation between the predicted facial landmarks and head pose information, as well as variations in the accuracy of facial landmarks.
- Score: 0.6526824510982799
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The ability of humans to infer head poses from face shapes, and vice versa,
indicates a strong correlation between the two. Accordingly, recent studies on
face alignment have employed head pose information to predict facial landmarks
in computer vision tasks. In this study, we propose a novel method that employs
head pose information to improve face alignment performance by fusing said
information with the feature maps of a face alignment network, rather than
simply using it to initialize facial landmarks. Furthermore, the proposed
network structure performs robust face alignment through a dual-dimensional
network using multidimensional features represented by 2D feature maps and a 3D
heatmap. For effective dense face alignment, we also propose a prediction
method for facial geometric landmarks through training based on knowledge
distillation using predicted keypoints. We experimentally assessed the
correlation between the predicted facial landmarks and head pose information,
as well as variations in the accuracy of facial landmarks with respect to the
quality of head pose information. In addition, we demonstrated the
effectiveness of the proposed method through a competitive performance
comparison with state-of-the-art methods on the AFLW2000-3D, AFLW, and BIWI
datasets.
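The abstract describes fusing head pose information into the feature maps of a face alignment network rather than only using it to initialize landmarks. The paper's exact architecture is not given here, so the following is a minimal PyTorch sketch of one plausible fusion scheme, in which a predicted (yaw, pitch, roll) vector is embedded and broadcast-added to a 2D feature map before landmark heatmap regression; all module names, shapes, and the fusion operator are illustrative assumptions, not the authors' published design.
```python
# Minimal sketch of head-pose / feature-map fusion (assumed design, not the
# authors' published architecture). Names, shapes, and the fusion operator
# (broadcast add of an embedded pose vector) are illustrative only.
import torch
import torch.nn as nn

class PoseFusedAlignmentHead(nn.Module):
    def __init__(self, feat_channels: int = 256, num_landmarks: int = 68):
        super().__init__()
        # Embed the 3-DoF head pose (yaw, pitch, roll) into the channel space
        # of the backbone feature map.
        self.pose_embed = nn.Sequential(
            nn.Linear(3, feat_channels),
            nn.ReLU(inplace=True),
            nn.Linear(feat_channels, feat_channels),
        )
        # Predict one 2D heatmap per landmark from the fused features.
        self.heatmap_head = nn.Conv2d(feat_channels, num_landmarks, kernel_size=1)

    def forward(self, feats: torch.Tensor, head_pose: torch.Tensor) -> torch.Tensor:
        # feats:     (B, C, H, W) backbone feature map
        # head_pose: (B, 3) yaw/pitch/roll in a normalized form
        pose_code = self.pose_embed(head_pose)          # (B, C)
        fused = feats + pose_code[:, :, None, None]     # broadcast over H, W
        return self.heatmap_head(fused)                 # (B, L, H, W) heatmaps

# Usage with dummy tensors
if __name__ == "__main__":
    head = PoseFusedAlignmentHead()
    feats = torch.randn(2, 256, 64, 64)
    pose = torch.randn(2, 3)
    print(head(feats, pose).shape)  # torch.Size([2, 68, 64, 64])
```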
Related papers
- Analyzing the Impact of Shape & Context on the Face Recognition Performance of Deep Networks [2.0099255688059907]
We analyze how changing the underlying 3D shape of the base identity in face images can distort their overall appearance.
Our experiments demonstrate the significance of facial shape in accurate face matching and underpin the importance of contextual data for network training.
arXiv Detail & Related papers (2022-08-05T05:32:07Z)
- FaceTuneGAN: Face Autoencoder for Convolutional Expression Transfer Using Neural Generative Adversarial Networks [0.7043489166804575]
We present FaceTuneGAN, a new 3D face model representation decomposing and encoding separately facial identity and facial expression.
We propose a first adaptation of image-to-image translation networks, which have been used successfully in the 2D domain, to 3D face geometry.
arXiv Detail & Related papers (2021-12-01T14:42:03Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct a 3D face from images alone, without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Robust Face-Swap Detection Based on 3D Facial Shape Information [59.32489266682952]
Face-swap images and videos have increasingly been exploited by malicious attackers to discredit key figures.
Previous pixel-level, artifact-based detection techniques focus on unclear patterns while ignoring available semantic clues.
We propose a biometric-information-based method that fully exploits appearance and shape features for face-swap detection of key figures.
arXiv Detail & Related papers (2021-04-28T09:35:48Z)
- An Efficient Multitask Neural Network for Face Alignment, Head Pose Estimation and Face Tracking [9.39854778804018]
We propose an efficient multitask face alignment, face tracking, and head pose estimation network (ATPN).
ATPN achieves improved performance compared to previous state-of-the-art methods while using fewer parameters and FLOPs.
arXiv Detail & Related papers (2021-03-13T04:41:15Z)
- Learning 3D Face Reconstruction with a Pose Guidance Network [49.13404714366933]
We present a self-supervised learning approach to monocular 3D face reconstruction with a pose guidance network (PGN).
First, we unveil the bottleneck of pose estimation in prior parametric 3D face learning methods and propose to utilize 3D face landmarks for estimating pose parameters (a generic pose-from-landmarks sketch follows the related-papers list below).
With our specially designed PGN, our model can learn from both faces with fully labeled 3D landmarks and unlimited unlabeled in-the-wild face images.
arXiv Detail & Related papers (2020-10-09T06:11:17Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors capturing sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over state-of-the-art methods.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- Dual-Attention GAN for Large-Pose Face Frontalization [59.689836951934694]
We present a novel Dual-Attention Generative Adversarial Network (DA-GAN) for photo-realistic face frontalization.
Specifically, a self-attention-based generator is introduced to integrate local features with their long-range dependencies.
A novel face-attention-based discriminator is applied to emphasize local features of face regions.
arXiv Detail & Related papers (2020-02-17T20:00:56Z)
- It's Written All Over Your Face: Full-Face Appearance-Based Gaze Estimation [82.16380486281108]
We propose an appearance-based method that only takes the full face image as input.
Our method encodes the face image using a convolutional neural network with spatial weights applied on the feature maps.
We show that our full-face method significantly outperforms the state of the art for both 2D and 3D gaze estimation.
arXiv Detail & Related papers (2016-11-27T15:00:10Z)
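The last entry above mentions encoding the full face with a CNN whose feature maps are modulated by spatial weights. As a rough, hedged illustration of that general idea (not the cited paper's exact mechanism), here is a minimal PyTorch layer that learns a per-location weight map and multiplies it into the features.
```python
# Illustrative spatial-weighting layer: a generic sketch of weighting CNN
# feature maps by location, not the exact mechanism of the cited
# gaze-estimation paper.
import torch
import torch.nn as nn

class SpatialWeighting(nn.Module):
    def __init__(self, in_channels: int):
        super().__init__()
        # Collapse channels to a single-channel weight map in [0, 1].
        self.weight_net = nn.Sequential(
            nn.Conv2d(in_channels, in_channels // 2, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(in_channels // 2, 1, kernel_size=1),
            nn.Sigmoid(),
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        w = self.weight_net(feats)   # (B, 1, H, W) per-location weights
        return feats * w             # emphasize informative face regions

# Usage: reweight a (B, 128, 14, 14) feature map
x = torch.randn(4, 128, 14, 14)
print(SpatialWeighting(128)(x).shape)  # torch.Size([4, 128, 14, 14])
```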
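The Pose Guidance Network entry above proposes estimating pose parameters from 3D face landmarks. A standard way to recover a rigid head pose from two corresponding 3D landmark sets is Kabsch/Procrustes alignment, sketched below with NumPy; this is a generic method, not the PGN paper's actual estimator, and the landmark template is synthetic.
```python
# Generic Kabsch/Procrustes sketch: recover rotation R and translation t that
# best align predicted 3D landmarks to a canonical landmark template.
# Illustrates "pose from 3D landmarks" in general, not PGN's own estimator.
import numpy as np

def estimate_pose(pred_lmk: np.ndarray, template_lmk: np.ndarray):
    """pred_lmk, template_lmk: (N, 3) corresponding 3D landmarks."""
    mu_p, mu_t = pred_lmk.mean(axis=0), template_lmk.mean(axis=0)
    P, T = pred_lmk - mu_p, template_lmk - mu_t
    # SVD of the cross-covariance gives the optimal rotation (Kabsch).
    U, _, Vt = np.linalg.svd(T.T @ P)
    d = np.sign(np.linalg.det(U @ Vt))      # fix possible reflection
    R = U @ np.diag([1.0, 1.0, d]) @ Vt
    t = mu_t - R @ mu_p
    return R, t                              # template ≈ R @ pred + t

# Usage with synthetic data: rotate a template by a known yaw and recover it.
rng = np.random.default_rng(0)
template = rng.normal(size=(68, 3))
yaw = np.deg2rad(30)
R_true = np.array([[np.cos(yaw), 0, np.sin(yaw)],
                   [0, 1, 0],
                   [-np.sin(yaw), 0, np.cos(yaw)]])
pred = (template - template.mean(0)) @ R_true.T + template.mean(0)
R_est, _ = estimate_pose(pred, template)
print(np.allclose(R_est, R_true.T, atol=1e-6))  # True
```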