3D-Aided Data Augmentation for Robust Face Understanding
- URL: http://arxiv.org/abs/2010.01246v2
- Date: Tue, 6 Oct 2020 02:55:36 GMT
- Title: 3D-Aided Data Augmentation for Robust Face Understanding
- Authors: Yifan Xing, Yuanjun Xiong, Wei Xia
- Abstract summary: We propose a method that produces realistic 3D augmented images from multiple viewpoints with different illumination conditions through 3D face modeling.
Experiments demonstrate that the proposed 3D data augmentation method significantly improves the performance and robustness of various face understanding tasks.
- Score: 40.73929372872909
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Data augmentation has been highly effective in narrowing the data gap and
reducing the cost of human annotation, especially for tasks where ground truth
labels are difficult and expensive to acquire. In face recognition, large pose
and illumination variation of face images has been a key factor in performance
degradation. However, human annotation for the various face understanding tasks,
including face landmark localization, face attribute classification and face
recognition under these challenging scenarios, is highly costly to acquire.
Therefore, it would be desirable to perform data augmentation for these cases.
But simple 2D data augmentation techniques in the image domain are not able to
satisfy the requirements of these challenging cases. As such, 3D face modeling,
in particular single-image 3D face modeling, stands as a feasible solution for
these challenging conditions beyond 2D-based data augmentation. To this end, we
propose a method that produces realistic 3D augmented images from multiple
viewpoints with different illumination conditions through 3D face modeling,
each associated with geometrically accurate face landmarks, attributes and
identity information. Experiments demonstrate that the proposed 3D data
augmentation method significantly improves the performance and robustness of
various face understanding tasks while achieving state-of-the-art results on
multiple benchmarks.
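The core geometric idea behind the abstract, producing new viewpoints whose landmarks remain geometrically accurate, can be illustrated with a minimal sketch. This is not the paper's pipeline (which fits a 3D face model to a single image); it only shows, under simplified assumptions (known 3D landmark coordinates, orthographic projection), how 2D landmark labels for an augmented view follow for free from rotating the 3D points. The function names and toy data are hypothetical.

```python
import numpy as np

def yaw_rotation(theta):
    """Rotation matrix about the vertical (y) axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c,   0.0, s],
                     [0.0, 1.0, 0.0],
                     [-s,  0.0, c]])

def augment_landmarks(pts3d, yaw_deg):
    """Rotate 3D landmarks to a new viewpoint and project to 2D.

    pts3d: (N, 3) array of 3D face landmarks (hypothetical input; in
    practice these would come from a fitted single-image 3D face model).
    Returns (N, 2) image-plane coordinates for the rotated viewpoint,
    i.e. exact landmark labels for the augmented image.
    """
    R = yaw_rotation(np.deg2rad(yaw_deg))
    rotated = pts3d @ R.T          # rotate each landmark
    return rotated[:, :2]          # drop depth: orthographic projection

# Toy example: three unit-axis "landmarks" viewed after a 90-degree yaw.
pts = np.array([[1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0]])
out = augment_landmarks(pts, 90.0)
```

Because the landmark labels are transformed by the same rigid motion that generates the augmented view, they stay exactly aligned with the rendered face, which is what makes such synthetic views usable as supervision for landmark localization.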
Related papers
- TCDiff: Triple Condition Diffusion Model with 3D Constraints for Stylizing Synthetic Faces [1.7535229154829601]
Face recognition experiments using 1k, 2k, and 5k classes of our new dataset for training outperform state-of-the-art synthetic datasets in real face benchmarks.
arXiv Detail & Related papers (2024-09-05T14:59:41Z)
- EFHQ: Multi-purpose ExtremePose-Face-HQ dataset [1.8194090162317431]
This work introduces a novel dataset named Extreme Pose Face High-Quality dataset (EFHQ), which includes a maximum of 450k high-quality images of faces at extreme poses.
To produce such a massive dataset, we utilize a novel and meticulous dataset processing pipeline to curate two publicly available datasets.
Our dataset can complement existing datasets on various facial-related tasks, such as facial synthesis with 2D/3D-aware GAN, diffusion-based text-to-image face generation, and face reenactment.
arXiv Detail & Related papers (2023-12-28T18:40:31Z)
- Controllable 3D Face Generation with Conditional Style Code Diffusion [51.24656496304069]
TEx-Face(TExt & Expression-to-Face) addresses challenges by dividing the task into three components, i.e., 3D GAN Inversion, Conditional Style Code Diffusion, and 3D Face Decoding.
Experiments conducted on FFHQ, CelebA-HQ, and CelebA-Dialog demonstrate the promising performance of our TEx-Face.
arXiv Detail & Related papers (2023-12-21T15:32:49Z)
- A lightweight 3D dense facial landmark estimation model from position map data [0.8508775813669867]
We propose a pipeline to create a dense keypoint training dataset containing 520 key points across the whole face.
We train a lightweight MobileNet-based regressor model with the generated data.
Experimental results show that our trained model outperforms many of the existing methods in spite of its lower model size and minimal computational cost.
arXiv Detail & Related papers (2023-08-29T09:53:10Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Learning to Aggregate and Personalize 3D Face from In-the-Wild Photo Collection [65.92058628082322]
Non-parametric face modeling aims to reconstruct 3D face only from images without shape assumptions.
This paper presents a novel Learning to Aggregate and Personalize framework for unsupervised robust 3D face modeling.
arXiv Detail & Related papers (2021-06-15T03:10:17Z)
- Face Super-Resolution Guided by 3D Facial Priors [92.23902886737832]
We propose a novel face super-resolution method that explicitly incorporates 3D facial priors which grasp the sharp facial structures.
Our work is the first to explore 3D morphable knowledge based on the fusion of parametric descriptions of face attributes.
The proposed 3D priors achieve superior face super-resolution results over the state of the art.
arXiv Detail & Related papers (2020-07-18T15:26:07Z)
- Methodology for Building Synthetic Datasets with Virtual Humans [1.5556923898855324]
Large datasets can be used for improved, targeted training of deep neural networks.
In particular, we make use of a 3D morphable face model for the rendering of multiple 2D images across a dataset of 100 synthetic identities.
arXiv Detail & Related papers (2020-06-21T10:29:36Z)
- Differential 3D Facial Recognition: Adding 3D to Your State-of-the-Art 2D Method [90.26041504667451]
We show that it is possible to adopt active illumination to enhance state-of-the-art 2D face recognition approaches with 3D features.
The proposed ideas can significantly boost face recognition performance and dramatically improve the robustness to spoofing attacks.
arXiv Detail & Related papers (2020-04-03T20:17:14Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.