LPFF: A Portrait Dataset for Face Generators Across Large Poses
- URL: http://arxiv.org/abs/2303.14407v1
- Date: Sat, 25 Mar 2023 09:07:36 GMT
- Title: LPFF: A Portrait Dataset for Face Generators Across Large Poses
- Authors: Yiqian Wu, Jing Zhang, Hongbo Fu, Xiaogang Jin
- Abstract summary: We present LPFF, a large-pose Flickr face dataset comprised of 19,590 high-quality real large-pose portrait images.
We utilize our dataset to train a 2D face generator that can process large-pose face images, as well as a 3D-aware generator that can generate realistic human face geometry.
- Score: 38.03149794607065
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: The creation of realistic 2D facial images and 3D face shapes using
generative networks has been a hot topic in recent years. Existing face
generators exhibit exceptional performance on faces in small to medium poses
(with respect to frontal faces) but struggle to produce realistic results for
large poses. The distorted renderings that 3D-aware generators produce at large
poses further indicate that the generated 3D face shapes are far from the
distribution of real 3D faces. We find that the above issues are caused
by the training dataset's pose imbalance.
In this paper, we present LPFF, a large-pose Flickr face dataset comprised of
19,590 high-quality real large-pose portrait images. We utilize our dataset to
train a 2D face generator that can process large-pose face images, as well as a
3D-aware generator that can generate realistic human face geometry. To better
validate our pose-conditional 3D-aware generators, we develop a new FID measure
to evaluate the 3D-level performance. Through this novel FID measure and other
experiments, we show that LPFF can help 2D face generators extend their latent
space and better manipulate the large-pose data, and help 3D-aware face
generators achieve better view consistency and more realistic 3D reconstruction
results.
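The abstract introduces a new FID measure for evaluating 3D-level performance but does not describe its construction. As background only, the sketch below shows the standard Fréchet Inception Distance that such a measure typically builds on; the feature extractor, variable names, and pose-binned usage are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of the standard Frechet Inception Distance (FID).
# The paper's 3D-aware FID variant is not detailed in the abstract;
# this only illustrates the base metric it presumably extends.
import numpy as np
from scipy.linalg import sqrtm

def fid(real_feats: np.ndarray, fake_feats: np.ndarray) -> float:
    """Compute FID between two feature sets of shape (N, D), e.g.
    Inception features of real photos and generated face renderings."""
    mu_r, mu_f = real_feats.mean(axis=0), fake_feats.mean(axis=0)
    cov_r = np.cov(real_feats, rowvar=False)
    cov_f = np.cov(fake_feats, rowvar=False)

    # Matrix square root of the covariance product; drop the tiny
    # imaginary component introduced by numerical error.
    cov_mean = sqrtm(cov_r @ cov_f)
    if np.iscomplexobj(cov_mean):
        cov_mean = cov_mean.real

    diff = mu_r - mu_f
    return float(diff @ diff + np.trace(cov_r + cov_f - 2.0 * cov_mean))
```

A pose-conditional variant would presumably evaluate such a distance on renderings grouped by camera pose (e.g., yaw bins), so that large-pose fidelity is not averaged away by frontal-dominated samples.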
Related papers
- GETAvatar: Generative Textured Meshes for Animatable Human Avatars [69.56959932421057]
We study the problem of 3D-aware full-body human generation, aiming at creating animatable human avatars with high-quality geometries and textures.
We propose GETAvatar, a Generative model that directly generates Explicit Textured 3D meshes for animatable human Avatars.
arXiv Detail & Related papers (2023-10-04T10:30:24Z)
- AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections [78.81539337399391]
We present an animatable 3D-aware GAN that generates portrait images with controllable facial expression, head pose, and shoulder movements.
It is a generative model trained on unstructured 2D image collections without using 3D or video data.
A dual-camera rendering and adversarial learning scheme is proposed to improve the quality of the generated faces.
arXiv Detail & Related papers (2023-09-05T12:44:57Z)
- Towards Realistic Generative 3D Face Models [41.574628821637944]
This paper proposes a 3D controllable generative face model to produce high-quality albedo and precise 3D shape.
By combining 2D face generative models with semantic face manipulation, this method enables editing of detailed 3D rendered faces.
arXiv Detail & Related papers (2023-04-24T22:47:52Z)
- RAFaRe: Learning Robust and Accurate Non-parametric 3D Face Reconstruction from Pseudo 2D&3D Pairs [13.11105614044699]
We propose a robust and accurate non-parametric method for single-view 3D face reconstruction (SVFR).
A large-scale pseudo 2D&3D dataset is created by first rendering detailed 3D faces and then swapping the faces in in-the-wild images with the rendered faces.
Our model outperforms previous methods on FaceScape-wild/lab and MICC benchmarks.
arXiv Detail & Related papers (2023-02-10T19:40:26Z)
- Generating 2D and 3D Master Faces for Dictionary Attacks with a Network-Assisted Latent Space Evolution [68.8204255655161]
A master face is a face image that passes face-based identity authentication for a high percentage of the population.
We optimize these faces for 2D and 3D face verification models.
In 3D, we generate faces using the 2D StyleGAN2 generator and predict a 3D structure using a deep 3D face reconstruction network.
arXiv Detail & Related papers (2022-11-25T09:15:38Z)
- AniFaceGAN: Animatable 3D-Aware Face Image Generation for Video Avatars [71.00322191446203]
2D generative models often suffer from undesirable artifacts when rendering images from different camera viewpoints.
Recently, 3D-aware GANs have extended 2D GANs to explicitly disentangle camera pose by leveraging 3D scene representations.
We propose an animatable 3D-aware GAN for multiview consistent face animation generation.
arXiv Detail & Related papers (2022-10-12T17:59:56Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image in a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- Reconstructing A Large Scale 3D Face Dataset for Deep 3D Face Identification [9.159921061636695]
We propose a framework of 2D-aided deep 3D face identification.
In particular, we propose to reconstruct millions of 3D face scans from a large scale 2D face database.
Our proposed approach achieves state-of-the-art rank-1 scores on the FRGC v2.0, Bosphorus, and BU-3DFE 3D face databases.
arXiv Detail & Related papers (2020-10-16T13:48:38Z)