Rotate-and-Render: Unsupervised Photorealistic Face Rotation from
Single-View Images
- URL: http://arxiv.org/abs/2003.08124v1
- Date: Wed, 18 Mar 2020 09:54:46 GMT
- Title: Rotate-and-Render: Unsupervised Photorealistic Face Rotation from
Single-View Images
- Authors: Hang Zhou, Jihao Liu, Ziwei Liu, Yu Liu, Xiaogang Wang
- Abstract summary: We propose a novel unsupervised framework that can synthesize photo-realistic rotated faces.
Our key insight is that rotating faces in the 3D space back and forth, and re-rendering them to the 2D plane can serve as a strong self-supervision.
Our approach has superior synthesis quality as well as identity preservation over the state-of-the-art methods.
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Though face rotation has achieved rapid progress in recent years, the lack of
high-quality paired training data remains a great hurdle for existing methods.
The current generative models heavily rely on datasets with multi-view images
of the same person. Thus, their generated results are restricted by the scale
and domain of the data source. To overcome these challenges, we propose a novel
unsupervised framework that can synthesize photo-realistic rotated faces using
only single-view image collections in the wild. Our key insight is that
rotating faces in the 3D space back and forth, and re-rendering them to the 2D
plane can serve as a strong self-supervision. We leverage recent advances in
3D face modeling and high-resolution GANs as our building blocks.
Since the 3D rotation-and-render on faces can be applied to arbitrary angles
without losing details, our approach is extremely suitable for in-the-wild
scenarios (i.e. no paired data are available), where existing methods fall
short. Extensive experiments demonstrate that our approach has superior
synthesis quality as well as identity preservation over the state-of-the-art
methods, across a wide range of poses and domains. Furthermore, we validate
that our rotate-and-render framework naturally can act as an effective data
augmentation engine for boosting modern face recognition systems even on strong
baseline models.
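The geometric core of the abstract's key insight can be illustrated with a toy sketch: fitted 3D face vertices are rotated forth by an angle, rotated back, and re-rendered, and the back-rotated render paired with the original view gives a training signal without multi-view data. The function names, the plain yaw rotation, and the orthographic "render" below are illustrative assumptions, not the paper's implementation, which fits a 3D morphable model and renders through a GAN-based module.

```python
import math

def yaw_rotate(point, theta):
    """Rotate a 3D point (x, y, z) about the vertical y-axis by theta radians."""
    x, y, z = point
    c, s = math.cos(theta), math.sin(theta)
    return (c * x + s * z, y, -s * x + c * z)

def render_ortho(points):
    """Toy orthographic 'render': drop depth, keeping (x, y) per vertex."""
    return [(x, y) for x, y, _ in points]

def rotate_and_render_pair(points, theta):
    """Rotate face vertices forth by theta, then back, and render both views.

    The back-rotated render and the original render form a self-supervised
    pair: rotating forth and back returns the geometry to the input view,
    so the two renders should match, and no multi-view ground truth is
    needed to supervise the renderer.
    """
    forth = [yaw_rotate(p, theta) for p in points]
    back = [yaw_rotate(p, -theta) for p in forth]
    return render_ortho(back), render_ortho(points)
```

In this simplified setting the round trip is exactly invertible; in the full framework the intermediate rotated view exposes previously occluded regions, and a GAN is trained to fill them in photorealistically.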
Related papers
- SPARK: Self-supervised Personalized Real-time Monocular Face Capture [6.093606972415841]
Current state-of-the-art approaches can regress parametric 3D face models in real time across a wide range of identities.
We propose a method for high-precision 3D face capture taking advantage of a collection of unconstrained videos of a subject as prior information.
arXiv Detail & Related papers (2024-09-12T12:30:04Z)
- Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms the existing arts by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- OSTeC: One-Shot Texture Completion [86.23018402732748]
We propose an unsupervised approach for one-shot 3D facial texture completion.
The proposed approach rotates an input image in 3D and fills in the unseen regions by reconstructing the rotated image with a 2D face generator.
We frontalize the target image by projecting the completed texture into the generator.
arXiv Detail & Related papers (2020-12-30T23:53:26Z)
- SMPLpix: Neural Avatars from 3D Human Models [56.85115800735619]
We bridge the gap between classic rendering and the latest generative networks operating in pixel space.
We train a network that directly converts a sparse set of 3D mesh vertices into photorealistic images.
We show the advantage over conventional differentiable renderers both in the level of photorealism and in rendering efficiency.
arXiv Detail & Related papers (2020-08-16T10:22:00Z)
- AvatarMe: Realistically Renderable 3D Facial Reconstruction "in-the-wild" [105.28776215113352]
AvatarMe is the first method that is able to reconstruct photorealistic 3D faces from a single "in-the-wild" image with an increasing level of detail.
It outperforms the existing arts by a significant margin and reconstructs authentic, 4K by 6K-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2020-03-30T22:17:54Z)
This list is automatically generated from the titles and abstracts of the papers in this site.