JIFF: Jointly-aligned Implicit Face Function for High Quality Single
View Clothed Human Reconstruction
- URL: http://arxiv.org/abs/2204.10549v1
- Date: Fri, 22 Apr 2022 07:43:45 GMT
- Title: JIFF: Jointly-aligned Implicit Face Function for High Quality Single
View Clothed Human Reconstruction
- Authors: Yukang Cao, Guanying Chen, Kai Han, Wenqi Yang, Kwan-Yee K. Wong
- Abstract summary: Recent implicit function based methods have shown impressive results, but they fail to recover fine face details in their reconstructions.
This largely degrades user experience in applications like 3D telepresence.
We propose a novel Jointly-aligned Implicit Face Function (JIFF) that combines the merits of the implicit function based approach and model based approach.
- Score: 24.11991929558466
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: This paper addresses the problem of single view 3D human reconstruction.
Recent implicit function based methods have shown impressive results, but they
fail to recover fine face details in their reconstructions. This largely
degrades user experience in applications like 3D telepresence. In this paper,
we focus on improving the quality of the face in the reconstruction and propose a
novel Jointly-aligned Implicit Face Function (JIFF) that combines the merits of
the implicit function based approach and model based approach. We employ a 3D
morphable face model as our shape prior and compute space-aligned 3D features
that capture detailed face geometry information. Such space-aligned 3D features
are combined with pixel-aligned 2D features to jointly predict an implicit face
function for high quality face reconstruction. We further extend our pipeline
and introduce a coarse-to-fine architecture to predict high quality texture for
our detailed face model. Extensive evaluations have been carried out on public
datasets, and our proposed JIFF demonstrates superior performance (both
quantitatively and qualitatively) over existing state-of-the-art methods.
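To make the joint alignment concrete, below is a minimal PyTorch sketch of the core idea: pixel-aligned 2D features are sampled from an image feature map at the projection of each query point, space-aligned 3D features are sampled from a feature volume built around the fitted 3D morphable face model, and an MLP maps their concatenation to an occupancy value. All function names, tensor shapes, and layer sizes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def index_2d(feat, uv):
    # feat: (B, C, H, W) feature map; uv: (B, N, 2) coords in [-1, 1].
    # Returns pixel-aligned features of shape (B, N, C).
    samples = F.grid_sample(feat, uv.unsqueeze(2), align_corners=True)  # (B, C, N, 1)
    return samples.squeeze(-1).transpose(1, 2)

def index_3d(feat, xyz):
    # feat: (B, C, D, H, W) feature volume; xyz: (B, N, 3) coords in [-1, 1].
    # Returns space-aligned features of shape (B, N, C).
    grid = xyz.unsqueeze(2).unsqueeze(2)                      # (B, N, 1, 1, 3)
    samples = F.grid_sample(feat, grid, align_corners=True)   # (B, C, N, 1, 1)
    return samples.squeeze(-1).squeeze(-1).transpose(1, 2)

def orthographic_xy(points, calib):
    # points: (B, N, 3) world-space queries; calib: (B, 4, 4) camera matrix.
    # Returns (B, N, 2) image-plane coords, assumed normalized to [-1, 1].
    homo = torch.cat([points, torch.ones_like(points[..., :1])], dim=-1)
    return torch.einsum('bij,bnj->bni', calib, homo)[..., :2]

class JointImplicitFaceFunction(nn.Module):
    """Occupancy MLP over concatenated pixel-aligned 2D and space-aligned 3D features."""

    def __init__(self, c2d=256, c3d=64, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(c2d + c3d + 1, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 1), nn.Sigmoid(),  # inside/outside probability
        )

    def forward(self, points, calib, img_feat, vox_feat):
        xy = orthographic_xy(points, calib)   # project queries into the image
        f2d = index_2d(img_feat, xy)          # pixel-aligned 2D features
        f3d = index_3d(vox_feat, points)      # 3DMM-derived space-aligned 3D features
        z = points[..., 2:3]                  # depth kept as an extra cue
        return self.mlp(torch.cat([f2d, f3d, z], dim=-1))  # (B, N, 1) occupancy

if __name__ == "__main__":
    B, N = 2, 1024
    occ = JointImplicitFaceFunction()(
        torch.rand(B, N, 3) * 2 - 1,      # query points in the normalized volume
        torch.eye(4).expand(B, -1, -1),   # identity "camera" for the sketch
        torch.randn(B, 256, 128, 128),    # 2D feature map from an image encoder
        torch.randn(B, 64, 32, 32, 32),   # 3D feature volume around the 3DMM mesh
    )
    print(occ.shape)  # torch.Size([2, 1024, 1])
```

The paper's coarse-to-fine texture branch could be sketched analogously, with a coarse color prediction conditioning a second, higher-resolution pass.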
Related papers
- SIFU: Side-view Conditioned Implicit Function for Real-world Usable Clothed Human Reconstruction [33.03705631101124]
We introduce SIFU, a novel approach combining a Side-view Decoupling Transformer with a 3D Consistent Texture Refinement pipeline.
It uses SMPL-X normals as queries to effectively decouple side-view features when mapping 2D features to 3D (see the cross-attention sketch after this list).
Our approach extends to practical applications such as 3D printing and scene building, demonstrating its broad utility in real-world scenarios.
arXiv Detail & Related papers (2023-12-10T11:45:45Z)
- HiFace: High-Fidelity 3D Face Reconstruction by Learning Static and Dynamic Details [66.74088288846491]
HiFace aims at high-fidelity 3D face reconstruction with dynamic and static details.
We exploit several loss functions to jointly learn the coarse shape and fine details with both synthetic and real-world datasets.
arXiv Detail & Related papers (2023-03-20T16:07:02Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- ReFu: Refine and Fuse the Unobserved View for Detail-Preserving Single-Image 3D Human Reconstruction [31.782985891629448]
Single-image 3D human reconstruction aims to reconstruct the 3D textured surface of the human body given a single image.
We propose ReFu, a coarse-to-fine approach that refines the projected backside view image and fuses the refined image to predict the final human body.
arXiv Detail & Related papers (2022-11-09T09:14:11Z)
- Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z)
- AvatarMe++: Facial Shape and BRDF Inference with Photorealistic Rendering-Aware GANs [119.23922747230193]
We introduce the first method that is able to reconstruct render-ready 3D facial geometry and BRDF from a single "in-the-wild" image.
Our method outperforms existing methods by a significant margin and reconstructs high-resolution 3D faces from a single low-resolution image.
arXiv Detail & Related papers (2021-12-11T11:36:30Z)
- Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z)
- Fast-GANFIT: Generative Adversarial Network for High Fidelity 3D Face Reconstruction [76.1612334630256]
We harness the power of Generative Adversarial Networks (GANs) and Deep Convolutional Neural Networks (DCNNs) to reconstruct the facial texture and shape from single images.
We demonstrate excellent results in photorealistic and identity-preserving 3D face reconstruction and achieve, for the first time, facial texture reconstruction with high-frequency details.
arXiv Detail & Related papers (2021-05-16T16:35:44Z)
- Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
GAR learns to model complicated real-world images; instead of relying on graphics rules, it is capable of producing realistic images.
Our method achieves state-of-the-art performance on multiple face reconstruction benchmarks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z)
- Pix2Surf: Learning Parametric 3D Surface Models of Objects from Images [64.53227129573293]
We investigate the problem of learning to generate 3D parametric surface representations for novel object instances, as seen from one or more views.
We design neural networks capable of generating high-quality parametric 3D surfaces which are consistent between views.
Our method is supervised and trained on a public dataset of shapes from common object categories.
arXiv Detail & Related papers (2020-08-18T06:33:40Z)
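As referenced in the SIFU entry above, here is a minimal PyTorch sketch of the query mechanism that entry describes: tokens derived from SMPL-X normal maps attend over flattened 2D image features to produce side-view features. The token shapes, dimensions, and the residual/normalization details are illustrative assumptions, not the authors' architecture.

```python
import torch
import torch.nn as nn

class SideViewDecoupler(nn.Module):
    """Cross-attention in the spirit of SIFU's Side-view Decoupling Transformer:
    SMPL-X normal-map tokens act as queries over 2D image tokens."""

    def __init__(self, dim=256, heads=8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, normal_tokens, image_tokens):
        # normal_tokens: (B, Nq, dim) encoded SMPL-X normals for a side view (queries)
        # image_tokens:  (B, Nk, dim) flattened front-view image features (keys/values)
        side, _ = self.attn(normal_tokens, image_tokens, image_tokens)
        return self.norm(side + normal_tokens)  # (B, Nq, dim) decoupled side-view features

if __name__ == "__main__":
    dec = SideViewDecoupler()
    out = dec(torch.randn(2, 64, 256), torch.randn(2, 1024, 256))
    print(out.shape)  # torch.Size([2, 64, 256])
```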