Weakly-Supervised Multi-Face 3D Reconstruction
- URL: http://arxiv.org/abs/2101.02000v1
- Date: Wed, 6 Jan 2021 13:15:21 GMT
- Title: Weakly-Supervised Multi-Face 3D Reconstruction
- Authors: Jialiang Zhang, Lixiang Lin, Jianke Zhu, Steven C.H. Hoi
- Abstract summary: We propose an effective end-to-end framework for multi-face 3D reconstruction.
We employ the same global camera model for the reconstructed faces in each image, which makes it possible to recover the relative head positions and orientations in the 3D scene.
- Score: 45.864415499303405
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: 3D face reconstruction plays a very important role in many real-world
multimedia applications, including digital entertainment, social media,
affection analysis, and person identification. The de-facto pipeline for
estimating the parametric face model from an image first detects the
facial regions with landmarks, and then crops each face to feed the deep
learning-based regressor. Compared to conventional methods that perform
forward inference for each detected instance independently, we propose an
effective end-to-end framework for multi-face 3D reconstruction, which is able
to predict the model parameters of multiple instances simultaneously in a
single network inference. Our proposed approach not only greatly reduces the
computational redundancy in feature extraction but also makes the deployment
procedure much easier using the single network model. More importantly, we
employ the same global camera model for the reconstructed faces in each image,
which makes it possible to recover the relative head positions and orientations
in the 3D scene. We have conducted extensive experiments to evaluate our
proposed approach on the sparse and dense face alignment tasks. The
experimental results indicate that our proposed approach is very promising on
face alignment tasks without full supervision or pre-processing steps such
as detection and cropping. Our implementation is publicly available at
\url{https://github.com/kalyo-zjl/WM3DR}.
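The two core ideas of the abstract can be sketched in a few lines: per-face parameters are read off a dense map produced by a single backbone pass (so feature extraction is shared across all faces), and all faces are projected with one shared camera model (so relative head placement stays consistent). This is a minimal illustrative sketch; the function names, shapes, and the weak-perspective camera are assumptions for illustration, not the authors' actual code.

```python
import numpy as np

def regress_multiface_params(param_map, centers):
    """Gather one parameter vector per face from a dense parameter map.

    param_map: (H, W, D) array produced by a SINGLE network inference.
    centers:   list of (row, col) face-center locations.
    """
    return np.stack([param_map[r, c] for r, c in centers])

def project_shared_camera(points_3d, scale, translation):
    """Hypothetical weak-perspective projection with ONE global camera
    shared by all faces, keeping their relative positions and
    orientations consistent within the 3D scene."""
    return scale * points_3d[..., :2] + translation

# Toy usage: 2 detected faces, 8-dim parameter vectors from a 16x16 map.
rng = np.random.default_rng(0)
param_map = rng.standard_normal((16, 16, 8))
faces = regress_multiface_params(param_map, [(3, 4), (10, 12)])

landmarks_3d = rng.standard_normal((2, 68, 3))  # per-face 3D landmarks
landmarks_2d = project_shared_camera(
    landmarks_3d, scale=1.5, translation=np.array([0.1, -0.2]))
print(faces.shape)         # one parameter vector per face: (2, 8)
print(landmarks_2d.shape)  # all faces in one image plane: (2, 68, 2)
```

Because the gather step is a cheap indexing operation, adding more faces to an image adds almost no cost beyond the single shared backbone pass, which is the source of the computational savings the abstract claims.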
Related papers
- Learning Topology Uniformed Face Mesh by Volume Rendering for Multi-view Reconstruction [40.45683488053611]
Face meshes in consistent topology serve as the foundation for many face-related applications.
We propose a mesh volume rendering method that enables directly optimizing mesh geometry while preserving topology.
The key innovation lies in spreading sparse mesh features into the surrounding space to simulate the radiance field required for volume rendering.
arXiv Detail & Related papers (2024-04-08T15:25:50Z) - Implicit Shape and Appearance Priors for Few-Shot Full Head Reconstruction [17.254539604491303]
In this paper, we address the problem of few-shot full 3D head reconstruction.
We accomplish this by incorporating a probabilistic shape and appearance prior into coordinate-based representations.
We extend the H3DS dataset, which now comprises 60 high-resolution 3D full head scans and their corresponding posed images and masks.
arXiv Detail & Related papers (2023-10-12T07:35:30Z) - Generalizable One-shot Neural Head Avatar [90.50492165284724]
We present a method that reconstructs and animates a 3D head avatar from a single-view portrait image.
We propose a framework that not only generalizes to unseen identities based on a single-view image, but also captures characteristic details within and beyond the face area.
arXiv Detail & Related papers (2023-06-14T22:33:09Z) - A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view fashion by considering detail consistency of different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z) - Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion [54.151979979158085]
We introduce a principled end-to-end reconstruction framework for natural images, where accurate ground-truth poses are not available.
We leverage an unconditional 3D-aware generator, to which we apply a hybrid inversion scheme where a model produces a first guess of the solution.
Our framework can de-render an image in as few as 10 steps, enabling its use in practical scenarios.
arXiv Detail & Related papers (2022-11-21T17:42:42Z) - Facial Geometric Detail Recovery via Implicit Representation [147.07961322377685]
We present a robust texture-guided geometric detail recovery approach using only a single in-the-wild facial image.
Our method combines high-quality texture completion with the powerful expressiveness of implicit surfaces.
Our method not only recovers accurate facial details but also decomposes normals, albedos, and shading parts in a self-supervised way.
arXiv Detail & Related papers (2022-03-18T01:42:59Z) - Implicit Neural Deformation for Multi-View Face Reconstruction [43.88676778013593]
We present a new method for 3D face reconstruction from multi-view RGB images.
Unlike previous methods which are built upon 3D morphable models, our method leverages an implicit representation to encode rich geometric features.
Our experimental results on several benchmark datasets demonstrate that our approach outperforms alternative baselines and achieves superior face reconstruction results compared to state-of-the-art methods.
arXiv Detail & Related papers (2021-12-05T07:02:53Z) - H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle the limitations of such approaches for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z) - Inverting Generative Adversarial Renderer for Face Reconstruction [58.45125455811038]
In this work, we introduce a novel Generative Adversarial Renderer (GAR).
Instead of relying on graphics rules, GAR learns to model complicated real-world images and is capable of producing realistic results.
Our method achieves state-of-the-art performance on multiple face reconstruction tasks.
arXiv Detail & Related papers (2021-05-06T04:16:06Z) - PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of this content (including all information) and is not responsible for any consequences of its use.