HeadRecon: High-Fidelity 3D Head Reconstruction from Monocular Video
- URL: http://arxiv.org/abs/2312.08863v1
- Date: Thu, 14 Dec 2023 12:38:56 GMT
- Title: HeadRecon: High-Fidelity 3D Head Reconstruction from Monocular Video
- Authors: Xueying Wang and Juyong Zhang
- Abstract summary: We study the reconstruction of high-fidelity 3D head models from arbitrary monocular videos.
We propose a prior-guided dynamic implicit neural network to tackle these problems.
- Score: 37.53752896927615
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recently, the reconstruction of high-fidelity 3D head models from static
portrait images has made great progress. However, most methods require
multi-view or multi-illumination information, which places high demands on data
acquisition. In this paper, we study the reconstruction of high-fidelity 3D
head models from arbitrary monocular videos. Non-rigid structure from motion
(NRSFM) methods have been widely used to solve such problems based on
two-dimensional correspondences between different frames. However, inaccurate
correspondences caused by highly complex hair structures and varying facial
expressions heavily degrade reconstruction accuracy. To tackle these problems,
we propose a prior-guided dynamic implicit neural network. Specifically, we
design a two-part dynamic deformation field that transforms the current frame
space to the canonical one. We further model the head geometry in the canonical
space with a learnable signed distance field (SDF) and optimize it via
volumetric rendering under the guidance of two main head priors to improve
reconstruction accuracy and robustness. Extensive ablation studies and
comparisons with state-of-the-art methods demonstrate the effectiveness and
robustness of our proposed method.
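The abstract's pipeline (deform each frame into a canonical space, evaluate a canonical SDF there, and supervise it through volume rendering) can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the SDF is a fixed unit sphere rather than a learned network, the "deformation field" is a toy per-frame translation, and the SDF-to-density conversion follows the common VolSDF-style Laplace mapping; all function names are illustrative.

```python
import numpy as np

# Toy canonical geometry: signed distance to a unit sphere (stand-in
# for the learnable SDF network described in the abstract).
def canonical_sdf(x):
    return np.linalg.norm(x, axis=-1) - 1.0

# Toy deformation field: a rigid per-frame translation (stand-in for
# the paper's two-part dynamic deformation field).
def deform_to_canonical(x, frame_translation):
    return x - frame_translation

# VolSDF-style density from signed distance: sigma = (1/beta) * Laplace
# CDF of -sdf with scale beta; beta controls surface sharpness.
def sdf_to_density(sdf, beta=0.05):
    return np.where(sdf <= 0,
                    (1.0 / beta) * (1.0 - 0.5 * np.exp(sdf / beta)),
                    (1.0 / (2.0 * beta)) * np.exp(-sdf / beta))

# Composite densities along one ray (standard volume-rendering
# quadrature) and return the accumulated opacity in [0, 1].
def render_opacity(origin, direction, frame_translation,
                   n_samples=128, far=4.0):
    t = np.linspace(0.0, far, n_samples)
    pts = origin[None, :] + t[:, None] * direction[None, :]
    canon = deform_to_canonical(pts, frame_translation)
    sigma = sdf_to_density(canonical_sdf(canon))
    delta = far / n_samples
    alpha = 1.0 - np.exp(-sigma * delta)
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))
    return np.sum(transmittance * alpha)

# A ray shot through the (translated) head should be nearly opaque;
# in training, such rendered values are compared against the video frame.
opacity = render_opacity(np.array([0.0, 0.0, -3.0]),
                         np.array([0.0, 0.0, 1.0]),
                         frame_translation=np.array([0.1, 0.0, 0.0]))
```

In the actual method both the deformation field and the SDF are optimized networks, and the rendering loss (with the two head priors as guidance) is backpropagated through this compositing step.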
Related papers
- Decaf: Monocular Deformation Capture for Face and Hand Interactions [77.75726740605748]
This paper introduces the first method that allows tracking human hands interacting with human faces in 3D from single monocular RGB videos.
We model hands as articulated objects inducing non-rigid face deformations during an active interaction.
Our method relies on a new hand-face motion and interaction capture dataset with realistic face deformations acquired with a markerless multi-view camera system.
arXiv Detail & Related papers (2023-09-28T17:59:51Z)
- FrozenRecon: Pose-free 3D Scene Reconstruction with Frozen Depth Models [67.96827539201071]
We propose a novel test-time optimization approach for 3D scene reconstruction.
Our method achieves state-of-the-art cross-dataset reconstruction on five zero-shot testing datasets.
arXiv Detail & Related papers (2023-08-10T17:55:02Z)
- One-Shot High-Fidelity Talking-Head Synthesis with Deformable Neural Radiance Field [81.07651217942679]
Talking head generation aims to generate faces that maintain the identity information of the source image and imitate the motion of the driving image.
We propose HiDe-NeRF, which achieves high-fidelity and free-view talking-head synthesis.
arXiv Detail & Related papers (2023-04-11T09:47:35Z)
- A Hierarchical Representation Network for Accurate and Detailed Face Reconstruction from In-The-Wild Images [15.40230841242637]
We present a novel hierarchical representation network (HRN) to achieve accurate and detailed face reconstruction from a single image.
Our framework can be extended to a multi-view setting by enforcing detail consistency across different views.
Our method outperforms the existing methods in both reconstruction accuracy and visual effects.
arXiv Detail & Related papers (2023-02-28T09:24:36Z)
- End-to-End Multi-View Structure-from-Motion with Hypercorrelation Volumes [7.99536002595393]
Deep learning techniques have been proposed to tackle the structure-from-motion problem.
We improve on the state-of-the-art two-view structure-from-motion(SfM) approach.
We extend it to the general multi-view case and evaluate it on the complex benchmark dataset DTU.
arXiv Detail & Related papers (2022-09-14T20:58:44Z)
- Unbiased 4D: Monocular 4D Reconstruction with a Neural Deformation Model [76.64071133839862]
Capturing general deforming scenes from monocular RGB video is crucial for many computer graphics and vision applications.
Our method, Ub4D, handles large deformations, performs shape completion in occluded regions, and can operate on monocular RGB videos directly by using differentiable volume rendering.
Results on our new dataset, which will be made publicly available, demonstrate a clear improvement over the state of the art in terms of surface reconstruction accuracy and robustness to large deformations.
arXiv Detail & Related papers (2022-06-16T17:59:54Z)
- H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle these limitations for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z)
- Prior-Guided Multi-View 3D Head Reconstruction [28.126115947538572]
Previous multi-view stereo methods suffer from low-frequency artifacts such as unclear head structures and inaccurate reconstruction in hair regions.
To tackle this problem, we propose a prior-guided implicit neural rendering network.
The utilization of these priors can improve the reconstruction accuracy and robustness, leading to a high-quality integrated 3D head model.
arXiv Detail & Related papers (2021-07-09T07:43:56Z)
- PaMIR: Parametric Model-Conditioned Implicit Representation for Image-based Human Reconstruction [67.08350202974434]
We propose Parametric Model-Conditioned Implicit Representation (PaMIR), which combines the parametric body model with the free-form deep implicit function.
We show that our method achieves state-of-the-art performance for image-based 3D human reconstruction in the cases of challenging poses and clothing types.
arXiv Detail & Related papers (2020-07-08T02:26:19Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the quality of the listed information and is not responsible for any consequences of its use.