Implicit Shape and Appearance Priors for Few-Shot Full Head
Reconstruction
- URL: http://arxiv.org/abs/2310.08784v1
- Date: Thu, 12 Oct 2023 07:35:30 GMT
- Title: Implicit Shape and Appearance Priors for Few-Shot Full Head
Reconstruction
- Authors: Pol Caselles, Eduard Ramon, Jaime Garcia, Gil Triginer, Francesc
Moreno-Noguer
- Abstract summary: In this paper, we address the problem of few-shot full 3D head reconstruction.
We accomplish this by incorporating a probabilistic shape and appearance prior into coordinate-based representations.
We extend the H3DS dataset, which now comprises 60 high-resolution 3D full head scans and their corresponding posed images and masks.
- Score: 17.254539604491303
- License: http://arxiv.org/licenses/nonexclusive-distrib/1.0/
- Abstract: Recent advancements in learning techniques that employ coordinate-based
neural representations have yielded remarkable results in multi-view 3D
reconstruction tasks. However, these approaches often require a substantial
number of input views (typically several tens) and computationally intensive
optimization procedures to achieve their effectiveness. In this paper, we
address these limitations specifically for the problem of few-shot full 3D head
reconstruction. We accomplish this by incorporating a probabilistic shape and
appearance prior into coordinate-based representations, enabling faster
convergence and improved generalization when working with only a few input
images (even as few as a single image). During testing, we leverage this prior
to guide the fitting process of a signed distance function using a
differentiable renderer. By incorporating the statistical prior alongside
parallelizable ray tracing and dynamic caching strategies, we achieve an
efficient and accurate approach to few-shot full 3D head reconstruction.
Moreover, we extend the H3DS dataset, which now comprises 60 high-resolution 3D
full head scans with their corresponding posed images and masks, and use it for
evaluation. Leveraging this dataset, we demonstrate the
remarkable capabilities of our approach in achieving state-of-the-art results
in geometry reconstruction while being an order of magnitude faster than
previous approaches.
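The fitting strategy described in the abstract, optimizing a signed distance function (SDF) under a statistical shape prior so that only a few observations are needed, can be illustrated with a minimal sketch. This is not the authors' implementation: the one-parameter spherical "shape space", the Gaussian prior weight, and all names below are illustrative assumptions.

```python
import numpy as np

def sdf(points, z, prior_radius=1.0):
    """Signed distance of a sphere whose radius is the prior mean
    (prior_radius) offset by a latent shape code z."""
    return np.linalg.norm(points, axis=1) - (prior_radius + z)

def fit_sdf(observed, steps=200, lr=0.1, prior_weight=0.01):
    """Gradient descent on z: observed surface points should satisfy
    SDF = 0, while the Gaussian prior term pulls z toward the mean
    shape (z = 0), regularizing the few-shot fit."""
    z = 0.0
    for _ in range(steps):
        d = sdf(observed, z)
        # d(loss)/dz for loss = mean(d^2) + prior_weight * z^2
        grad = np.mean(2.0 * d * -1.0) + 2.0 * prior_weight * z
        z -= lr * grad
    return z

# "Few-shot" observations: 5 points sampled on a head of radius 1.2
rng = np.random.default_rng(0)
dirs = rng.normal(size=(5, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
z_fit = fit_sdf(1.2 * dirs)
recovered = 1.0 + z_fit  # recovered radius, close to 1.2
```

In the actual method the latent code parameterizes a neural SDF and the data term comes from a differentiable renderer over posed images rather than known surface points, but the structure of the objective (rendering loss plus prior regularization) is the same.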
Related papers
- MonST3R: A Simple Approach for Estimating Geometry in the Presence of Motion [118.74385965694694]
We present Motion DUSt3R (MonST3R), a novel geometry-first approach that directly estimates per-timestep geometry from dynamic scenes.
By simply estimating a pointmap for each timestep, we can effectively adapt DUSt3R's representation, previously only used for static scenes, to dynamic scenes.
We show that by posing the problem as a fine-tuning task, identifying several suitable datasets, and strategically training the model on this limited data, we can surprisingly enable the model to handle dynamics.
arXiv Detail & Related papers (2024-10-04T18:00:07Z) - Hyper-VolTran: Fast and Generalizable One-Shot Image to 3D Object
Structure via HyperNetworks [53.67497327319569]
We introduce a novel neural rendering technique to solve image-to-3D from a single view.
Our approach employs the signed distance function as the surface representation and incorporates generalizable priors through geometry-encoding volumes and HyperNetworks.
Our experiments show the advantages of our proposed approach with consistent results and rapid generation.
arXiv Detail & Related papers (2023-12-24T08:42:37Z) - Wonder3D: Single Image to 3D using Cross-Domain Diffusion [105.16622018766236]
Wonder3D is a novel method for efficiently generating high-fidelity textured meshes from single-view images.
To holistically improve the quality, consistency, and efficiency of image-to-3D tasks, we propose a cross-domain diffusion model.
arXiv Detail & Related papers (2023-10-23T15:02:23Z) - InstantAvatar: Efficient 3D Head Reconstruction via Surface Rendering [13.85652935706768]
We introduce InstantAvatar, a method that recovers full-head avatars from few images (down to just one) in a few seconds on commodity hardware.
We present a novel statistical model that learns a prior distribution over 3D head signed distance functions using a voxel-grid based architecture.
arXiv Detail & Related papers (2023-08-09T11:02:00Z) - End-to-End Multi-View Structure-from-Motion with Hypercorrelation
Volumes [7.99536002595393]
Deep learning techniques have been proposed to tackle the multi-view structure-from-motion problem.
We improve on the state-of-the-art two-view structure-from-motion (SfM) approach.
We extend it to the general multi-view case and evaluate it on the complex benchmark dataset DTU.
arXiv Detail & Related papers (2022-09-14T20:58:44Z) - Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement compared to state-of-the-art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement to the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z) - H3D-Net: Few-Shot High-Fidelity 3D Head Reconstruction [27.66008315400462]
Recent learning approaches that implicitly represent surface geometry have shown impressive results in the problem of multi-view 3D reconstruction.
We tackle these limitations for the specific problem of few-shot full 3D head reconstruction.
We learn a shape model of 3D heads from thousands of incomplete raw scans using implicit representations.
arXiv Detail & Related papers (2021-07-26T23:04:18Z) - Neural Geometric Level of Detail: Real-time Rendering with Implicit 3D
Shapes [77.6741486264257]
We introduce an efficient neural representation that, for the first time, enables real-time rendering of high-fidelity neural SDFs.
We show that our representation is 2-3 orders of magnitude more efficient in terms of rendering speed compared to previous works.
arXiv Detail & Related papers (2021-01-26T18:50:22Z) - Next-best-view Regression using a 3D Convolutional Neural Network [0.9449650062296823]
We propose a data-driven approach to address the next-best-view problem.
The proposed approach trains a 3D convolutional neural network with previous reconstructions in order to regress the position of the next-best-view.
We have validated the proposed approach making use of two groups of experiments.
arXiv Detail & Related papers (2021-01-23T01:50:26Z) - Weakly-Supervised Multi-Face 3D Reconstruction [45.864415499303405]
We propose an effective end-to-end framework for multi-face 3D reconstruction.
We employ the same global camera model for the reconstructed faces in each image, which makes it possible to recover the relative head positions and orientations in the 3D scene.
arXiv Detail & Related papers (2021-01-06T13:15:21Z) - Towards Reading Beyond Faces for Sparsity-Aware 4D Affect Recognition [55.15661254072032]
We present a sparsity-aware deep network for automatic 4D facial expression recognition (FER).
We first propose a novel augmentation method to combat the data limitation problem for deep learning.
We then present a sparsity-aware deep network to compute the sparse representations of convolutional features over multi-views.
arXiv Detail & Related papers (2020-02-08T13:09:11Z)
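Several of the SDF-based methods above, as well as the main paper's "parallelizable ray tracing", render implicit surfaces via sphere tracing. A minimal sketch of that primitive, under simplified assumptions (a single analytic SDF and no acceleration structure, unlike the sparse representations used in the real-time work above):

```python
import numpy as np

def sphere_trace(sdf, origin, direction, max_steps=64, eps=1e-4):
    """March along the ray, advancing by the SDF value each step:
    the SDF gives a safe step size, since no surface can be closer."""
    t = 0.0
    for _ in range(max_steps):
        d = sdf(origin + t * direction)
        if d < eps:
            return t  # converged onto the surface
        t += d
    return None  # no intersection within max_steps

# Unit sphere at the origin as the test SDF
unit_sphere = lambda p: np.linalg.norm(p) - 1.0

# A ray from z = -3 toward the origin should hit the sphere near t = 2
t_hit = sphere_trace(unit_sphere,
                     np.array([0.0, 0.0, -3.0]),
                     np.array([0.0, 0.0, 1.0]))
```

Because each ray marches independently, the loop parallelizes trivially across pixels, which is what makes batched GPU evaluation and caching strategies effective for this family of methods.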
This list is automatically generated from the titles and abstracts of the papers on this site.
This site does not guarantee the quality of the information presented and is not responsible for any consequences of its use.