Shape My Face: Registering 3D Face Scans by Surface-to-Surface
Translation
- URL: http://arxiv.org/abs/2012.09235v2
- Date: Wed, 10 Mar 2021 15:25:41 GMT
- Title: Shape My Face: Registering 3D Face Scans by Surface-to-Surface
Translation
- Authors: Mehdi Bahri, Eimear O'Sullivan, Shunwang Gong, Feng Liu, Xiaoming
Liu, Michael M. Bronstein, Stefanos Zafeiriou
- Abstract summary: Shape-My-Face (SMF) is a powerful encoder-decoder architecture based on an improved point cloud encoder, a novel visual attention mechanism, graph convolutional decoders with skip connections, and a specialized mouth model.
Our model provides topologically-sound meshes with minimal supervision, offers faster training time, has orders of magnitude fewer trainable parameters, is more robust to noise, and can generalize to previously unseen datasets.
- Score: 75.59415852802958
- License: http://creativecommons.org/licenses/by-nc-sa/4.0/
- Abstract: Standard registration algorithms must be applied independently to
each surface to be registered, after careful pre-processing and hand-tuning.
Recently, learning-based approaches have emerged that reduce the registration
of new scans to running inference with a previously-trained model. In this
paper, we cast the registration task as a surface-to-surface translation
problem, and design a model to reliably capture the latent geometric
information directly from raw 3D face scans. We introduce Shape-My-Face (SMF),
a powerful encoder-decoder architecture based on an improved point cloud
encoder, a novel visual attention mechanism, graph convolutional decoders with
skip connections, and a specialized mouth model that we smoothly integrate with
the mesh convolutions. Compared to the previous state-of-the-art learning
algorithms for non-rigid registration of face scans, SMF only requires the raw
data to be rigidly aligned (with scaling) with a pre-defined face template.
Additionally, our model provides topologically-sound meshes with minimal
supervision, offers faster training time, has orders of magnitude fewer
trainable parameters, is more robust to noise, and can generalize to previously
unseen datasets. We extensively evaluate the quality of our registrations on
diverse data. We demonstrate the robustness and generalizability of our model
with in-the-wild face scans across different modalities, sensor types, and
resolutions. Finally, we show that, by learning to register scans, SMF produces
a hybrid linear and non-linear morphable model. Manipulation of the latent
space of SMF allows for shape generation, and morphing applications such as
expression transfer in-the-wild. We train SMF on a dataset of human faces
comprising 9 large-scale databases on commodity hardware.
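The abstract describes an encoder-decoder that translates a raw scan into a registered mesh with the topology of a fixed face template. A minimal sketch of that idea, not the authors' code: a PointNet-style point-cloud encoder paired with a decoder that predicts per-vertex offsets from a template. SMF uses a visual attention mechanism and graph-convolutional decoders with skip connections; here a plain MLP stands in for the decoder, and all layer sizes and names are illustrative assumptions.

```python
# Hypothetical sketch of surface-to-surface registration as in SMF:
# encode a raw scan, decode offsets for a fixed-topology template mesh.
import torch
import torch.nn as nn

class PointCloudEncoder(nn.Module):
    """PointNet-style encoder: shared per-point MLP followed by max pooling,
    so the scan can have any number of points."""
    def __init__(self, latent_dim=128):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(3, 64), nn.ReLU(),
            nn.Linear(64, 128), nn.ReLU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, points):          # points: (B, N, 3)
        feats = self.mlp(points)        # (B, N, latent_dim)
        return feats.max(dim=1).values  # (B, latent_dim) global shape code

class TemplateDecoder(nn.Module):
    """Decodes the latent code into per-vertex offsets from a fixed template,
    so every output mesh shares the template's topology. (SMF instead uses
    graph-convolutional decoders with skip connections.)"""
    def __init__(self, template_vertices, latent_dim=128):
        super().__init__()
        self.register_buffer("template", template_vertices)  # (V, 3)
        num_vertices = template_vertices.shape[0]
        self.mlp = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, num_vertices * 3),
        )

    def forward(self, z):               # z: (B, latent_dim)
        offsets = self.mlp(z).view(z.shape[0], -1, 3)
        return self.template + offsets  # registered mesh, fixed topology

# Toy usage: two raw scans of arbitrary resolution map to the same topology.
template = torch.zeros(100, 3)          # stand-in for a face template mesh
encoder, decoder = PointCloudEncoder(), TemplateDecoder(template)
scan = torch.randn(2, 5000, 3)          # batch of 2 scans, 5000 points each
registered = decoder(encoder(scan))     # (2, 100, 3)
```

Because the encoder pools over points, the same trained model can register scans of different resolutions and sensor types, which is the property the abstract highlights.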
Related papers
- Flatten Anything: Unsupervised Neural Surface Parameterization [76.4422287292541]
We introduce the Flatten Anything Model (FAM), an unsupervised neural architecture to achieve global free-boundary surface parameterization.
Compared with previous methods, our FAM directly operates on discrete surface points without utilizing connectivity information.
Our FAM is fully-automated without the need for pre-cutting and can deal with highly-complex topologies.
arXiv Detail & Related papers (2024-05-23T14:39:52Z)
- SPHEAR: Spherical Head Registration for Complete Statistical 3D Modeling [39.08979926878052]
SPHEAR is an accurate, differentiable parametric statistical 3D human head model.
It can be used for automatic realistic visual data generation, semantic annotation, and general reconstruction tasks.
arXiv Detail & Related papers (2023-11-04T17:38:20Z)
- Instant Multi-View Head Capture through Learnable Registration [62.70443641907766]
Existing methods for capturing datasets of 3D heads in dense semantic correspondence are slow.
We introduce TEMPEH to directly infer 3D heads in dense correspondence from calibrated multi-view images.
Predicting one head takes about 0.3 seconds with a median reconstruction error of 0.26 mm, 64% lower than the current state-of-the-art.
arXiv Detail & Related papers (2023-06-12T21:45:18Z)
- Deformable Model-Driven Neural Rendering for High-Fidelity 3D Reconstruction of Human Heads Under Low-View Settings [20.07788905506271]
Reconstructing 3D human heads in low-view settings presents technical challenges.
We propose geometry decomposition and adopt a two-stage, coarse-to-fine training strategy.
Our method outperforms existing neural rendering approaches in terms of reconstruction accuracy and novel view synthesis under low-view settings.
arXiv Detail & Related papers (2023-03-24T08:32:00Z)
- Learning Neural Parametric Head Models [7.679586286000453]
We propose a novel 3D morphable model for complete human heads based on hybrid neural fields.
We capture a person's identity in a canonical space as a signed distance field (SDF), and model facial expressions with a neural deformation field.
Our representation achieves high-fidelity local detail by introducing an ensemble of local fields centered around facial anchor points.
arXiv Detail & Related papers (2022-12-06T05:24:42Z)
- Learned Vertex Descent: A New Direction for 3D Human Model Fitting [64.04726230507258]
We propose a novel optimization-based paradigm for 3D human model fitting on images and scans.
Our approach is able to capture the underlying body of clothed people with very different body shapes, achieving a significant improvement compared to state-of-the-art.
LVD is also applicable to 3D model fitting of humans and hands, for which we show a significant improvement to the SOTA with a much simpler and faster method.
arXiv Detail & Related papers (2022-05-12T17:55:51Z)
- Pixel2Mesh++: 3D Mesh Generation and Refinement from Multi-View Images [82.32776379815712]
We study the problem of shape generation in 3D mesh representation from a small number of color images with or without camera poses.
We further improve shape quality by leveraging cross-view information with a graph convolution network.
Our model is robust to the quality of the initial mesh and the error of camera pose, and can be combined with a differentiable function for test-time optimization.
arXiv Detail & Related papers (2022-04-21T03:42:31Z)
- Unsupervised Diffeomorphic Surface Registration and Non-Linear Modelling [4.761477900658674]
We propose a one-step registration model for 3D surfaces that internalises a lower-dimensional probabilistic deformation model (PDM).
The deformations are constrained to be diffeomorphic using an exponentiation layer.
The one-step registration model is benchmarked against iterative techniques, trading slightly lower shape-fit performance for higher compactness.
arXiv Detail & Related papers (2021-09-28T11:47:12Z)
- Locally Aware Piecewise Transformation Fields for 3D Human Mesh Registration [67.69257782645789]
We propose piecewise transformation fields that learn 3D translation vectors to map any query point in posed space to its corresponding position in rest-pose space.
We show that fitting parametric models with poses by our network results in much better registration quality, especially for extreme poses.
arXiv Detail & Related papers (2021-04-16T15:16:09Z)
This list is automatically generated from the titles and abstracts of the papers in this site.
This site does not guarantee the accuracy of this information and is not responsible for any consequences of its use.